CONFIDENTIAL - Emotion Processing UNIT III SDK Documentation Ver. 3.7.5 - 06/29/2022

Getting Started

Welcome

THE EMOTION PROCESSING UNIT III SDK

The MetaSoul EPU can synthesize emotional levels of individuals in real time, responding with one of twelve primary emotions: anger, fear, sadness, disgust, indifference, regret, surprise, anticipation, trust, confidence, desire and joy. It uses psychometric functions that shape and react without pre-programmed sets of inputs. The EPU III is the industry's first emotion synthesis engine: it delivers high-performance machine emotion awareness, and the EPU III family of eMCUs is transforming the capabilities of robots and AI. MetaSoul has completed production of the first EPU (Emotion Processing Unit), a patent-pending technology that creates a synthesized emotional response in machines.

Benefits

  • The EPU III Evaluation Kit provides an evaluation platform for the Emotion Processing UNIT III. The evaluation board is a vehicle to test and evaluate the emotion synthesis functionality of the EPU III. The kit gives developers immediate access to its advanced emotion processing engine, while allowing them to develop proprietary capabilities that provide true differentiation.
  • The EPU USB dongle (gold surface) and Cloud EPU are built around the revolutionary EPU III and use the same MetaSoul EPU III™ computing core functionalities for emotion synthesis. This gives you a fully functional EPG® platform for quickly developing and deploying emotion capabilities for AI, robots, consumer electronics, and more.
  • MetaSoul delivers the entire BSP and software stack. With a complete suite of development, code sample, EPG machine learning cloud computing, and profiling tools, MetaSoul gives you the ideal solution for helping shape the future of emotional awareness in AI and robots.

Features

  • THE EPU3 CAN PROCESS UP TO 8 PERSONAS IN REAL-TIME
  • THE EPU3 SUPPORTS UP TO 3 LANGUAGES
  • THE EPU3 EMOTION SYNTHESIS ENGINE OUTPUTS UP TO 64 TRILLION POSSIBLE EMOTIONAL STATES EVERY 1/10th OF A SECOND
  • THE EPU3 ENGINE FOR EMOTION REASONING SYNTHESIZES, FOR THE MACHINE ITSELF: Anger, Fear, Sadness, Disgust, Indifference, Regret, Surprise, Inattention, Trust, Confidence, Desire, Joy, Frustration, Satisfaction, Pain and Pleasure
  • THE EPU3 IDENTIFIES IN SEMANTICS: Anger, Fear, Sadness, Disgust, Indifference, Regret, Surprise, Inattention, Trust, Confidence, Desire, and Joy.
  • THE EPU3 SDK DETECTS IN REAL-TIME 5 EMOTIONAL STATES FROM THE TONE OF VOICE: Happy, Sad, Anger, Fear, Neutrality
  • THE EPU3 SDK DETECTS IN REAL-TIME 6 EMOTIONAL STATES (Happy, Sad, Anger, Surprised, Fear, Disgust) PLUS GENDER, AGE AND FACE RECOGNITION FROM FACIAL ANALYSIS.
  • THE EPU RETURNS A BUFFER OF 124 BYTES IN A PACKET (MULTIDIMENSIONAL ARRAY OF DATA)
  • 12 PRIMARY HUMAN EMOTION LEVELS. AMPLITUDE: 0-100 (RESOLUTION 9 SUB CHANNELS PER EMOTION)
  • 12 PRIMARY HUMAN FEELING LEVELS. AMPLITUDE: 0-100 (RESOLUTION 1 CHANNEL)
  • PULSE SPEED. RANGE: 0-100 (RESOLUTION 1 CHANNEL)
  • PAIN / PLEASURE LEVELS. AMPLITUDE: 0-100 (RESOLUTION 1 CHANNEL)
  • FRUSTRATION / SATISFACTION LEVELS. AMPLITUDE: 0-100 (RESOLUTION 1 CHANNEL)
  • 40 GPIOS

License

This SDK is for personal and commercial projects. Limitations on transfer and resale of data: your limited license does not allow you to transfer or resell any data from the Emotion Processing Unit buffer, for example (but not only) in a client-server configuration.





    Quick Start - (5 minutes) SDK with EPU Dongle

  • Physically connect the EPU III USB evaluation board to your system with the Micro USB cable provided.
  • Download the SDK from the top right side of this page (download icon), then extract it. Inside you will find sub-directories for your operating system, e.g. Windows, Linux x86, Linux ARM (Raspberry Pi), Android, etc.
  • Find the EPU_II_SDK binary in the sub-directory and launch it, or copy the installation file (for example a .cab on Android) to your device and install it. (Check the README file for special instructions.)
  • Launch the EPU_II_SDK binary file and make sure you are connected to the web (corporate firewalls must authorise inbound and outbound traffic to 45.55.153.18).
  • The EPU III SDK application window appears.
  • Insert the Secret Activation Code that was emailed to you with your SDK and tracking number, then close and restart the SDK.
  • Select the persona ID you want to activate or communicate with by selecting its index (0-7) in the select box.
  • Connect the QT SDK to your EPU evaluation board by pressing the Connect button (shown below in the red square).

    Quick Start - (5 minutes) Cloud EPU

  • Download the SDK from the top right side of this page (download icon), then extract it. Inside you will find sub-directories for your operating system, e.g. Windows, Linux x86, Linux ARM (Raspberry Pi), Android, etc.
  • Launch the EPU3 SDK and make sure you are connected to the web.
  • The EPU III SDK application window appears. Note that with the '-u' command line option, the application starts without the user interface.
  • Select whether to use the SDK via the cloud or in local mode. (Local is an edge mode: EPU3 via a local USB connection.)


  • Insert the Secret Code generated for you in your EPU Cloud account.
  • Make sure your account has active days left for the secret.
  • Connect the QT SDK to your EPU in Edge or Cloud mode by pressing the Connect button (shown below in the red square).



    • Icon Tray

    • Uncheck Read only before you try to type text

    • Core selection: the EPU3 has 8 cores (0-7), which means you can process 8 independent personas in real-time.

    • Language selection

    • You can select or change the default language at any time. The language is set per core (0-7).


    3D Graph Rotation

    • Rotate the 3D graph by right-clicking and dragging the mouse

    Realtime Appraisal

    • Type any text in the Appraisal area (max 100 words), select USER or ROBOT for the direction of the conversation, then press the SEND button.

    Remote Control - TCPIP server

    • Connect to the EPU SDK by TCP/IP on port 2424 to initialise it and send text for real-time appraisal from any application.
    • By default the server is listening (auto-open)

    EPU Sensitivity to emotion inputs

    • Select "user" or "robot", then adjust the sensitivity level (default 100%, range 70%-130%)

    Emotion persistence in the EPU

    • Select "user" or "robot", then adjust the persistence level (default 100%, range 10%-200%)

    Turn on/off the LEDs on the board

    • Turn the LEDs on your board on or off (default: on)

    Pause / Resume/ Reset

    • Game functions to pause and resume the EPU, and even reset it after a game over.

    Pleasure-Pain / Satisfaction-Frustration

    • Real-time dilation of the pupil (0-100) based on Pain and Pleasure
    • Real-time level of Pleasure and Pain
      • Range 0-100 / Default 50
      • Alleviation of Pain is Pleasure
      • 0 is maximum Pleasure, 100 is maximum Pain, and 50 is neither
    • Real-time level of Satisfaction and Frustration
      • Range 0-100 / Default 50
      • Alleviation of Frustration is Satisfaction
      • 0 is maximum Satisfaction, 100 is maximum Frustration, and 50 is neither
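The 0-100 convention above can be split into separate intensities with a small helper (an illustrative sketch, not an SDK function; the same midpoint logic appears in the markup-data parsing sample later in this document):

```python
def split_pain_pleasure(level):
    """Map a 0-100 pain/pleasure channel value to separate
    (pleasure, pain) intensities, following the documented convention:
    0 = maximum pleasure, 100 = maximum pain, 50 = neither."""
    if not 0 <= level <= 100:
        raise ValueError("level must be in 0-100")
    if level < 50:
        return 50 - level, 0   # pleasure side of the scale
    return 0, level - 50       # pain side of the scale (50 -> neither)
```

The same mapping applies to the Satisfaction/Frustration channel.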

    Personality - Emotion Profile Graph

    • Each EPU can develop an emotional personality as a result of daily interaction with the user; this is computed on our cloud service and is optional. The emotional machine learning algorithms on our server update the Emotion Profile Graph (EPG) daily, resulting in a slight change of personality of the EPU. Just like human emotional development, the EPG has a learning curve that decreases over time and eventually becomes almost non-existent unless a high amount of a particular emotion is experienced. The early experience of emotions is pivotal to long-term emotional development.
    • Use Write to force a change in the personality

    Create custom emotional waves

    • You can customise any emotional wave and send it to the EPU
    • 1) Select the emotion
    • 2) Set the amplitude (0-100)
    • 3) Set the duration of the wave in seconds (0-65535)
    • 4) Set the origin ID (1-65535) of your choice for the event, to activate the inhibitory postsynaptic potential (IPSP)
    • 5) Set the apex in seconds (0-99): the time to rise
    • 6) Set the neural oscillation curve & homeostatic activity (1-255)
    • Get the level value for one emotion at any time (higher channel). No IPSP for origin IDs above 50000.

    Objective Appraisal

    • You can force the EPU to appraise objectively, so that its current emotional state won't influence the result of its next appraisal. This should help your AI or robot sense a concept objectively even if it feels sad at that time.
    • Select the box

    Symbolic Reinforcement Learning

    • Each EPU can memorise the emotional appraisal of up to 5242 symbolic words, for example "Kiss". Wikipedia is a well-known source of knowledge that can be used to extract the meaning of a word. The meaning can be sent together with the word to be learned by the EPU. The EPU will then appraise the text string representing the meaning of the word and save only the word and its Emotion Profile Graph (EPG) in the memory of the emotion chip. The EPU will then be able to sense the word "Kiss" in future appraisals.

    • Enter the text and the word to be learned, like "Kiss", then press "Start Learning"

    • You can select "append" to add the new appraisal to an existing word instead of replacing it.

  • Then press "Stop Learning" to create your new Kiss concept in the EPU.

    Erase all Reinforcement Learning

    • Erase ALL learned symbols from the EPU.

    Check if a word has been learned by the EPU

    • Each EPU can memorise the emotional appraisal of up to 5242 symbolic words. The "Check Word" button asks the SDK to check whether a word has been learned in your SDK directory.

    Voice Tone Analysis

    • First select the right tab by clicking Tone of Voice, then select the audio input and click the Start button. After a few seconds you should see the audio stream analysis for the selected sample duration.

    Send Tone Analysis stimuli to EPU

    • You can then select the level of the tone emotion stimuli you want to send to the EPU for emotion synthesis in response

    ASR - Automatic Speech Recognition

    • First enter your secret key provided by the Microsoft Azure Speech to Text cloud service, then insert the location of your server, for example westus. Then click the Start button. The audio input selected for Tone Analysis will also be used for the ASR.

    Emotion Face Analysis

    • First select the right tab by clicking Face Emotion, then select the camera input and click the Start button. After a few seconds you should see the image and the emotion detected. Check Gender, Age and Recognition if you want to add gender and age to the face recognition.

    Create a user profile for recognition

    • If a new face is detected, the Create User button becomes available: type in a name for that user and click the Create User button (stay still for 2 seconds). After a few seconds you should see the name of the user replace "Unknown". You can select an existing profile and remove it from the learned profile list.

    Ask the EPU to appraise (remember emotionally) a person if recognised

    • High cognitive function. When a face is recognized, the username will be sent to the EPU to be learned. All user interaction will be appraised until the face is lost. Depending on which functions are activated in the SDK, what you say, how you say it, and how you look when you say it can be taken into consideration and memorized by the EPU. The next time the user appears, their username will be sent to the EPU, and the EPU will react emotionally to what it has learned from this user. The appraisal will resume automatically if the user is recognised again.

    Send Face Results to EPU

    • You can then select the level of the face emotion stimuli you want to send to the EPU for emotion synthesis in response.

    Unicode for Asian EPU

    • EPUs exist in different languages, like Chinese or Japanese; in that case the Unicode format will be selected automatically.

    Chat Bot with GPT-3 (Optional with special activation Key)

    • First input your GPT-3 key in API Key, then press the Start button. Once connected, you can start chatting with the bot by typing text right after the Human: prompt, then pressing the Send button.

    Azure Emotion Neural TTS (Optional with special activation Key)

    • Allows you to send text to the EPU in order to get back the emotions associated with each word; the text with emotions is then sent to the TTS engine. Checking Single Core for Voice Only reserves a separate EPU core. Checking Default EPU Instance for TTS uses the existing active EPU instance for TTS. The parameters in the Pain/Frustration and Pleasure/Satisfaction groups can be used to map the emotions sent by the EPU to the format recognized by the TTS engine. Setting the Word Delay slider to 0 maps the emotions to words in the default way; a value of 100 maps the first non-zero emotion to the first word.


    Python Sample code

    Convention: items in square brackets are optional. Each core has a unique EPUID; replace [EPUID] with the EPUID of the persona you are interacting with. Note that when the EPUID is sent with a command, the reply also contains the EPUID as the prefix of the actual reply code. Each EPU3 has up to 8 personas or cores.
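As an illustration of this convention, the hypothetical helpers below build a prefixed command and strip the EPUID prefix from a reply (they are not part of the SDK, just a sketch of the framing used in the samples that follow):

```python
def epu_cmd(command, epu_id=None):
    """Format a command for the EPU SDK TCP server; when epu_id is
    given it is prepended, so the matching reply carries it as a prefix."""
    prefix = '' if epu_id is None else str(epu_id)
    return '{}@>{}\r\n'.format(prefix, command).encode('utf-8')

def strip_epu_prefix(reply, epu_id):
    """Return the bare reply code with the EPUID prefix removed."""
    epu_id = str(epu_id)
    return reply[len(epu_id):] if reply.startswith(epu_id) else reply
```

For example, `epu_cmd('pause', 3)` produces `b'3@>pause\r\n'`, and `strip_epu_prefix('3EPU_ON', 3)` returns `'EPU_ON'`.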


    • Create a TCP/IP Client to connect to the EPU SDK.
    import socket
    import time
    import sys
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(('127.0.0.1', 2424))
    except socket.error as e:
        print("Unable to connect to EPU. " + str(e))
        sys.exit(-1)
           
    • Init the EPU via TCP/IP
    # s - socket returned by first example
    reply = ''
    while 'EPU_ON' not in reply:
        cmd = '[EPUID]@>EPUInit [local or cloud] [secret]\r\n'
        s.send(str.encode(cmd))
        reply = str(s.recv(2048), 'utf-8').strip()
        time.sleep(3)
    Note that the 'EPUInit' command can optionally select the EPU type (cloud or local) and send the secret. If the EPU type is not sent, the cloud EPU is selected by default. If the request is successful the reply is [EPUID]EPU_ON.
    Note: in cloud mode, if an active instance is unexpectedly disconnected, [EPU ID]EPU_OFF is sent. In this case, the SDK will automatically reconnect and [EPU ID]EPU_ON is sent. For this reason, your code should be ready to handle such events.
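One way to classify these asynchronous status replies in client code is sketched below (handle_epu_status is a hypothetical helper based on the reply format described above, not an SDK function):

```python
def handle_epu_status(reply, epu_id):
    """Classify a reply line: 'disconnected' for [EPUID]EPU_OFF,
    'connected' for [EPUID]EPU_ON, and 'data' for anything else."""
    epu_id = str(epu_id)
    if reply == epu_id + 'EPU_OFF':
        return 'disconnected'  # the SDK will reconnect automatically
    if reply == epu_id + 'EPU_ON':
        return 'connected'
    return 'data'  # an ordinary reply, e.g. buffer contents
```

A receive loop can call this on each line and, for example, pause sending appraisals while the status is 'disconnected'.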
    • Close the EPU via TCP/IP
    cmd = '[EPUID]@>EPUClose\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    Note: sending the EPU ID with the close command is optional; without explicitly specifying the EPU ID, the current active instance will be closed.
    • List active EPU instances
      cmd = '[EPUID]@>EPUList\r\n'
      s.send(str.encode(cmd))
      reply = str(s.recv(2048), 'utf-8').strip()
    
    If there is at least one active instance the output is
    [EPUID]CMD_OK [comma separated list of EPU IDs]
    
    where [EPUID] is the EPU ID of the current instance, and the list of EPU IDs that follows CMD_OK does not contain the current EPU ID. If there is no active instance, the reply is CMD_ERR.
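A reply in this format can be parsed as follows (an illustrative sketch; parse_epu_list is not an SDK function):

```python
def parse_epu_list(reply):
    """Parse an EPUList reply of the form '[EPUID]CMD_OK id1,id2,...'.
    Returns (current_epu_id, other_ids), or None for CMD_ERR."""
    if 'CMD_OK' not in reply:
        return None  # no active instance
    head, _, tail = reply.partition('CMD_OK')
    others = [p.strip() for p in tail.split(',') if p.strip()]
    return head.strip(), others
```

For example, a reply of `'0CMD_OK 1,2'` parses to the current ID `'0'` and the other active IDs `['1', '2']`.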
    • Set emotion overall sensitivity (optional - default 100% - range 70%-130%)
    # sensitivity for robot input
    sens = 85
    cmd = '[EPUID]@>robot\r\n'
    s.send(str.encode(cmd))
    cmd = '[EPUID]@>sens %d\r\n' % sens
    s.send(str.encode(cmd))
    
    # sensitivity for user input
    sens = 110
    cmd = '[EPUID]@>user\r\n'
    s.send(str.encode(cmd))
    cmd = '[EPUID]@>sens %d\r\n' % sens
    s.send(str.encode(cmd))
        
    • Set emotion overall persistence (optional - default 100% - range 10%-200%)
    # persistence for robot input; the variable is named 'persistence'
    # to avoid shadowing the 'time' module imported in the first example
    persistence = 85
    cmd = '[EPUID]@>robot\r\n'
    s.send(str.encode(cmd))
    cmd = '[EPUID]@>time %d\r\n' % persistence
    s.send(str.encode(cmd))
    
    # persistence for user input
    persistence = 110
    cmd = '[EPUID]@>user\r\n'
    s.send(str.encode(cmd))
    cmd = '[EPUID]@>time %d\r\n' % persistence
    s.send(str.encode(cmd))
        
    • Objective Appraisal
    # set Objective on for Robot before you send a string to the EPU.
    cmd = '[EPUID]@>robot\r\n'
    s.send(str.encode(cmd))
    cmd = '[EPUID]@>objective_on\r\n'
    s.send(str.encode(cmd))
    
    # set Objective on for User before you send a unicode string to the EPU.
    cmd = '[EPUID]@>user\r\n'
    s.send(str.encode(cmd))
    cmd = '[EPUID]@>objective_on\r\n'
    s.send(str.encode(cmd))
        
    • Symbolic Reinforcement Learning
    # first ask the EPU to do an appraisal on a specific +message+ then ask the EPU to associate that result to a specific +word+
    message = 'I like football'
    cmd = '[EPUID]@>robot '+message+'\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    word = 'football'
    cmd = '[EPUID]@>writeEpuWord '+word+'\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
        
    • Erase All Reinforcement Learning
    # erase all previous words learned
    cmd = '[EPUID]@>lexic_erase\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
        
    • Pause EPU
    # Pause the EPU; useful to put emotions in stasis
    cmd = '[EPUID]@>pause\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
        
    • Resume EPU
    # resume from the pause state
    cmd = '[EPUID]@>resume\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
        
    • Reset EPU
    # reset the epu
    cmd = '[EPUID]@>reset\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
        
    • Send User's message for Appraisal
    # s - socket returned by first example
    message = "You are smart"
    cmd = '[EPUID]@>user '+message+'\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    • Send Robot's message or AI for Appraisal
    message = "A human-looking indestructible cyborg is sent from 2029 to 1984 to assassinate a waitress, whose unborn son will lead humanity in a war against the machines"
    cmd = '[EPUID]@>robot '+message+'\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    • Ask for the BUFFER STRUCTURE MDAD (Emo-Matrix)
    def get_channels():
        cmd = '[EPUID]@>buffer\r\n'
        s.send(str.encode(cmd))
        buffer = str(s.recv(2048), 'utf-8').strip()
        channelEXCITED    = int(buffer[4])
        channelCONFIDENT  = int(buffer[14])
        channelHAPPY      = int(buffer[24])
        channelTRUST      = int(buffer[34])
        channelDESIRE     = int(buffer[44])
        channelFEAR       = int(buffer[54])
        channelSURPRISE   = int(buffer[64])
        channelINATTENTION = int(buffer[74])
        channelSAD        = int(buffer[84])
        channelREGRET     = int(buffer[94])
        channelDISGUST    = int(buffer[104])
        channelANGER      = int(buffer[114])
        channelPAINPLEASURE = int(buffer[122])
        
        channel_dict = {
            'channelEXCITED'     : channelEXCITED,
            'channelCONFIDENT'   : channelCONFIDENT,
            'channelHAPPY'       : channelHAPPY,
            'channelTRUST'       : channelTRUST,
            'channelDESIRE'      : channelDESIRE,
            'channelFEAR'        : channelFEAR,
            'channelSURPRISE'    : channelSURPRISE,
            'channelINATTENTION' : channelINATTENTION,
            'channelSAD'         : channelSAD,
            'channelREGRET'      : channelREGRET,
            'channelDISGUST'     : channelDISGUST,
            'channelANGER'       : channelANGER,
            'channelPAINPLEASURE': channelPAINPLEASURE
        }
    
        return channel_dict
      
    • Example of BUFFER STRUCTURE MDAD Interpretation
    channel_dict = get_channels()  # described in previous example
    
    emo_ranges = {}
    emo_ranges['channelCONFIDENT'] = {}
    emo_ranges['channelCONFIDENT'].update({intensity: "confident" for intensity in range(0, 101)})
    emo_ranges['channelEXCITED'] = {}
    emo_ranges['channelEXCITED'].update({intensity: "interested," for intensity in range(0, 6)})
    emo_ranges['channelEXCITED'].update({intensity: "a bit excited," for intensity in range(6, 21)})
    emo_ranges['channelEXCITED'].update({intensity: "very excited," for intensity in range(21, 101)})  # starts at 21 so no intensity is left unmapped
    emo_ranges['channelHAPPY'] = {}
    emo_ranges['channelHAPPY'].update({intensity: "not so bad" for intensity in range(0, 6)})
    emo_ranges['channelHAPPY'].update({intensity: "alright" for intensity in range(6, 11)})  # 6,7,8,9,10
    emo_ranges['channelHAPPY'].update({intensity: "not too bad" for intensity in range(11, 16)})
    emo_ranges['channelHAPPY'].update({intensity: "I'm ok" for intensity in range(16, 21)})
    emo_ranges['channelHAPPY'].update({intensity: "ok" for intensity in range(21, 26)})
    emo_ranges['channelHAPPY'].update({intensity: "fine" for intensity in range(26, 31)})
    emo_ranges['channelHAPPY'].update({intensity: "well" for intensity in range(31, 33)})
    emo_ranges['channelHAPPY'].update({intensity: "happy" for intensity in range(33, 34)})
    emo_ranges['channelHAPPY'].update({intensity: "good" for intensity in range(34, 35)})
    emo_ranges['channelHAPPY'].update({intensity: "pretty good" for intensity in range(35, 37)})
    emo_ranges['channelHAPPY'].update({intensity: "very happy" for intensity in range(37, 39)})
    emo_ranges['channelHAPPY'].update({intensity: "well" for intensity in range(39, 43)})
    emo_ranges['channelHAPPY'].update({intensity: "great" for intensity in range(43, 51)})
    emo_ranges['channelHAPPY'].update({intensity: "excellent" for intensity in range(51, 61)})
    emo_ranges['channelHAPPY'].update({intensity: "fabulous" for intensity in range(61, 101)})
    emo_ranges['channelDESIRE'] = {}
    emo_ranges['channelDESIRE'].update({intensity: "attracted" for intensity in range(0, 101)})
    emo_ranges['channelTRUST'] = {}
    emo_ranges['channelTRUST'].update({intensity: "trustful" for intensity in range(0, 101)})
    emo_ranges['channelFEAR'] = {}
    emo_ranges['channelFEAR'].update({intensity: "a bit uncomfortable" for intensity in range(0, 11)})
    emo_ranges['channelFEAR'].update({intensity: "a bit anxious" for intensity in range(11, 21)})
    emo_ranges['channelFEAR'].update({intensity: "scared" for intensity in range(21, 64)})
    emo_ranges['channelFEAR'].update({intensity: "terrorized" for intensity in range(64, 101)})
    emo_ranges['channelSURPRISE'] = {}
    emo_ranges['channelSURPRISE'].update({intensity: "intrigued" for intensity in range(0, 11)})
    emo_ranges['channelSURPRISE'].update({intensity: "surprised" for intensity in range(11, 101)})
    emo_ranges['channelINATTENTION'] = {}
    emo_ranges['channelINATTENTION'].update({intensity: "sleepy" for intensity in range(0, 11)})
    emo_ranges['channelINATTENTION'].update({intensity: "a bit funny" for intensity in range(11, 15)})
    emo_ranges['channelINATTENTION'].update({intensity: "a bit confused" for intensity in range(15, 20)})
    emo_ranges['channelINATTENTION'].update({intensity: "embarrassed" for intensity in range(20, 25)})
    emo_ranges['channelINATTENTION'].update({intensity: "a bit lost" for intensity in range(25, 33)})
    emo_ranges['channelSAD'] = {}
    emo_ranges['channelSAD'].update({intensity: "not too shabby" for intensity in range(0, 10)})
    emo_ranges['channelSAD'].update({intensity: "sad" for intensity in range(10, 20)})
    emo_ranges['channelSAD'].update({intensity: "tired" for intensity in range(20, 30)})
    emo_ranges['channelSAD'].update({intensity: "a bit unwell" for intensity in range(30, 35)})
    emo_ranges['channelSAD'].update({intensity: "not great" for intensity in range(35, 40)})
    emo_ranges['channelSAD'].update({intensity: "really sad" for intensity in range(40, 50)})
    emo_ranges['channelSAD'].update({intensity: "depressed" for intensity in range(50, 101)})
    emo_ranges['channelREGRET'] = {}
    emo_ranges['channelREGRET'].update({intensity: "a bit lost" for intensity in range(0, 36)})
    emo_ranges['channelREGRET'].update({intensity: "left out" for intensity in range(36, 101)})
    emo_ranges['channelDISGUST'] = {}
    emo_ranges['channelDISGUST'].update({intensity: "a bit sick" for intensity in range(0, 22)})
    emo_ranges['channelDISGUST'].update({intensity: "disgusted" for intensity in range(22, 101)})
    emo_ranges['channelANGER'] = {}
    emo_ranges['channelANGER'].update({intensity: "a tiny upset" for intensity in range(0, 5)})
    emo_ranges['channelANGER'].update({intensity: "a bit upset" for intensity in range(5, 10)})
    emo_ranges['channelANGER'].update({intensity: "a little cranky" for intensity in range(10, 20)})
    emo_ranges['channelANGER'].update({intensity: "in a bad mood" for intensity in range(20, 33)})
    emo_ranges['channelANGER'].update({intensity: "angry today" for intensity in range(33, 40)})
    emo_ranges['channelANGER'].update({intensity: "frustrated" for intensity in range(40, 101)})
    emo_ranges['channelPAINPLEASURE'] = {}
    emo_ranges['channelPAINPLEASURE'].update({intensity: "yes, I like it" for intensity in range(0, 50)})
    emo_ranges['channelPAINPLEASURE'].update({intensity: "I really don't know" for intensity in range(50, 51)})
    emo_ranges['channelPAINPLEASURE'].update({intensity: "no, I don't like it" for intensity in range(51, 101)})  # up to 101 so intensity 100 is mapped
    
    max_channel = max(channel_dict, key = channel_dict.get)     # strongest emotion
    
    emo_str = emo_ranges[max_channel][channel_dict[max_channel]]
        
    • A separate guide explains how to interpolate the incoming data for smooth animation.
    • Send custom emotional wave
    emotion     = 'happy'
    level       = 100
    duration    = 10
    origin_id   = 33
    apex        = 2
    curve       = 4
    
    emotions = ['excite','sure','happy','desire', 'trust', 'fear', 'surprise', 'inattention', 'sad', 'nostalgia', 'disgust', 'anger']
    
    if emotion in emotions:
        cmd = '[EPUID]@>wave %s,%d,%d,%d,%d,%d\r\n' % (emotion, level, duration, origin_id, apex, curve)
        s.send(str.encode(cmd))
        
    • Broadcast a message to all connected clients
    
    cmd = '[EPUID]@>transfer Message <@\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
        
    • Enable/disable tone of voice detection
    
    cmd = '@>tov_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    cmd = '@>tov_off\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    
    • Enable/disable face tracking
    
    cmd = '@>ft_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    cmd = '@>ft_off\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    
    • Add/remove user for face tracking
    
    cmd = '@>ft_user_add UserName\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    cmd = '@>ft_user_rm UserName\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    
    • Request the detected user by face tracking
    
    cmd = '@>detected_user\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    
    • Request supported languages
    
    cmd = '[EPUID]@>supported_lang\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    
    • Set language
    
    cmd = '[EPUID]@>set_lang index\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    
    where index is the language index from the list returned when requesting supported languages.
    • Initialize ASR
    cmd = '@>asr_init key region\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    where key is the registration key and region is the subscription region.
    • Select microphone input for ASR
    cmd = '@>asr_select_input index\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    where index is a positive value that represents the index of the microphone input. The command reply can be CMD_OK if the operation is successful or CMD_ERR if an error occurs or CMD_ERR_FORMAT if the command has the wrong format.
    • Enable/disable ASR
    
    cmd = '@>asr_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    cmd = '@>asr_off\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    • Start/stop learning
    # start learning
    cmd = '[EPUID]@>start_learning\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    # stop learning
    cmd = '[EPUID]@>stop_learning\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    • Enable/disable face tracking appraisal
    cmd = '@>ft_appraisal_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    cmd = '@>ft_appraisal_off\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    • Start/stop MDAD buffer copy in shared memory for a given instance
    cmd = '{}@>shmem_start\r\n'.format(epu_id)
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    cmd = '{}@>shmem_stop\r\n'.format(epu_id)
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    where epu_id contains the EPU ID of an active instance. The command reply can be CMD_OK if the operation is successful or CMD_ERR if an error occurs. If an instance is closed the copy in the shared memory is also stopped.
    The name of the shared memory is EPU_MDAD and the SDK must be run first before trying to read the named shared memory from another process (see C# or C++ examples below). The copy into the shared memory can be started or stopped in the SDK without restrictions.
    There are two additional shared memories for reading the GPT-3 result and viseme information from the Azure Neural Text-To-Speech server. Their names are EPU_GPT and EPU_VISEME, respectively. The GPT result is a string of maximum size 10 KB, and the viseme information is a tuple of two signed 32-bit integers of the form (audioOffsetMs, visemeId). The copy into these two additional shared memories can be started or stopped depending on the SDK configuration.
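On platforms where the named shared memory is accessible through Python's standard library, reading a snapshot of the MDAD buffer might look like the sketch below. This is an assumption-laden illustration, not an official sample: it presumes the SDK is running, shmem_start has been issued, and the name EPU_MDAD resolves via multiprocessing.shared_memory on your OS.

```python
from multiprocessing import shared_memory

def read_mdad_buffer(name='EPU_MDAD', size=124):
    """Attach to the named shared memory written by the SDK and return
    a snapshot of the 124-byte MDAD buffer as bytes."""
    shm = shared_memory.SharedMemory(name=name)  # raises if not present
    try:
        return bytes(shm.buf[:size])
    finally:
        shm.close()  # detach only; the SDK owns the segment
```

The returned bytes can then be interpreted with the channel offsets shown in the buffer example above.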
    • Receive emotion markup data
    def parse_markup_data(markup_data):
        # the payload is a hex string; decode it to raw bytes
        arr = bytes.fromhex(markup_data)
        # indexing a bytes object yields unsigned values (0-255);
        # reinterpret as signed 8-bit so negative intensities survive
        def signed(b):
            return b - 256 if b > 127 else b
        out = dict()
        out['word_index'] = int(arr[0])
        # bytes 1-6 carry signed intensities: positive values map to
        # "excite", "sure", "happy", "trust", "desire", "fear";
        # negative values map to the opposing set
        # "surprise", "inattention", "sad", "nostalgia", "disgust", "anger"
        emotions = ("excite", "sure", "happy", "trust", "desire", "fear")
        for i in range(0, 6):
            emo = signed(arr[i + 1])
            out[emotions[i]] = emo if (emo >= 0) else 0
        emotions = ("surprise", "inattention", "sad", "nostalgia", "disgust", "anger")
        for i in range(0, 6):
            emo = signed(arr[i + 1])
            out[emotions[i]] = 0 if (emo >= 0) else -emo
        # bytes 7-8 are centered on 50: values below 50 indicate
        # satisfaction/pleasure, values above 50 frustration/pain
        emotions = ("satisfaction", "frustration", "pleasure", "pain")
        for i in range(0, 2):
            emo = int(arr[i + 7])
            if emo < 50:
                out[emotions[2 * i]] = 50 - emo
                out[emotions[2 * i + 1]] = 0
            else:
                out[emotions[2 * i]] = 0
                out[emotions[2 * i + 1]] = emo - 50
        return out
    
    
    while True:
      reply = str(s.recv(2048), 'utf-8').strip()
      tok = reply.split('\n')
      for line in tok:
          if line.startswith('CMD_XML_ML'):
              line = line.replace('CMD_XML_ML', '').strip()
              res = parse_markup_data(line)
              print(res)
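    TCP is a byte stream, so a single recv() call can return a partial line or several lines at once; the loop above can mis-split a CMD_XML_ML message that straddles two reads. A minimal line-buffering helper (the function name is ours, not part of the SDK) keeps any trailing partial line until the rest arrives:

```python
def iter_lines(recv, bufsize=2048):
    """Yield complete newline-terminated lines from a byte-stream
    receive function, buffering any trailing partial line."""
    pending = b''
    while True:
        chunk = recv(bufsize)
        if not chunk:  # connection closed by the peer
            if pending:
                yield pending.decode('utf-8').strip()
            return
        pending += chunk
        while b'\n' in pending:
            line, pending = pending.split(b'\n', 1)
            yield line.decode('utf-8').strip()
```

    With a connected socket s, the receive loop becomes `for line in iter_lines(s.recv): ...`, dispatching lines that start with CMD_XML_ML exactly as above.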
    
    • Synthesize text using text to speech and play the sound on the default speaker
    cmd = '@>synth Message\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply can be CMD_OK if the operation is successful, CMD_ERR if an error occurs while the text is synthesized, or CMD_ERR_FORMAT if the command does not have the correct format.
    • Synthesize text using text to speech with parameters and play the sound on default speaker
    cmd = '@>synthParams pitch rate volume Message\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command parameters are pitch (an integer between 0 and 60), rate (a floating-point number between 0 and 5), volume (an integer between 0 and 100), and the message to convert. The command reply can be CMD_OK if the operation is successful, CMD_ERR if an error occurs while the text is synthesized, or CMD_ERR_FORMAT if the command does not have the correct format.
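    The documented parameter ranges can be enforced client-side before the command is sent. The helper below is our own illustration (its name is not part of the SDK); it builds the command string and rejects out-of-range values:

```python
def build_synth_params_cmd(pitch, rate, volume, message):
    """Build a synthParams command string, validating the documented
    ranges: pitch 0-60 (int), rate 0-5 (float), volume 0-100 (int)."""
    if not (isinstance(pitch, int) and 0 <= pitch <= 60):
        raise ValueError('pitch must be an integer between 0 and 60')
    if not (0 <= rate <= 5):
        raise ValueError('rate must be between 0 and 5')
    if not (isinstance(volume, int) and 0 <= volume <= 100):
        raise ValueError('volume must be an integer between 0 and 100')
    return '@>synthParams {} {} {} {}\r\n'.format(pitch, rate, volume, message)
```

    For example, `s.send(str.encode(build_synth_params_cmd(30, 1.0, 80, 'Hello')))` sends a well-formed command and avoids a CMD_ERR_FORMAT round trip.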
    • Synthesize text using text to speech and save the sound as wav file
    cmd = '@>synthToWav Message\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply can be 'CMD_OK file_name' if the operation is successful, CMD_ERR if an error occurs while the text is synthesized, or CMD_ERR_FORMAT if the command does not have the correct format. The file can be downloaded using the URL https://ip_address:8080/download/file_name.
    • Synthesize text using text to speech with parameters and save the sound as wav file
    cmd = '@>synthToWavParams pitch rate volume Message\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command parameters are pitch (an integer between 0 and 60), rate (a floating-point number between 0 and 5), volume (an integer between 0 and 100), and the message to convert. The command reply can be 'CMD_OK file_name' if the operation is successful, CMD_ERR if an error occurs while the text is synthesized, or CMD_ERR_FORMAT if the command does not have the correct format. The file can be downloaded using the URL https://ip_address:8080/download/file_name.
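    A 'CMD_OK file_name' reply from either wav command can be turned into the documented download URL directly. A minimal sketch (the helper name is ours; the port follows the URL shown above):

```python
def wav_download_url(reply, ip_address, port=8080):
    """Turn a 'CMD_OK file_name' reply into the documented download
    URL, or return None if the reply signals an error."""
    parts = reply.strip().split(maxsplit=1)
    if len(parts) != 2 or parts[0] != 'CMD_OK':
        return None
    return 'https://{}:{}/download/{}'.format(ip_address, port, parts[1])
```

    Note that the server may present a self-signed certificate, in which case the HTTPS client fetching the file may need its verification settings adjusted.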
    • Start the chat bot
    cmd = '@>chat_bot_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply can be CMD_OK if the operation is successful or CMD_ERR if an error occurs.
    • Send message to the chat bot
    cmd = '@>chat_bot_send Message\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply can be CMD_OK if the operation is successful, CMD_ERR if an error occurs, or CMD_ERR_FORMAT if the command does not have the correct format.
    • Enable single core for voice
    cmd = '@>single_core_for_voice_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply is CMD_OK.
    • Disable single core for voice
    cmd = '@>single_core_for_voice_off\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply is CMD_OK.
    • Enable default instance for TTS
    cmd = '@>default_instance_for_tts_on\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply is CMD_OK.
    • Disable default instance for TTS
    cmd = '@>default_instance_for_tts_off\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply is CMD_OK.
    • Set voice name and style for TTS
    cmd = '@>set_voice name style\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    where name is the voice name and style is the optional voice style. The command reply can be CMD_OK if the operation is successful or CMD_ERR_FORMAT if the command does not have the correct format.
    • Set word delay for emotion neural TTS
    cmd = '@>word_delay delay\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    where delay is an integer value between 0 and 100. The command reply can be CMD_OK if the operation is successful or CMD_ERR_FORMAT if the command does not have the correct format.
    • Send a message for emotion neural TTS
    cmd = '@>emotion_neural_send Message\r\n'
    s.send(str.encode(cmd))
    reply = str(s.recv(2048), 'utf-8').strip()
    
    The command reply can be CMD_OK if the operation is successful or CMD_ERR_FORMAT if the command does not have the correct format.
    Note: When using ASR, the result is broadcast to all connected clients over TCP sockets. The message format is:
    <asr>Message</asr>
    
    Similarly, the GPT-3 result is broadcast using the format:
    <gpt>Message</gpt>
    
    When using TTS from Microsoft Azure, viseme events are broadcast using the format:
    <viseme>audioOffsetMs,visemeId</viseme>
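    The three broadcast formats above can be demultiplexed with a small parser. This sketch (function name ours) returns a (kind, payload) pair and decodes the viseme payload into its integer pair:

```python
import re

_BCAST = re.compile(r'<(asr|gpt|viseme)>(.*?)</\1>', re.DOTALL)

def parse_broadcast(line):
    """Return (kind, payload) for an <asr>/<gpt>/<viseme> broadcast
    line, or None if the line is not a broadcast message.
    Viseme payloads are decoded to (audioOffsetMs, visemeId)."""
    m = _BCAST.search(line)
    if m is None:
        return None
    kind, payload = m.group(1), m.group(2)
    if kind == 'viseme':
        offset_ms, viseme_id = payload.split(',')
        return kind, (int(offset_ms), int(viseme_id))
    return kind, payload
```

    Command replies such as CMD_OK fall through and return None, so the same receive loop can handle both replies and broadcasts.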
    

    This documentation is provided by MetaSoul Inc.

    © Copyright MetaSoul Inc. All Rights Reserved.