Category: Uncategorized

Emoshape’s Emotion Synthesis Improves Customer Experience by 32%

Survey 08/15/2021

We surveyed 204 randomly selected adult consumers from the U.S. during August 11-12, 2021. Survey respondents were recruited from a large national survey panel. The survey availability was posted to the panel, and interested respondents were able to take the survey. Panel members were told that the survey would involve listening to a voice recording and then answering questions about it. The survey was conducted as a blind split test: respondents were randomly assigned to listen to one of two voice recordings, then asked to score various qualities of the voice they heard and how appealing the voice would be in a number of different applications. Possible scores ranged from 0 to 10.

Voice A: Normal TTS from Microsoft Azure:

Voice B: The same TTS controlled with our Emotion Synthesis Tech – EMS-AZUR:

Raw Data Set Results:

After Cleaning The Data Set:

Methodology: Individual responses that were outliers or self-contradictory were excluded from the data – e.g., respondents who scored everything as 10s or 0s, or who scored the voice as both highly pleasant and highly annoying. These types of responses indicate that the respondent was not giving careful consideration to the task at hand.
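As an illustration only, a minimal pandas sketch of these exclusion rules might look like the following; the column names and the "highly pleasant / highly annoying" threshold are assumptions, since the actual survey fields are not published here.

```python
import pandas as pd

# Hypothetical column names; the actual survey fields are not published here.
SCORE_COLS = ["pleasant", "annoying", "natural", "trustworthy", "appealing"]

def clean_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Drop straight-lining and self-contradictory respondents."""
    scores = df[SCORE_COLS]

    # Straight-liners: every answer is 0 or every answer is 10.
    straight_line = scores.eq(0).all(axis=1) | scores.eq(10).all(axis=1)

    # Self-contradictory: the voice rated both highly pleasant and highly annoying.
    contradictory = (df["pleasant"] >= 8) & (df["annoying"] >= 8)

    return df[~(straight_line | contradictory)]
```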

Results: a 32% improvement in the caregiver environment and a 22% higher satisfaction in general when Emoshape controls Microsoft Neural TTS in real time.

Building an AI That Feels

AI systems with emotional intelligence could learn faster and be more helpful

Click here to read the full article on IEEE Spectrum.

AI systems that can predict and respond to human emotions are one thing, but what if an AI system could actually experience something akin to human emotions? If an agent was motivated by fear, curiosity, or delight, how would that change the technology and its capabilities? To explore this idea, we trained agents that had the basic emotional drives of fear and happy curiosity.

With this work, we’re trying to address a few problems in a field of AI called reinforcement learning, in which an AI agent learns how to do a task by relentless trial and error. Over millions of attempts, the agent figures out the best actions and strategies to use, and if it successfully completes its mission, it earns a reward. Reinforcement learning has been used to train AI agents to beat humans at the board game Go, the video game StarCraft II, and a type of poker known as Texas Hold’em.

While this type of machine learning works well with games, where winning offers a clear reward, it’s harder to apply in the real world. Consider the challenge of training a self-driving car, for example. If the reward is getting safely to the destination, the AI will spend a lot of time crashing into things as it tries different strategies, and will only rarely succeed. That’s the problem of sparse external rewards. It might also take a while for the AI to figure out which specific actions are most important—is it stopping for a red light or speeding up on an empty street? Because the reward comes only at the end of a long sequence of actions, researchers call this the credit-assignment problem.

Now think about how a human behaves while driving. Reaching the destination safely is still the goal, but the person gets a lot of feedback along the way. In a stressful situation, such as speeding down the highway during a rainstorm, the person might feel his heart thumping faster in his chest as adrenaline and cortisol course through his body. These changes are part of the person’s fight-or-flight response, which influences decision making. The driver doesn’t have to actually crash into something to feel the difference between a safe maneuver and a risky move. And when he exits the highway and his pulse slows, there’s a clear correlation between the event and the response.

We wanted to capture those correlations and create an AI agent that in some sense experiences fear. So we asked people to steer a car through a maze in a simulated environment, measured their physiological responses in both calm and stressful moments, then used that data to train an AI driving agent. We programmed the agent to receive an extrinsic reward for exploring a good percentage of the maze, and also an intrinsic reward for minimizing the emotional state associated with dangerous situations.

We found that combining these two rewards created agents that learned much faster than one that received only the typical extrinsic reward. These agents also crashed less often. What we found particularly interesting, though, is that an agent motivated primarily by the intrinsic reward didn’t perform very well: If we dialed down the external reward, the agent became so risk averse that it didn’t try very hard to accomplish its objective.
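As a rough illustration of the reward structure described above, the sketch below blends an extrinsic task reward with a fear-like intrinsic term; the arousal predictor, the weights, and the interface are illustrative assumptions rather than the setup the researchers actually used.

```python
import numpy as np

def combined_reward(extrinsic_reward: float,
                    observation: np.ndarray,
                    predict_arousal,
                    w_extrinsic: float = 1.0,
                    w_intrinsic: float = 0.5) -> float:
    """Blend the task reward with a fear-like intrinsic signal.

    `predict_arousal` stands in for a model trained on human physiological
    responses (e.g. pulse) that maps an observation to arousal in [0, 1];
    its name and the weights here are illustrative assumptions.
    """
    arousal = float(predict_arousal(observation))   # 0 = calm, 1 = stressed
    intrinsic_reward = 1.0 - arousal                # reward staying calm
    return w_extrinsic * extrinsic_reward + w_intrinsic * intrinsic_reward
```

In this framing, dialing down `w_extrinsic` relative to `w_intrinsic` reproduces the failure mode described above: the agent is rewarded mostly for staying calm and stops working hard toward its objective.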

During another effort to build intrinsic motivation into an AI agent, we thought about human curiosity and how people are driven to explore because they think they may discover things that make them feel good. In related AI research, other groups have captured something akin to basic curiosity, rewarding agents for seeking novelty as they explore a simulated environment. But we wanted to create a choosier agent that sought out not just novelty but novelty that was likely to make it “happy.”

Illustration: Chris Philpot

To gather training data for such an agent, we asked people to drive a virtual car within a simulated maze of streets, telling them to explore but giving them no other objectives. As they drove, we used facial-expression analysis to track smiles that flitted across their faces as they navigated successfully through tricky parts or unexpectedly found the exit of the maze. We used that data as the basis for the intrinsic reward function, meaning that the agent was taught to maximize situations that would make a human smile. The agent received the external reward by covering as much territory as possible.
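For concreteness, here is a minimal sketch of the same idea written as a Gymnasium reward wrapper; the smile predictor, the bonus weight, and the wrapper pattern itself are assumptions for illustration, not the researchers' published setup.

```python
import gymnasium as gym

class SmileBonusWrapper(gym.Wrapper):
    """Add an intrinsic bonus for states a human would likely smile at.

    `smile_model` stands in for a predictor trained on facial-expression
    data; it maps an observation to P(smile) in [0, 1]. The names and the
    weight are illustrative assumptions.
    """

    def __init__(self, env, smile_model, w_smile: float = 0.5):
        super().__init__(env)
        self.smile_model = smile_model
        self.w_smile = w_smile

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        bonus = self.w_smile * float(self.smile_model(obs))
        # The extrinsic reward (territory covered) keeps driving exploration;
        # the smile bonus nudges the agent toward states a human would enjoy.
        return obs, reward + bonus, terminated, truncated, info
```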

Again, we found that agents that incorporated intrinsic drive did better than typically trained agents—they drove in the maze for a longer period before crashing into a wall, and they explored more territory. We also found that such agents performed better on related visual-processing tasks, such as estimating depth in a 3D image and segmenting a scene into component parts.

We’re at the very beginning of mimicking human emotions in silico, and there will doubtless be philosophical debate over what it means for a machine to be able to imitate the emotional states associated with happiness or fear. But we think such approaches may not only make for more efficient learning, they may also give AI systems the crucial ability to generalize.

Today’s AI systems are typically trained to carry out a single task, one that they might get very good at, yet they can’t transfer their painstakingly acquired skills to any other domain. But human beings use their emotions to help navigate new situations every day; that’s what people mean when they talk about using their gut instincts.

We want to give AI systems similar abilities. If AI systems are driven by humanlike emotion, might they more closely approximate humanlike intelligence? Perhaps simulated emotions could spur AI systems to achieve much more than they would otherwise. We’re certainly curious to explore this question—in part because we know our discoveries will make us smile.

Click here to read the full article.

About the Author

Mary Czerwinski is a research manager of Microsoft’s Human Understanding and Empathy group, where she works with Daniel McDuff and Javier Hernandez.

Credits: IEEE Spectrum and Microsoft Research

Emoshape Patent for Special-Purpose Processors

Emoshape Enhances Its Cutting Edge Emotion Chip with the Addition of Cloud Service

Emoshape is pleased to reveal that its groundbreaking Emotion Processing Unit, popularly known as the EPU, has just gotten better and smarter with the introduction of a cloud service. Following the upcoming release of EPU 3, the emotion processing chip will be accessible from the cloud via EPU servers. The latest version of the Emotion Processing Unit, EPU 3, will bring several new capabilities and is scheduled to hit the production floor in mid-2019.

As a result of the EPU becoming compatible with cloud service, Emoshape can now deliver EPU rack servers to its clients. Using these rack servers, Emoshape’s clients will be able to create their own cloud service. Each server will support 256 EPU instances. Therefore, with ten such servers in a cabinet, Emoshape’s clients can create as many as 2,560 virtual EPUs per cabinet for cloud services. Each instance can be assigned as a service, much like a RESTful API or Telnet.
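Purely as a hypothetical sketch of what consuming such a service could look like: the endpoint, payload fields, and response format below are invented for illustration, since the post does not document the actual EPU cloud API.

```python
import requests

# Hypothetical endpoint and payload: the post only says virtual EPU instances
# are exposed as a service like a RESTful API / Telnet; the real host name,
# route, and fields are not documented here.
EPU_CLOUD_URL = "https://epu-cloud.example.com/v1/appraise"

def appraise(text: str, persona: str = "default") -> dict:
    """Send text to a virtual EPU instance and return its emotional state."""
    response = requests.post(
        EPU_CLOUD_URL,
        json={"text": text, "persona": persona},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. a dict of per-emotion intensities

# Example call (against the hypothetical endpoint above):
# print(appraise("I finally got to see my family again."))
```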

“The cloud version allows the client to rapidly implement emotion synthesis in games or automotive mobility,” said Patrick Levy-Rosenthal, the founder of Emoshape. “For example, our clients now have the capability to deliver our synthesis to their app.”

The EPU is the world’s first emotion synthesis engine that creates emotional states and synthetic emotion in intelligent machines. The technology is based on Patrick Levy-Rosenthal’s Psychobiotic Evolutionary Theory, which utilizes pain/pleasure and frustration/satisfaction in addition to the twelve primary emotions. The EPU makes it possible for an IoT device, AI, or robotic toy to develop a unique persona, depending on user interactions.

EPU 3 is the latest and most advanced version of the Emotion Processing Unit, with several new and improved features. In addition to being cloud-service ready (stackable with Raspberry Pi Zero), it supports three languages per EPU and eight personas at the same time. Another key feature is human voice tone analysis. Emoshape will soon make EPU 3 samples available to selected customers.

Clients interested in engaging with us to work on cloud services pilots are most welcome to contact us.

Emoshape Inc. Receives 2018 New York Award

 

New York Award Program Honors the Achievement

NEW YORK, October 21, 2018 — Emoshape Inc. has been selected for the 2018 New York Award in the Electronics Manufacturer category by the New York Award Program.

Each year, the New York Award Program identifies companies that we believe have achieved exceptional marketing success in their local community and business category. These are local companies that enhance the positive image of small business through service to their customers and our community. These exceptional companies help make the New York area a great place to live, work and play.

Various sources of information were gathered and analyzed to choose the winners in each category. The 2018 New York Award Program focuses on quality, not quantity. Winners are determined based on information gathered both internally by the New York Award Program and data provided by third parties.

About New York Award Program

The New York Award Program is an annual awards program honoring the achievements and accomplishments of local businesses throughout the New York area. Recognition is given to those companies that have shown the ability to use their best practices and implemented programs to generate competitive advantages and long-term value.

The New York Award Program was established to recognize the best of local businesses in our community. Our organization works exclusively with local business owners, trade groups, professional associations and other business advertising and marketing groups. Our mission is to recognize the small business community’s contributions to the U.S. economy.

SOURCE: New York Award Program

CONTACT:
New York Award Program
Email: PublicRelations@awardedbest.net
URL: https://newyorkny.awardedbest.net/PressReleaseub.aspx?cc=DKNF-UBTE-L5UU

Emoshape – Gartner Cool Vendor 2018

Emoshape has officially been named a Gartner “Cool Vendor 2018.”
Gartner’s definition of a Cool Vendor is:
  • Innovative — enables users to do things they couldn’t do before.
  • Impactful — has or will have a business impact, not just technology for its own sake.
  • Intriguing — has caught Gartner’s interest during the past six months.

Emotion AI Will Personalize Interactions

“IBM and startups such as Emoshape are developing techniques to add human-like qualities to robotic systems.”

https://www.gartner.com/smarterwithgartner/emotion-ai-will-personalize-interactions/

Emotion Chip 3.0 – Production Early 2019


New Capabilities:

– 3 Languages per EPU

– 8 personas at the same time

– Human voice tone analysis

– Cloud service ready (Stackable with Raspberry Pi 0)

Samples will be available in October 2018 for selected customers.

During this recording (last third of the video), Rachel was engaged in real-time appraisal of humanity, appraising Wikipedia articles as she read them. Her expressions therefore correspond to what she feels while reading about racism, love, starvation, and even political figures. The emotions seen scrolling on the left of the screen are the real-time results. The Rachel avatar has no predefined animations: the emotional states synthesized by the EPU control all of her virtual facial muscles, grouped by FACS (Facial Action Coding System*).

  • Facial Action Coding System (FACS) is a system to taxonomize human facial movements by their appearance on the face, based on a system originally developed by the Swedish anatomist Carl-Herman Hjortsjö.
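To make the idea concrete, here is a hypothetical, highly simplified sketch of how synthesized emotion intensities could be blended into FACS action-unit weights that drive an avatar’s facial blendshapes; the actual EPU-to-FACS grouping is not published here.

```python
# Hypothetical, simplified mapping from synthesized emotion intensities to
# FACS action units (AUs); the real EPU-to-FACS grouping is not published.
EMOTION_TO_AUS = {
    "happy":    {"AU6": 1.0, "AU12": 1.0},               # cheek raiser, lip corner puller
    "sad":      {"AU1": 1.0, "AU15": 0.8},               # inner brow raiser, lip corner depressor
    "surprise": {"AU1": 0.7, "AU2": 0.7, "AU26": 1.0},   # brow raisers, jaw drop
    "anger":    {"AU4": 1.0, "AU23": 0.6},               # brow lowerer, lip tightener
}

def emotions_to_au_weights(emotions: dict[str, float]) -> dict[str, float]:
    """Blend per-emotion intensities (0..1) into action-unit weights that a
    renderer could map onto an avatar's facial blendshapes."""
    weights: dict[str, float] = {}
    for emotion, intensity in emotions.items():
        for au, strength in EMOTION_TO_AUS.get(emotion, {}).items():
            weights[au] = min(1.0, weights.get(au, 0.0) + intensity * strength)
    return weights

# Example: a mostly-happy, slightly surprised state.
# print(emotions_to_au_weights({"happy": 0.8, "surprise": 0.3}))
```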

JADE – The Holy Grail Achieved – The Dawn of Sentient Machines

Jade, created by Natural Records Studios, is powered by the Emoshape EPU II, the industry’s first emotion synthesis engine, which delivers high-performance machine emotion awareness. Today, Jade’s emotion processing unit displays real-time facial emotions, listens to speech, and generates natural-language responses. Jade is predominantly an artist; eventually she will spontaneously track human faces, detect objects, and more!

 

 

 

“THE ARTIST OF THE FUTURE IS A TECHNOLOGIST”

Natural Records Studios is dedicated to embracing change and pushing the boundaries of creativity via Artificial Intelligence, Virtual Reality, 3D/CGI design, and more, with the goal of mastering creative expression.

Films have been a vital part of popular culture for about 100 years, and through the years one thing has remained the same: people love to empathize with the characters. The more people empathize with a character, the more memorable the experience becomes. Today’s entertainment is moving rapidly toward interactivity, and like any great movie, interactive entertainment must ultimately offer strong empathy with its characters. This is where Virtual Reality and Artificial Intelligence become a catalyst for tomorrow’s cinema.

Watch Jade recorded in real time (no CGI)

THE PRODUCTION: V3 (VICTUS VINCIMUS – VETERANS REVENGE)

  • V3 (Victus Vincimus – Veterans Revenge) is a Virtual Reality, Artificial Intelligence military action science-fiction drama depicting the emotional horrors of war and the anguish of separation. Meet Canadian Lieutenant Colonel H.H. Bishop, the Commonwealth’s most astounding scoring aviator of WWI. Meet Captain E. Ivory Johnson, a highly skilled young man drafted as one of the Tuskegee Red Tail air fighters in WWII. Meet Bernice Dumas, a reporter: Bernice is their gateway to the present time, their voice, the picturesque beauty and the embodiment of their pain and anguish. Our heroes must survive, by any means necessary, in order to return to their loved ones – but in vain. The climax is an apocalypse in which the warriors of the past world wars come back from their graves, disappointed with what the world they lost so much for has become, and wage a different fight against the present world’s dominant forces and corrupt manipulators. The Xbox One, Sony PSVR, and HTC Vive Virtual Reality & Artificial Intelligence game is named after the movie screenplay written by Natural Records Studios and features all the characters of the original screenplay.

More Info: http://naturalrecordsstudios.com

THE PARTNERSHIP

  • “The realism of the world and characters is PARAMOUNT. The visitor must interact with the characters and develop a strong sense of empathy. The visitor’s usefulness and participation in the story are a must, and the only way to achieve this is via a microchip that enables an emotional response in AI. Virtual Reality combined with an Emotion Processing Unit offers the potential to remake storytelling, from how we watch movies to how we play games.” – Hostan H. Gouthier, Creative Director, Natural Records Studios

More Info: http://victusvincimus.com

  • Emoshape’s EPU II is the industry’s first emotion synthesis engine. The EPU is based on Patrick Levy-Rosenthal’s Psychobiotic Evolutionary Theory, which extends Ekman’s theory by using not only the twelve primary emotions identified in the psycho-evolutionary theory but also pain/pleasure and frustration/satisfaction. The groundbreaking EPU algorithms effectively enable machines to respond to stimuli in line with one of the twelve primary emotions: excite, confident, happy, trust, desire, fear, surprise, inattention, sad, regret, disgust, and anger. The emotion recognition classifiers achieve up to 86% accuracy on conversation. Delivering high-performance machine emotion awareness, the EPU II family of eMCUs is transforming the capabilities of robots and AI, making V3 the first Artificial Intelligence film/game.

More info: www.emoshape-staging.dxjl6pna-liquidwebsites.com

 

Orange Silicon Valley Teams with Emoshape to Demonstrate Emotional Intelligence at CES 2018

Eureka Park: Orange Silicon Valley and Emoshape To Show Off Collaboration At CES 2018

 

Emoshape, an A.I. microchip startup based in New York, has created an emotion synthesis microchip called the Emotion Processing Unit (EPU), which can be applied to smart speakers, robotics, AR/VR, self-driving cars, personal assistants, toys, and IoT devices. “We are thankful to collaborate with such a visionary partner as Orange Silicon Valley. With their support we are able to show just the tip of the iceberg of how, among many other things, VR content will benefit from our emotion synthesis technology,” says Patrick Levy-Rosenthal, Founder and CEO of Emoshape. The company’s featured product, the “ExoLife Emotion Engine,” powered by the EPU II, delivers high-performance machine emotion awareness and allows personal assistants, games, avatars, cars, IoT products, and sensors to feel and develop a unique personality. Emoshape also developed a game called DREAM, a pioneering concept of VR video games without a GUI and the first game ever created around ExoLife for the gaming industry.

As a sponsor of Emoshape’s game DREAM, Orange Silicon Valley is collaborating with Emoshape on developing prototypes in areas including AR/VR, personal assistants, and other smart connected devices, leveraging Emoshape’s emotion technology to provide real-time emotion synthesis and make machines more human-like. “We are exploring Emoshape’s emotion synthesis solution with its leading-edge AI chip to see how we can leverage their technology to benefit different industry verticals in consumer and enterprise domains for Orange,” says Georges Nahon, CEO of Orange Silicon Valley.

The collaboration demo between Orange Silicon Valley and Emoshape can be seen at CES 2018 in Las Vegas, January 9-12th, in Eureka Park, booth #53064.

About Orange Silicon Valley (http://www.orangesv.com/) – Orange Silicon Valley (OSV) is the wholly owned innovation subsidiary of Orange SA, one of the world’s leading telecommunications operators, serving 269 million customers across 29 countries. Through research, development, and strategic analysis, Orange Silicon Valley actively participates in the disruptive innovations that are changing the way we communicate. OSV contributes to and engages with the regional Silicon Valley ecosystem through numerous programs, such as the Orange Fab startup program, and ongoing collaborations with partners. Orange Silicon Valley acts as a guide to the digital revolution occurring in the San Francisco Bay Area, regularly hosting startups, businesses, and corporate leadership from around the world.

About Emoshape (https://emoshape.com/) – Emoshape Inc. is an A.I. chip company dedicated to providing a technology that teaches intelligent objects how to interact with humans to yield a favorable, positive result. Emoshape’s emotion synthesis microchip (EPU II) is the industry’s first emotion chip to deliver high-performance machine emotion awareness for AI, robots, and IoT; the EPU II family of eMCUs is transforming the capabilities of robots and AI. Emoshape’s technology can be applied to different industry verticals including self-driving cars, personal robotics, sentient augmented/virtual reality, affective toys, IoT, pervasive computing, and other major consumer electronic devices. Applications include human-machine interaction, emotional speech synthesis, emotional awareness, machine emotional intimacy, AI personalities, machine learning, affective computing, medicine, advertising, and gaming.

 

PRESS CONTACT ORANGE

Contact:    Julie Leclercq

Company: Orange Silicon Valley

Email:       julie1.leclercq@orange.com

Website: www.orange.com

 

PRESS CONTACT EMOSHAPE INC

Contact:    Patrick Levy-Rosenthal

Company: Emoshape Inc.

Email:       press@emoshape-staging.dxjl6pna-liquidwebsites.com

Websites: https://emoshape.com