Emotion AI is Changing the Definition of Conversational AI: What are the Risks?



AI is now capable of reading people's emotions. What risks does that bring?

The steady advancement of AI is thrilling and chilling at the same time. While much of the industry celebrates the technology's growth propelling it to new heights, some experts worry about the harms it could soon cause, or may already have caused.

AI algorithms are growing more human in the realm of emotions. Yes! AI is now capable of reading people's emotions. Advances in computer vision and facial recognition have led researchers to actively develop algorithms that can determine the emotions and intent of humans, along with making other inferences.


What is Emotion AI?

As noted by the MIT Sloan School of Management, Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It's also known as affective computing or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published "Affective Computing."

Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for much more natural interaction between humans and machines. "Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly," Hernandez said. "How can a machine effectively communicate information if it doesn't know your emotional state, if it doesn't know how you're feeling, if it doesn't know how you're going to respond to specific content?"

As explained by MIT Sloan professor Erik Brynjolfsson, “while humans might currently have the upper hand on reading emotions, machines are gaining ground using their strengths. Machines are very good at analyzing large amounts of data – they can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. Machines can analyze images and pick up subtleties in micro-expressions on humans’ faces that might happen even too fast for a person to recognize.”
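The idea Brynjolfsson describes, correlating measurable signals with emotional states, can be illustrated with a minimal sketch. The code below trains a simple classifier on two hypothetical acoustic features (mean pitch and pitch variability); the feature values are synthetic stand-ins chosen for illustration, not real speech data, and real emotion-recognition systems use far richer features and models.

```python
# Minimal sketch: classifying "stressed" vs. "calm" speech from two
# hypothetical acoustic features (mean pitch in Hz, pitch variability).
# All values are synthetic stand-ins, not measurements from recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: assume stressed speech tends toward higher
# pitch and greater variability than calm speech (a simplified cue).
calm = rng.normal(loc=[120.0, 10.0], scale=[10.0, 2.0], size=(200, 2))
stressed = rng.normal(loc=[180.0, 25.0], scale=[15.0, 4.0], size=(200, 2))

X = np.vstack([calm, stressed])
y = np.array([0] * 200 + [1] * 200)  # 0 = calm, 1 = stressed

clf = LogisticRegression().fit(X, y)

# Score a new utterance's features (pitch ~175 Hz, variability ~22):
# these values sit well inside the synthetic "stressed" cluster.
print(clf.predict([[175.0, 22.0]]))
```

The point of the sketch is only that, given enough labeled examples, a statistical model can pick up correlations a human might miss; it says nothing about whether those correlations reflect a person's actual inner state, which is exactly the accuracy concern raised below.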


What are the Horrifying Concerns of Affective AI?

The annual report released by the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, examines the flaws in AI-based emotion detection and calls for a ban on technology designed to recognize people's emotions in certain cases. According to the researchers, the technology should not be used in decisions that "impact people's lives and access to opportunities," such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions.