How AI Can Identify Emotions

Artificial Intelligence, put simply, is software that allows a computer to act like a human being intellectually and emotionally. AI is all around us: from Apple’s Siri to Google’s predictive searches, artificial intelligence is already part of our daily lives. However, we have only scratched the surface of its potential uses. Artificial Intelligence is becoming some of the most valued technology because of the many different ways it can be applied, and the growing interest in how AI can detect emotions is leading the way to major breakthroughs in industries including security, automobiles, and retail.

Speech Recognition

From Apple’s Siri to Amazon’s Alexa, speech and voice recognition is just the beginning of what these AI assistants can do. How we speak (tone, volume, pace, and so on) is key to determining the speaker’s emotion. These assistants pair speech recognition, which analyzes what you say and how you say it, with “text-to-speech” software, which essentially stitches together words and phrases from pre-recorded files of a particular voice in order to reply. Siri and Alexa know a lot more about you than just the fact that you don’t like their jokes. Alexa can recognize the tone in your voice and determine whether you are happy, sad, or frustrated. Amazon gives an example: if you tell Alexa you are hungry and it hears a sniffle in your voice, it knows to offer soup recipes. It would also know to target ads related to food and sickness, for example suggesting Panera’s soups. AI speech recognition is creating a new space for targeted advertising, and it is just one way emotions are detected. To get a better sense of its power, see Lyrebird’s artificial recreation of Barack Obama’s voice.
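To make the tone-reading idea concrete, here is a minimal sketch of how emotion classification from voice can work; it is not Amazon’s actual pipeline. The idea is to extract simple acoustic features (MFCCs and loudness) from labeled voice clips and train an off-the-shelf classifier on them. The file names and the happy/sad/frustrated labels below are made-up placeholders.

```python
# Minimal sketch of tone-based emotion classification (not Alexa's real system).
# Assumes a handful of hypothetical labeled .wav clips exist on disk.
import numpy as np
import librosa
from sklearn.svm import SVC

def tone_features(path):
    """Summarize a clip's tone as its average MFCCs plus average loudness."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).mean(axis=1)
    loudness = librosa.feature.rms(y=audio).mean()
    return np.append(mfcc, loudness)

# Hypothetical training clips labeled with the emotion heard in each one.
clips = [("clip_happy_01.wav", "happy"),
         ("clip_sad_01.wav", "sad"),
         ("clip_frustrated_01.wav", "frustrated")]

X = np.array([tone_features(path) for path, _ in clips])
y = [label for _, label in clips]

model = SVC(kernel="rbf").fit(X, y)                     # learn tone -> emotion
print(model.predict([tone_features("new_clip.wav")]))   # e.g. ['sad']
```

A production system would use far more data and a deeper model, but the basic recipe, features pulled from the voice plus a trained classifier, is the same.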

Facial Expressions

Another way Artificial Intelligence can determine emotions is through facial recognition. The simplest example: when someone smiles, they are probably happy. Deep learning is used to improve the accuracy of facial recognition. Xi Zhang and Xiaolin Wu conducted a study to see whether they could build facial recognition software that could determine if someone was a criminal or not. They fed their software images of criminals and non-criminals and then gave it the correct answer for each one. Soon enough, Zhang and Wu’s software could determine whether a person was a criminal, based solely on facial recognition, with roughly 90% accuracy. This criminal-detecting software has the potential to help improve security throughout the world.
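For a sense of what “feeding the software images and then the correct answer” looks like in code, here is a rough supervised-learning sketch. It is not Zhang and Wu’s actual model; it is just a small convolutional network in Keras that learns a binary label from folders of face images. The folder layout, image size, and number of epochs are assumptions for illustration.

```python
# Rough sketch of supervised training on labeled face images (illustrative only).
# Assumes images are sorted into two hypothetical folders: faces/class_a, faces/class_b.
import tensorflow as tf

train_data = tf.keras.utils.image_dataset_from_directory(
    "faces/", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of class_b
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_data, epochs=5)  # show images, then the "correct answer", repeatedly
```

Whatever labels go into those folders is exactly what the network learns to reproduce, which is why the bias concern in the next paragraph matters so much.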

Facial recognition can be used in many helpful ways, like unlocking your iPhone, but it also raises ethical concerns. One of them is bias: if the facial recognition software is only trained on images of people of a certain ethnicity or gender, then the software is going to be biased. Imagine finding out that you have the facial attributes of a criminal.


Affectiva – Automotive AI

Affectiva, an emotion measurement technology company, uses behaviors, cognitive states, and interactions to help determine a person’s emotion. Its latest product, Automotive AI, identifies the emotional and cognitive state of the driver and passengers to improve the transportation experience and to help improve road safety. Automotive AI is designed for semi-autonomous vehicles and, eventually, fully autonomous vehicles. One of Affectiva’s goals for Automotive AI is to improve road safety: for example, if the driver appears distracted, the software can signal the vehicle to take appropriate action, which would help build trust between the driver and the car. Affectiva’s Automotive AI is also designed to detect the driver’s and passengers’ moods. Once a mood is identified, the software can recommend music to play or adjust the dimness and color of the lights in the car. Affectiva has created potentially life-saving technology that could redefine the way we drive.
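Affectiva’s actual SDK isn’t shown here, so the snippet below is only a toy illustration of the decision rule described above: assume some driver-monitoring model already outputs per-frame scores for distraction, drowsiness, and mood, and decide what the cabin should do in response. The threshold values, score names, and actions are invented for the example.

```python
# Toy illustration of an in-cabin response rule (not Affectiva's real SDK).
# Assumes an upstream model provides distraction/drowsiness scores in [0, 1].
DISTRACTION_THRESHOLD = 0.8
DROWSINESS_THRESHOLD = 0.7

def cabin_response(scores):
    """Pick an action for the car based on hypothetical driver-state scores."""
    if scores["distraction"] > DISTRACTION_THRESHOLD:
        return "alert driver and hand control to the semi-autonomous system"
    if scores["drowsiness"] > DROWSINESS_THRESHOLD:
        return "suggest a break and play upbeat music"
    if scores["mood"] == "stressed":
        return "dim the cabin lights and lower the music volume"
    return "no action"

# One hypothetical frame of model output.
print(cabin_response({"distraction": 0.9, "drowsiness": 0.2, "mood": "calm"}))
```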

There are many concerns around the idea of a robot detecting emotions. Emotions are a very personal affair, and some people might not like having their emotions tracked and stored. Some would argue that emotions should be kept private and never stored in a database.


Future

The possibilities for artificial intelligence are endless, especially when it comes to identifying emotions. AI could potentially identify the symptoms of an anxiety attack and contact the appropriate people, or play specific music to calm the person down. As “Barack Obama” mentions in the video, Lyrebird’s AI technology can help give a voice back to people who have lost theirs. Artificial intelligence can also be used in the retail industry (with the help of CCTV cameras) to detect a customer’s body language and potentially identify the customer’s intent or concerns. Did person A keep coming back to look at a product? Did they buy it? Were they looking at other items to compare? The questions are endless, and AI can help us answer some of them.

Artificial Intelligence emotion classification can also help make our world a safer place. If an automobile has Affectiva’s Automotive AI installed and it detects that a passenger is uncomfortable and nervous, it could potentially save that person’s life by contacting the appropriate authorities. Affectiva also plans to develop software that can determine whether a driver is under the influence of alcohol or drugs. If the system can identify that, it could prevent the car from being driven and potentially save lives.


8 thoughts on “How AI Can Identify Emotions”

  1. Nice post! Your post made me wonder if the opposite situation may also happen, where AI uses the facial mapping feature to generate “deep fakes” – videos of people saying and doing things they never did. @mattallen did post a Wired article about a theory that the recent “10 year challenge” was really a way to train AI to learn aging patterns.


  2. Great post! This is such an interesting topic because the idea that technology has the ability to track human emotions and senses is kind of crazy. It’ll be intriguing to see what products emerge, how this technology is incorporated into AI and everyday society in the future, and whether it has a positive or negative effect on the community, or whether people consider it too invasive.


  3. Hey Jess, awesome post! It is so interesting how advanced AI is getting! The idea of AI being able to potentially save lives by intervening with unsafe drivers is something I would never have imagined possible, but it would benefit our society in a huge way. Also, as someone who owns an Amazon Alexa and uses Siri frequently, I never knew that they were able to pick up on my tone and collect information to then target ads, recipes, etc. Looking forward, I wonder what else AI will be able to detect just from someone’s facial expression or tone.


  4. Love this!
    We’ve discussed where the line of “private” ends, but we have not considered emotions yet! One would think our emotions in our homes and cars would be private. Still, if my car told me to take a break because I was too tired on a road trip, I can’t see how that could be an injustice! Thinking long term, do you think there could be potential for this technology to replace breathalyzers in the vehicles of those who are convicted of DUIs?


  5. This is a great post! I think especially with the car example that would be very effective and would save many lives as well. Even though it is a little weird to think that Alexa knows when I am in a sad mood or can detect if I am sick, that still would be really helpful for her to have some recommendations for me to feel better!!


  6. Jessica, this is a great post encompassing all sorts of relevant information to our world. The automotive AI is especially interesting to me–there was a point last year when I was on my seventh hour in the car and my eyes were drooping as I drove home. I realized that I was slowly getting more and more sleepy and had to quickly change the music I was listening to in order to keep myself awake. Having technology like Affectiva’s to recognize when people may need some external stimulation in order to improve their actions on the road could save lives. However, it seems to dance along the “creepy/cool” line that Professor Kane likes to mention–there may be a point at which people are fed up with technology’s involvement in our privacy.


  7. Great post! Your post really opened my eyes to more directions AI can take us. Like other commenters, I am also concerned about the privacy issues that advanced AI technologies can cause. This concern was also raised in the book I read over break (“Machine, Platform, Crowd”). The authors brought up the possibility that future speech recognition and video synthesis technology could easily create a video of some politician or celebrity saying outrageous things, which could cause a societal uproar considering the power fake news has.


  8. Cool post. Interesting to see how Lyrebird will help those who lost their voice to speak to friends and loved ones again. On the topic of investment, this company would fit the criteria for an angel investor who may have an emotional connection to helping the unfortunate in society. However, depending on the size of the future industry, VCs may also want to invest. Relating back to the topic of fake news, it’ll be interesting to see how people interpret more delicately crafted false news in the future.

