Emotion Recognition Algorithms (ERAs)

From SI410

Emotion recognition algorithms (ERAs), or emotion artificial intelligence (EAI), are technologies that recognize, infer, and harvest "emotions using data sources such as social media behavior, streaming service use, voice, facial expressions, and biometrics" [1]. ERAs are artificial intelligence technologies used in many situations, such as employment [2]; they can be useful in identifying information relevant to different contexts, but they may also feel invasive to individuals who are uncomfortable being closely observed [1]. The history of emotion recognition draws on literature about the importance of emotions, such as Charles Darwin's The Expression of the Emotions in Man and Animals [3], and on the work of Paul Ekman [4] and Wallace V. Friesen [5], who identified distinct but salient human emotions that became widely used throughout emotion recognition technologies. As technology has advanced over time, ERAs have been implemented into everyday life, including social media. Discussions have emerged on the benefits and harms of emotion recognition technologies and the implications they can have for technology users.


History

Understanding human emotions is foundational to emotion recognition technologies. Charles Darwin described the importance of human emotion in his 1872 book The Expression of the Emotions in Man and Animals [3]. Emotion recognition's roots can also be traced to Paul Ekman [4] and Wallace V. Friesen [6], who in 1975 identified what they considered "basic" and "universal" emotions: anger, disgust, fear, joy, sadness, and surprise [7]. As technology has advanced, emotion recognition technologies such as emotion artificial intelligence algorithms have emerged, building on much of Ekman's work, and they continue to be iteratively redesigned to improve how they function [8]. Major companies now use emotion recognition technologies to make decisions and curate content [9].
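Ekman and Friesen's six basic emotions still serve as the output label set of many emotion recognition systems. The sketch below is a minimal, hypothetical illustration of that role: an upstream model (facial-expression or text classifier, not shown) would produce per-emotion confidence scores, and the system maps them to a single label. The function name and scores are invented for illustration.

```python
# The six "basic" emotions identified by Ekman and Friesen (1975),
# commonly used as the label set of emotion recognition systems.
BASIC_EMOTIONS = ("anger", "disgust", "fear", "joy", "sadness", "surprise")

def dominant_emotion(scores):
    """Map per-emotion confidence scores to the top basic-emotion label.

    `scores` is a dict produced by some upstream model; labels outside
    the basic-emotion set are ignored. Returns None if nothing matches.
    """
    valid = {e: s for e, s in scores.items() if e in BASIC_EMOTIONS}
    if not valid:
        return None
    return max(valid, key=valid.get)

# Example with made-up scores:
print(dominant_emotion({"joy": 0.7, "sadness": 0.1, "surprise": 0.2}))  # joy
```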

ERAs on Social Media

Companies such as Google [10] have begun to use patented [11] emotion recognition technologies to analyze user data through emotions and then curate content for users. Companies that gather emotion recognition data from their users control how that data is used, a situation referred to as information asymmetry [12]. Emotion recognition technology can also help determine users' wellbeing; for instance, emotion recognition algorithms on Facebook can identify individuals in deep distress when they post about their emotions and then surface helpful, supportive content to them [13]. Overall, emotion recognition on social media can enhance the user experience by optimizing the content and interactions users are exposed to [14].
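At their simplest, text-based ERAs score a social media post against emotion-word lexicons. The sketch below is a hypothetical keyword approach, far cruder than the constrained-optimization methods cited here [14], but it shows the basic pipeline of mapping user text to an inferred emotion; the lexicon entries are invented for illustration.

```python
# Toy emotion lexicon; real systems use learned models or large
# curated lexicons, not a hand-written list like this.
LEXICON = {
    "joy": {"happy", "glad", "excited", "love"},
    "sadness": {"sad", "lonely", "miserable", "crying"},
    "anger": {"angry", "furious", "hate"},
}

def infer_emotion(post):
    """Return the emotion whose lexicon matches the most words, or None."""
    words = set(post.lower().split())
    counts = {emo: len(words & vocab) for emo, vocab in LEXICON.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None

print(infer_emotion("so happy and excited today"))  # joy
print(infer_emotion("meeting at noon"))             # None
```

A platform could feed the inferred label into its content-curation pipeline, which is the step where the information-asymmetry concern above arises: the user never sees the inference.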

Implications of ERAs

Benefits of ERAs

ERAs can be used in a plethora of ways. Emotion recognition can provide social media users with curated content that enhances their experience, for example by protecting users' wellbeing through tracking statements that indicate negative mental and emotional states [13] and by improving interactions on interfaces through curation and design [14]. Much of the discussion surrounding the benefits of ERAs concerns the detection of mood instability and wellbeing [15]. Detecting negative wellbeing can help users improve their mental and emotional state and, when necessary, connect those in need with help and resources.

Harms of ERAs

Although emotion recognition technologies can help identify potentially negative situations, ERAs carry harms, as do all other technologies. Emotion recognition often requires bias mitigation methods; auditing the outputs of artificial intelligence systems and highlighting disparities in their data can influence companies to make changes [16]. Bias can become integrated into the technology itself by its creators, intentionally or unintentionally, and comes in many forms, including age bias [17], race bias [18], and gender bias [18]. Inaccurate interpretations of an individual's emotions cannot reliably be generalized to larger populations. Additionally, such inaccuracies can harm users when incorrect inferences are used to present incorrect and potentially harmful content.
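The auditing step described here, highlighting disparities, can be made concrete by comparing a model's accuracy across demographic groups, as the age- and race-bias studies cited above do. The sketch below is a hypothetical illustration; the group labels and audit records are invented.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, predicted, actual) tuples.

    Comparing the resulting rates exposes performance disparities
    between demographic groups, a basic bias-auditing signal.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Invented audit data for an ERA that performs worse on one age group.
audit = [
    ("young", "joy", "joy"), ("young", "anger", "anger"),
    ("older", "joy", "sadness"), ("older", "anger", "anger"),
]
rates = accuracy_by_group(audit)
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)  # young 1.0, older 0.5 -> disparity 0.5
```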


References

  1. 1.0 1.1 Andalibi, Nazanin and Buss, Justin. (2020). The Human in Emotion Recognition on Social Media: Attitudes, Outcomes, Risks. CHI 2020.
  2. Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: a review, critique, and design agenda. Human–Computer Interaction, 35(5-6), 545-575.
  3. 3.0 3.1 Darwin, Charles. (2009). The Expression of the Emotions in Man and Animals, Anniversary Edition. (4 ed.). Oxford University Press.
  4. 4.0 4.1 4.2 https://en.wikipedia.org/wiki/Paul_Ekman
  5. https://en.wikipedia.org/wiki/Wallace_V._Friesen
  6. https://en.wikipedia.org/wiki/Wallace_V._Friesen
  7. Ekman, Paul and Friesen, Wallace V. (1975). Unmasking the face: A guide to recognizing emotions from facial cues.
  8. Fasel, Beat and Luettin, Juergen. (January 2003). Automatic facial expression analysis: a survey. Pattern Recognition, 36(1). https://doi.org/10.1016/S0031-3203(02)00052-3
  9. https://patents.google.com/patent/WO2017078960A1/en
  10. Puneet Agrawal. 2017. Emotionally connected responses from a digital assistant. Retrieved June 26, 2019 from https://patents.google.com/patent/WO2017078960A1/en
  11. https://en.wikipedia.org/wiki/Patent
  12. https://en.wikipedia.org/wiki/Information_asymmetry
  13. 13.0 13.1 https://engineering.fb.com/2018/02/21/ml-applications/under-the-hood-suicide-prevention-tools-powered-by-ai/
  14. 14.0 14.1 Wang, Yichen and Pal, Aditya. (2015). Detecting Emotions in Social Media: A Constrained Optimization Approach. In IJCAI.
  15. Koustuv Saha, Larry Chan, Kaya De Barbaro, Gregory D. Abowd, and Munmun De Choudhury. 2017. Inferring Mood Instability on Social Media by Leveraging Ecological Momentary Assessments. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1, 3: 95:1–95:27. https://doi.org/10.1145/3130960
  16. Raji, Inioluwa D. and Buolamwini, Joy. (2019). Actionable Auditing. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (1 2019). https://doi.org/10.1145/3306618.3314244
  17. Kim, E., Bryant, D. A., Srikanth, D., & Howard, A. (2021, July). Age bias in emotion detection: an analysis of facial emotion recognition performance on young, middle-aged, and older adults. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 638-644).
  18. 18.0 18.1 B F Klare, M J Burge, J C Klontz, R W Vorder Bruegge, and A K Jain. 2012. Face Recognition Performance: Role of Demographic Information. IEEE Transactions on Information Forensics and Security 7 (2012), 1789–1801. Issue 6. https://doi.org/10.1109/TIFS.2012.2214212