Tuesday, September 29, 2009

The Rise of Truly Emotional Computing

Brian R Zaik

It’s clear from a look at past research in the space that emotional response to computing is a topic of great interest in the world of HCI. But these articles, together with what I know of the current state of the art, also lead me to conclude that emotional computing is a frontier that has yet to be fully explored.

The first questions to answer are the most basic: do people actually develop emotional attachments to computers and computing devices? Do they assign emotional labels and metaphors to their computing experiences? And do they view computers as social actors with whom they may interact in ways similar to how they interact with human beings? Nass et al. focused on what we might term more traditional computing experiences (text-based), providing strong evidence that human beings begin to assign social and emotional context to computers as soon as the machines exhibit characteristics that may be compared to human ones, such as gender, attitude, disposition, and personality. While today we may take the findings of this 1997 study for granted, I imagine that Nass and his associates made quite an impact on the field of psychology when their research was published. To me, the question now is not, “Can users view computers as social actors with emotions and personalities?,” but rather, “How can we best design computers to harness emotions for the benefit of their users?”

Clark and Brennan showed that grounding is an essential part of human communication. And the process of grounding, as they defined it, involves the coordinated action of all the participants. Being aware of emotions and the emotional context of a communication is also key to grounding, as it allows both parties of a two-way exchange to understand the implications of a particular message (though this is not always likely to work out perfectly). In order for computers to truly interact with us on a deeper level, they must be able to both elicit emotions from their users AND respond to those emotions in kind. The research study covered in Nass et al. examines only the first of these design requirements – as they state, every kind of interaction between the user and the computer was scripted for the study. While I can believe that even text-based interactions can generate social responses from users, I find it difficult to view that kind of interaction on the same level as human-to-human interaction. It’s one thing to generate a one-sided social response from the user and another to engage user and machine in a two-way, grounded exchange. I would argue that the true future of emotional computing lies in the design of computers that can both study human reactions AND dynamically act on the reactions they recognize to improve the effectiveness of the human-machine exchange.

The European Union (EU) recently funded a research program aimed at realizing that kind of future. The Humaine project (1) is founded upon the notion that interfaces haven’t developed enough beyond simple user interface mechanics, despite massive gains in computing power over the years. Humaine is an attempt to expand the modern palette of HCI research beyond those mechanical approaches. To do this, Humaine researchers have started back at the ground level to figure out exactly how human beings should interact with computers, and vice-versa. Humaine’s strength lies in its interdisciplinary nature: psychologists, philosophers, artists, and computer scientists all work together to better understand how emotion can be incorporated into HCI design. And one of the first objectives of the program is to develop computer systems that can recognize human emotions using multiple modalities.
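To make that last objective concrete, here is a minimal sketch of one common approach to multimodal recognition, "late fusion": each modality (say, voice and facial analysis) produces its own estimate of an emotion's intensity, and the estimates are combined with reliability weights. The modality names and weights below are illustrative assumptions on my part, not Humaine's actual method.

```python
# Hypothetical late-fusion sketch: combine per-modality estimates of one
# emotion label into a single score. Names and weights are illustrative.

def fuse_emotion_scores(scores, weights):
    """Weighted average of per-modality scores for one emotion label.

    scores  -- modality name -> estimated intensity in [0, 1]
    weights -- modality name -> reliability weight (need not sum to 1)
    """
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Voice analysis is noisy in a crowded gallery, so weight it less than video.
interest = fuse_emotion_scores(
    scores={"voice": 0.4, "face": 0.8},
    weights={"voice": 0.3, "face": 0.7},
)
# interest == (0.4 * 0.3 + 0.8 * 0.7) / 1.0 == 0.68
```

The appeal of this kind of fusion is exactly what Humaine's interdisciplinary framing suggests: no single channel is trustworthy on its own, but several weak signals together can be.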

These systems have already been tested in museums in Scotland and Israel (2). In these trials, handheld museum guides were issued to visitors; the devices included earpieces and microphones to monitor visitors’ levels of interest in different types of displays and react accordingly. As the Humaine program coordinator, Professor Roddy Cowie, points out, “While this is still at a basic level, it is a big step up from a simple recorded message.” This kind of interaction certainly goes far beyond the simple studies that Nass and his colleagues conducted in 1997.

Perhaps we could build emotion-aware computer systems that operate as closed-loop machines. The computer would be capable of recognizing and analyzing how human users react to the user experience in front of them, and then shift its behavior to better suit the user. Nass et al. showed that the strength of the computing experience could be enhanced by closely matching the user’s personality with a similar computer personality. For example, their data supported the conclusion that users with submissive personalities are generally more attracted to computers they view as submissive. We must realize that the computing experience might need to be finely tuned to the specific personality of the user, and that’s why being able to dynamically adapt the computing experience to the exhibited emotions of the specific user makes so much sense. In the lab, scripting experiences and personalities may suffice, but in order for computers to be truly capable of interacting with human users on an emotional level, the experience must dynamically change based on the emotional detections and compensations of both parties.
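The closed loop described above can be sketched in a few lines. Everything here is an illustrative assumption of mine – the crude frustration/engagement readings, the rule-based adaptation, the names – not a real emotion-recognition system; the point is only the architecture: observe the user's emotional state, then adapt the machine's behavior before responding.

```python
# Hypothetical closed-loop sketch: observe a (pre-recognized) emotional
# state, adapt presentation style, respond. All names and rules are
# illustrative assumptions, not any real system's design.

from dataclasses import dataclass

@dataclass
class EmotionReading:
    """Crude stand-in for the output of multimodal emotion recognition."""
    frustration: float  # 0.0 (calm) to 1.0 (very frustrated)
    engagement: float   # 0.0 (bored) to 1.0 (highly engaged)

class AdaptiveAgent:
    """Shifts its verbosity to suit the user's detected emotional state."""

    def __init__(self):
        self.verbosity = "normal"

    def observe(self, reading):
        # Close the loop: adapt behavior before the next response.
        if reading.frustration > 0.7:
            self.verbosity = "brief"      # frustrated users want less text
        elif reading.engagement > 0.7:
            self.verbosity = "detailed"   # engaged users may welcome depth
        else:
            self.verbosity = "normal"

    def respond(self, message):
        if self.verbosity == "brief":
            return message.split(".")[0] + "."
        if self.verbosity == "detailed":
            return message + " (Would you like more background on this?)"
        return message

agent = AdaptiveAgent()
agent.observe(EmotionReading(frustration=0.9, engagement=0.2))
print(agent.respond("Saving failed. Check that the disk is not full and retry."))
# prints "Saving failed."
```

A scripted study like Nass et al.'s fixes the agent's personality in advance; the point of the loop is that `observe` keeps running, so the style can track the user rather than the script.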

One other consideration of the Humaine project is how computer representations can be designed to elicit emotions from users most naturally. Professor Cowie claims that Humaine researchers have “identified the different types of signal which need to be given by an agent – normally a screen representation of a person – if it is going to react in an emotionally convincing way.” This brings up another question relevant to our in-class discussions: are graphical avatars effective ways of displaying and exhibiting emotions in a two-way exchange between a human and a computer? Is an avatar that resembles a person necessary to interface properly with a human being on an emotional level? Nass et al. concluded that even simple text messages are enough to elicit emotions from human users, yet researchers are still trying to figure out the true benefits of more sophisticated ways of representing computers. All the way back in 1987, Apple painted a clear vision of the Knowledge Navigator, a type of computer assistant that could interface at a level similar to how two human beings may interact. In that video, Apple used a graphical, humanlike avatar for the Navigator’s interactions with human users. Is the computer avatar here to stay?

Emotional computing holds promise in allowing computers to better anticipate and respond to human emotions, which may help us to design breakthrough computer interfaces that can adjust to users. The Humaine program in the European Union may be an important building block for this future. For now, though, we’ll be the only ones who hear all those insults we throw at our computers.


  1. McKie, Robin. "Machine rage is dead ... long live emotional computing." The Observer. Guardian News and Media Limited, 11 Apr. 2004. Web. 28 Sept. 2009.

  2. "ICT Results - Emotional Results." ICT Results. European Commission, 3 Apr. 2008. Web. 28 Sept. 2009.
