Tuesday, September 29, 2009

girls roaring into the online game space

My two sons (aged 5 and 9) play a wide range of video games on their Nintendo DS devices, our Wii, and online at the family PC. Of the online games, several are community-oriented such as Club Penguin and Webkinz. There is a great difference between the action games they play such as the LEGO series (Batman, Star Wars, and Indiana Jones) and this other type. The action games are violent (yes, LEGO games involve a great amount of destruction and “killing” although there is no blood), competitive, fast-paced, and adventure-based. The boys are constantly seeking ways to outscore and outplay one another.

On the other hand, Club Penguin and Webkinz are tame, safe (no one gets hurt), non-competitive, puzzle and relationship-based games. Although there are scores to earn in both types, the points in the community-oriented games amount to a form of currency that players can spend on a variety of items that serve to create atmosphere. For example, in Club Penguin, players can buy an assortment of housing goods and style-related accoutrements to “trick out” their pad or character's appearance.

Although I have not made a gender distinction between these markedly different styles of gaming, our American cultural mores and ways of perceiving gender interests will undoubtedly create the divide for us. Action games are for boys and community-oriented games are for girls (penguins and stuffed animals do not rank high on the testosterone meter). The fact that my sons play these games is not so much a statement of their gender association as it is a testament to the overall design integrity of the games. Simply put, boys didn't play these kinds of games when I was growing up because these kinds of games weren't being designed (for boys or girls).

One person who can be credited with a large share of the popularity of the second type of game is Brenda Laurel, who over the course of several decades has pioneered games for young girls. Laurel is a writer, researcher, and game designer. In the early 1990s, she joined Interval Research to begin looking at the question of how play varies by gender. In an interview for Designing for Interaction (http://www.designingforinteraction.com/laurel.html), she discusses why she didn't start with the question “why don't girls play computer games?” – an answer which corroborates my experience with games growing up: “... the answers to that question (at least at the time we were asking it, in the mid-1990s) were at the same time highly predictable (e.g., the early, rapid vertical integration of the computer game industry around a monolithic male demographic) and not particularly actionable (e.g., girls don't play games because games aren't designed for them or offered in retail spaces where girls go).”

After several years of research involving over a thousand children, plus literature reviews covering thousands more, Laurel and her team began developing heuristics for interface design practices that applied to young girls and gaming. The project, called Safari – as it sought the big game girls would play – sailed against the prevailing winds of the industry. At the time, the $10 billion computer game industry largely ignored girls.

Some of the research involved mixing gender signal traits: “For example we made a pink furry truck, and we learned that pinkness overrides truckness. We did a diary with bullet holes in it, and found that it is still a diary and boys won't use it” (http://www.designinginteractions.com/). Based on the findings of the Safari research, Laurel founded Purple Moon in the mid-90s, a startup company funded by Interval. The company consisted of three interconnected business ventures: “interactive CD-ROMs, the purplemoon.com website, and an array of Purple Moon collectibles.” At one point, over 40,000 young girl gamers occupied the virtual space of Purple Moon. Descriptions of the now-defunct site read like a playbook for Club Penguin and Webkinz-style community-oriented games with an emphasis on participatory narrative play. Like these newer game environments, “There was an internal postcard system, like an internal e-mail system, so that we could protect kids from predatory behavior by adults.” Before Disney bought Club Penguin, the site owners emphasized the safety of their internal game communication system as having similar characteristics.

It is, in my estimation, Laurel's research acumen that helps to set her apart from others who may walk a similar path. She stresses several times in various sources the need for solid research: “I fervently believe in research as a necessity for good design and I teach it that way” (http://blog.ted.com/2009/03/interview_with_2.php). One of the first courses students must take in the Graduate Program in Design Laurel designed (and chairs) at California College of the Arts is Design Research. In her own work, listening to the girl gamers gave her insight into the special world they occupied as players. The girls said they wanted to “make up their own stories about the characters and to make up new characters and possibly put themselves as characters into the stories.” Research into ways girls engage in sports “had a huge influence on both the plots and the UI” for the successful Starfire series.

In 1998, Laurel was invited to speak at the Technology, Entertainment and Design (TED) Conference in Monterey, California. During that talk, she showcased and discussed several features of Purple Moon and the nature of designing games for girls.

At the height of the dot com boom, Mattel purchased Purple Moon. However, in a frustrating turn of events, Mattel killed the venture in 1999. Laurel writes about her experience in the book Utopian Entrepreneur. Even though her bitterness is justified, she still takes pride in seeing the continued cultural artifacts of her failed game empire in other areas: “The 'emotional navigation' interface we developed ... [has been] useful for working with folks with autism in helping them read emotional cues” (http://blog.ted.com/2009/03/interview_with_2.php).

The impact of the work Laurel did at Interval and Purple Moon shouldn't be underestimated. Will Wright's The Sims and Spore stand as exemplars of the participatory narrative frameworks championed by Laurel. Even the ability to customize one's Mii on the Wii game settings is an idea whose roots extend back to the kind of work Laurel was doing even before Purple Moon - she holds an MFA and a PhD in Theater and her dissertation was the “first to propose a comprehensive architecture for computer-based interactive fantasy and fiction” (http://www.tauzero.com/Brenda_Laurel/BrendaBio.html).

The fact that girls occupy such a large space in the world of gaming as compared to the virtual terrain of the 80s and early 90s speaks to the impact her work and research have had on a once boys-only zone. And although she is generally unwilling to take much credit in this regard, she does concede that “interventions like Purple Moon enhanced girls' comfort with computers, which we set out to do, and brought girls roaring into the online game space” (http://blog.ted.com/2009/03/interview_with_2.php).

My computer, my friend

To suggest that people do not have a relationship with their computers would be a mistake. Everyone I know talks to their computer at one time or another. Most of the time, they are irritated and voicing their frustration, but, nonetheless, they are having some sort of “conversation” with their computer. Everyone knows the computer will not answer back (yet), but it cannot be helped. As a student, much of my time is spent working with my computer, so it should not come as a surprise that I would feel something when it doesn’t work properly. My computer is set up just the way I want, enabling me to find things quickly. Another computer may function just the same, but it may not be quite as pleasant an experience.

A similar phenomenon exists with people and their cars. People have been naming their cars for as long as I can remember. In essence, this gives their car an identity and shows that they have some sort of relationship; so why should this emotion toward a computer seem so farfetched? Both the computer and the car are types of technology that we may spend countless hours working on, working with or riding in from place to place. These technologies are vehicles for our daily life, making a lot of what we do possible. Some of these technologies even take on human voices and characteristics, which could be why people give or attribute a human quality to these items.

The research I found interesting is the study that showed people are polite to computers. I suppose if you have attributed a human quality to a device, you would treat it in a similar manner as you would a person. We all know, in our heads, that a computer’s feelings cannot be hurt, but this research shows we may be thinking more emotionally than expected, possibly with our hearts.

Emotions are often at the center of how we go about our day. Therefore, if my computer were to flatter me each time I turned it on, I might start my session in a better mood. The research suggests people respond no differently to flattery from a computer than to flattery from a person. It is interesting to think that an insincere compliment from a computer could make you feel the same way as an insincere compliment from a person. Our reasoning could be that when you receive flattery from a person, there stands a chance it may be genuine. This is obviously not so with a computer, which goes to show that ego is a powerful part of our personality.

People often relate to other people with the same personality characteristics, so why not a computer with the same characteristics? People often say opposites attract, but the opposite (pardon the pun) may be true: people seem to be drawn to others with similar interests, similar styles, and similar ideas. Of course, people can also be so much alike that they repel one another, which is where “opposites attract” comes from. Still, people identify with others they can relate to, and personality characteristics are no exception. It should come as no surprise to me, but it does, that people would be attracted to a computer with a personality similar to their own. Once you have given a computer a human quality and feel emotional about it, it stands to reason you would be drawn to it if its personality characteristics were similar to your own.

One of the other studies pondered whether a computer could be perceived as male or female. Before I began reading the passage, I had no idea the researcher was going to use stereotypes to test for gender. I find it a bit offensive that, in 1997, the author was still attaching and reinforcing stereotypes that have been present for hundreds, if not thousands, of years. Of course these stereotypes still exist (even today), and I do not mean to suggest otherwise. But to create a study that puts them at the forefront of the research is, in my opinion, unimaginative. I suppose if you wanted to test for stereotypes this was the way to do it, but it was not what I was expecting.

I think that we are emotional beings who are protective of the things around us. The more time spent with something, the more protective you may become and the more personal these items may feel. They become a part of your world and without realizing, you have humanized your computer.

One final thought: as I googled “humanized” I found http://humanized.com/. The site has free software called Enso, which turns your Caps Lock key into a command key. You use it by holding down Caps Lock and typing something simple; when you let go, it puts your command into action. I haven’t tried it yet, but it looks interesting.

On “Emotion in Human-Computer Interaction” and Developing Compassion

People are inherently emotional. Emotions, sentiments, and moods affect every aspect of response and interaction in our lives. Indeed, if any interface ignores this, it “risks being perceived as cold, socially inept, untrustworthy, and incompetent.” Part of the truth of this lies in the personification of the computer or software by the user. However, we also have the choice to change our emotions, sentiments, and moods in order to modify our behavior. The key lies in consistent intention and progression.

Brave and Nass reveal the underlying mechanisms of emotion with a figure that demonstrates the connections between the thalamus, the cortex, and the limbic system. The thalamic-limbic pathway is responsible for the primary emotions, while secondary emotions “result from activation of the limbic system by processing in the cortex.” They then go on to cover the debate over whether emotion is innate or learned. Regarding this debate, Brave and Nass describe the middle of the two extremes, where “the limbic system is prewired to recognize the basic categories of emotion, but social learning and higher cortical processes still play a significant role in differentiation.”

One recent article that came to my attention involves the ability to learn compassion using meditative states. The experiment in the article, entitled Regulation of the Neural Circuitry of Emotion by Compassion Meditation: Effects of Meditative Expertise, used functional magnetic resonance imaging (fMRI) to measure differences in brain activity between novice and experienced meditators, the latter having 10,000-plus hours of Buddhist compassion meditation practice. Participants were asked to alternate between actively generating a state of compassion meditation and refraining from the practice.

In this case:
The meditative practice studied here involves the generation of a state in which an “unconditional feeling of loving-kindness and compassion pervades the whole mind as a way of being, with no other consideration, or discursive thoughts” ... According to the tradition, as a result of this practice, feelings and actions for the benefit of others arise more readily when relevant situations arise. Our main hypothesis was thus that the concern for others cultivated during this meditation would enhance the affective responses to emotional human vocalizations, in particular to negative ones, and that this affective response would be modulated by the degree of meditation training.

They analyzed the areas of the brain associated with empathy including the insula cortex and the somatosensory cortex. The data support their main hypothesis, namely that “the brain regions underlying emotions and feelings are modulated in response to emotional sounds as a function of the state of compassion, the valence of the emotional sounds and the degree of expertise.”

One interesting way to see these results is to acknowledge that the meditation itself, over time, changes the neurology of the practitioner in such a way that empathy and compassion become more automatic and spontaneous. In the section on Effects of Affect: Attention, Brave and Nass state, “people also often consciously regulate mood, selecting and attending to stimuli that sustain desired moods or, alternatively, counteract undesired moods.” Compassion meditation would then put people in a more sustained compassionate mood. I say mood because it seems to fit the definition better than emotion: “Moods...are nonintentional; they are not directed at any object in particular and are thus experienced as more diffuse, global, and general.” This would seem to be the case with long-term practitioners of compassion meditation. It becomes a “way of being.”

Brave and Nass support this by saying, “Intense or repetitive emotional experiences tend to prolong themselves into moods.” The Lutz, Brefczynski-Lewis, Johnstone, and Davidson study used fMRI results to measure affect, but Brave and Nass also suggest other methods for doing so, including electroencephalogram (EEG) to test neurological responses, autonomic activity, facial expression, voice, self-report measures, and affect recognition by users.

Depending on the conditions of the study, these methods for testing affect differ in efficacy. The Compassion Meditation study did not use behavioral analysis during the testing because the meditators reported that this would interfere with the meditation process itself, so the researchers had to rely on the fMRI measurements and a certain amount of self-reporting. Self-report measures, in particular, suffer from a problem of temporal relevance. Brave and Nass point out that “questionnaires are capable of measuring only the conscious experience of emotion and mood. Much of affective processing, however, resides in the limbic system and in nonconscious processes.”

Dimensional theories using arousal and valence were also applied in the Compassion Meditation study, with correlating results. This supports the idea that in further research, wherever self-reporting is necessary, compassion should be tested as a mood emerging from a two-dimensional space of “conscious emotional experience.”
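To make the dimensional idea concrete, here is a minimal, purely illustrative sketch of the circumplex representation described by Feldman Barrett and Russell: any conscious affective state is treated as a point in a two-dimensional plane of valence and arousal. The quadrant labels and the example coordinates are my own assumptions, not values from either study.

```python
from math import hypot

def quadrant(valence: float, arousal: float) -> str:
    """Label the quadrant of the valence-arousal plane.
    valence: -1 (unpleasant) .. +1 (pleasant)
    arousal: -1 (calm) .. +1 (activated)
    Labels are illustrative, not from the cited papers."""
    if valence >= 0:
        return "excited/elated" if arousal >= 0 else "content/serene"
    return "distressed/tense" if arousal >= 0 else "depressed/lethargic"

def affect_distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two self-reported states in the plane,
    a crude way to compare reports across sessions."""
    return hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical example: a sustained compassionate mood might plausibly be
# reported as mildly positive valence with low arousal -- diffuse and
# global rather than object-directed, fitting the "mood" definition.
compassion = (0.6, -0.2)
print(quadrant(*compassion))
```

A questionnaire built on this model would ask only for the two coordinates, sidestepping discrete emotion labels entirely.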

By Christine Rosakranse
Professor Nathan Freier
Theory and Research on Tech Comm and HCI
September 29, 2009

Brave, S., & Nass, C. (2007). Emotion in human-computer interaction. In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd Edition. (pp. 77-92). Lawrence Erlbaum.
Feldman Barrett, L., & Russell, J. A. (1999). The structure of current affect: Controversies and emerging consensus. Current Directions in Psychological Science, 8(1), 10-14.
Lutz A, Brefczynski-Lewis J, Johnstone T, Davidson RJ (2008) Regulation of the Neural Circuitry of Emotion by Compassion Meditation: Effects of Meditative Expertise. PLoS ONE 3(3): e1897. doi:10.1371/journal.pone.0001897


The Rise of Truly Emotional Computing

Brian R Zaik

It’s clear from taking a look at past research in the space that emotional responses to computing are a topic of great interest in the world of HCI. But I would also suggest that these articles, along with my outside knowledge of what exists right now in the world, lead me to conclude that emotional computing is a frontier that has yet to be fully explored.

The first questions to answer are the most basic: do people actually develop emotional attachments to computers and computing devices? Do they assign emotional labels and metaphors to their computing experiences? And do they view computers as social actors with whom they may interact in ways similar to how they interact with human beings? Nass et al. focused on what we might term more traditional, text-based computing experiences, providing strong evidence that human beings begin to assign social and emotional context to computers as soon as the machines exhibit characteristics comparable to human ones, such as gender, attitude, disposition, and personality. While today we may take the findings of this 1997 study for granted, I imagine that Nass and his associates made quite an impact on the field of psychology when their research was published. To me, the question now is not, “Can users view computers as social actors with emotions and personalities?,” but rather, “How can we best design computers to harness emotions for the benefit of their users?”

Clark and Brennan showed that grounding is an essential part of human communication. And the process of grounding, as they defined it, involves the coordinated action of all the participants. Being aware of emotions and the emotional context of a communication is also key to grounding, as it will allow both parties of a two-way exchange to understand the implications of a particular message (though this is not always likely to work out perfectly). In order for computers to truly interact with us on a deeper level, they must be able to both elicit emotions from their users AND respond to those emotions in kind. The research study covered in Nass et al. only examines the first of these design requirements – as they state, every kind of interaction between the user and the computer was scripted for the study. While I can believe that even text-based interactions can generate social responses from users, I find it difficult to view that kind of interaction on the same level as human-to-human interaction. It’s one thing to be able to generate a one-sided social response from the user and another to be able to engage user and machine in a two-way, grounded exchange. I would argue that the true future of emotional computing lies in the design of computers that can both study human reactions AND dynamically deal with the reactions they recognize to improve the effectiveness of the human-machine exchange.

The European Union (EU) recently funded a research program aimed at realizing that kind of future. The Humaine project (1) is founded upon the notion that interfaces haven’t developed enough beyond simple user interface mechanics, despite massive gains in computing power over the years. Humaine is an attempt to expand the modern palette of HCI research beyond those mechanical approaches. To do this, Humaine researchers have started back at the ground level to figure out exactly how human beings should interact with computers, and vice-versa. Humaine’s strength lies in its interdisciplinary nature: psychologists, philosophers, artists, and computer scientists all work together to better understand how emotion can be incorporated into HCI design. And one of the first objectives of the program is to develop computer systems that can recognize human emotions using multiple modalities.

These systems have already been tested in museums in Scotland and Israel (2). In these trials, museum guides were issued to visitors. These handheld computers included earpieces and microphones to monitor visitors’ levels of interest in different types of display and react accordingly. As the Humaine program coordinator, Professor Roddy Cowie, points out, “While this is still at a basic level, it is a big step up from a simple recorded message.” This kind of interaction certainly goes far beyond the simple studies that Nass and his colleagues conducted in 1997.

Perhaps we could build emotion-aware computer systems that operate as closed loop machines. The computer would be capable of recognizing and analyzing how human users react to the user experience in front of them, and later shift its behavior to better suit the user. Nass et al. showed that the strength of the computing experience could be enhanced by strongly matching the user’s personality with a similar computer personality. For example, the data supported the conclusion that users with submissive personalities are generally more attracted to computers they view as submissive. We must realize that the computing experience might need to be finely tuned to the specific personality of the user, and that’s why being able to dynamically adapt the computing experience to the exhibited emotions of the specific user makes so much sense. In the lab, scripting experiences and personalities may suffice, but in order for computers to be truly capable of interacting with human users on an emotional level, the experience must dynamically change based on the emotional detections and compensations of both parties.
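The closed loop described above can be sketched in a few lines. This is a toy illustration, not anything from Nass et al. or the Humaine project: it adapts a computer “personality” along the single dominant/submissive dimension used in the Nass study, with a deliberately crude word-counting stand-in for a real affect detector. Every name and threshold here is a hypothetical choice of mine.

```python
def classify_dominance(user_text: str) -> float:
    """Crude stand-in for a real affect detector: score from
    0.0 (submissive) to 1.0 (dominant) based on word choice."""
    assertive = {"must", "now", "definitely", "do", "immediately"}
    tentative = {"maybe", "perhaps", "possibly", "sorry", "might"}
    words = user_text.lower().split()
    score = 0.5
    score += 0.1 * sum(w in assertive for w in words)
    score -= 0.1 * sum(w in tentative for w in words)
    return max(0.0, min(1.0, score))

class AdaptiveAgent:
    """Closed loop: observe the user, update a running estimate of
    their style, then mirror it (similarity-attraction)."""
    def __init__(self):
        self.user_dominance = 0.5  # start neutral

    def observe(self, user_text: str) -> None:
        # Exponential moving average keeps adaptation gradual,
        # so one outburst doesn't flip the whole persona.
        self.user_dominance = (0.8 * self.user_dominance
                               + 0.2 * classify_dominance(user_text))

    def respond(self, content: str) -> str:
        # Mirror the user's estimated style in the phrasing.
        if self.user_dominance > 0.5:
            return f"{content}. Do this next."              # dominant
        return f"Perhaps we could try this: {content}?"     # submissive
```

Swap the word-counting heuristic for voice or facial-expression input and the same loop structure applies: sense, update the estimate, adapt the output.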

One other consideration of the Humaine project is how computer representations can be designed to elicit emotions from users most naturally. Professor Cowie claims that Humaine researchers have “identified the different types of signal which need to be given by an agent – normally a screen representation of a person – if it is going to react in an emotionally convincing way.” This brings up another question relevant to our in-class discussions: are graphical avatars effective ways of displaying and exhibiting emotions in a two-way exchange between a human and a computer? Is an avatar that resembles a person necessary to interface properly with a human being on an emotional level? Nass et al. concluded that even simple text messages are enough to elicit emotions from human users, yet researchers are still trying to figure out the true benefits of more sophisticated ways of representing computers. All the way back in 1987, Apple painted a clear vision of the Knowledge Navigator, a type of computer assistant that could interface at a level similar to how two human beings may interact. In that video, Apple used a graphical, humanlike avatar for the Navigator’s interactions with human users. Is the computer avatar here to stay?

Emotional computing holds promise in allowing computers to better anticipate and respond to human emotions, which may help us to design breakthrough computer interfaces that can adjust to users. The Humaine program in the European Union may be an important building block for this future. For now, though, we’ll be the only ones who hear all those insults we throw at our computers.


  1. McKie, Robin. "Machine rage is dead ... long live emotional computing." The Observer. Guardian News and Media Limited, 11 Apr. 2004. Web. 28 Sept. 2009.

  2. "ICT Results - Emotional Results." ICT Results. European Commission, 3 Apr. 2008. Web. 28 Sept. 2009.

Reading Response: Affective and Social Responses to Computing

In today’s age, the word “social” is thrown around with reckless abandon to describe the rich interpersonal connections that the Internet provides as a platform for us to engage with each other in many ways. Of notable popularity in the last few years have been social web applications like Facebook and Twitter. As a society, we aren’t quite sure what to think about the relationship between computers and human sociology. One thing is for sure: these issues are now becoming a point of household dialog. But researchers have been intrigued by humans’ uncanny ability to socialize for a long time. Consider our historical relationship to inanimate items; while it has been commonly acknowledged that humans may have always established emotional bonds with some objects- especially when they represent some person, event, or idea- that relationship could hardly be classified as social. However, as objects get anywhere even remotely close to the likes of a human, an amazing transition occurs where we begin to socialize with them as if they were humans, expecting manners, and judging their personality characteristics. In the paper “Computers are Social Actors: A Review of Current Research,” Clifford Nass and colleagues present five experiments that serve as evidence that computers are indeed social actors- whether intended or not.

The paper offers two sides that have commonly been taken in discussions of humans’ social behavior toward computers: an attribution of youth or ignorance on the part of the human (which assumes that humans who act socially with computers are either young or- frankly- stupid, and therefore represent a small set of the human population), or an indication that users are actually interacting with the creators/programmers of the computer via proxy. But Nass's studies show that social reactions and affectations toward computers are not confined to a small set of the human population. In fact, they are quite common. And in either case, both of these sides assume that people’s behavior matches their beliefs, which is uncharacteristic of humans and turns out to be false.

Essentially, the experiments that Nass and his colleagues present offer several key takeaways:

  1. Users exhibit politeness to computers, and further expect politeness in reciprocity
  2. Users respond to computer personalities in the same way they respond to human personalities
  3. Users are susceptible to flattery from computers (sincere or not)
  4. Users apply gender stereotypes to computers
  5. Users do not react socially to computers simply because they are thinking of the creators/programmers via proxy (a direct contradiction to one of the two sides that many fall into)

One major theme that ran across all studies was that people were (sometimes defiantly) unaware that they had treated the computer as a social actor. After all, what rational person would think that politeness is important when interacting with computers? Yet time after time (and apparently unconsciously), participants would treat the computers with manners- as if they hadn’t wanted to hurt the computer’s feelings. Participants rated computers as more likable if they matched personality characteristics. They found themselves wooed by the charming flattery of the computers, even when they were told that the computers didn’t know better. It was as if people couldn’t help but treat computers like their long-lost Aunt Maude.

Nass and colleagues built many of the premises for the presented experimental studies on established results from sociology and communication research. Cleverly inserting the word “computer” where “human” was originally used allowed them to bootstrap concepts previously thought to apply only to human-to-human interactions and apply them to computers instead. This may have presented some issues, especially when taking complicated social concepts like the dimensions of personality into the lab. For instance, in the second experiment the researchers based their premises on studies showing that the more similar people are, the more likely they are to be attracted to each other- especially with respect to personality. However, in their study the researchers measured personality along only one dimension: dominant vs. submissive. I find it difficult to rely on a single dimension to validate the hypothesis that similar personalities between computers and humans lead to attraction.

Nonetheless, this particular study did offer a valuable implication in that it showed how words alone can be used to engender personality and foster a relationship between a person and a computer- whether intended or not. Words! These design elements that we use ubiquitously in the things we create. The diction we choose to label something or request input either contributes to a user’s affection for a computer or erodes it, whether we like it or not. The extended implications of this: What happens when we introduce other design elements that foster personality traits in computers (as we arguably have been doing for a long time already)? Anthropomorphic technology can now use facial expressions and embodied gestures to communicate with humans. Does this require even more careful thought on our behalf as we begin to use higher-fidelity ways of communicating (and inherently conveying personality traits), or is our job as “designers of the social” getting easier?

To step back, how does the common acceptance of Internet usage now change how people interact with the computer- that hunk of metal that still sits on our desks and on our laps? Recalling applications like Facebook and Twitter, it now seems arguable that people actually are looking through the computer and reacting socially with human counterparts on the other side. It appears the proximity effect of the messenger that was illustrated to us in the fourth study isn’t so ironclad any longer, and perhaps new experiments need to be conducted to test this.

I wonder how the results of re-administering the five studies would fare today, given the explosion of Web 2.0’s cutesy landscape, riddled with rounded corners, bright colors, and plain-language copy. Surely we rely on many more cues in our designs to let people judge a website’s personality than we did in 1994, when this paper was written. People can go to many a web app and get a load of positive feedback for simply registering for a free account. Yet one can’t help but wonder if the Web 2.0 movement has relied too much on the social role that computers can play. Websites have gotten pretty good at increasing conversion rates, but on average, people stop using their free online accounts just two weeks after signing up. It’s as if they fall in love, but it turns out to be just a fling, or worse, a one-night stand! Have we gotten so good at designing the social into web applications that we’ve become players?! It’s dreadful to think that we may have been spending too much time and effort trying to engage the social aspects of our interactions while forgetting to address the content of those interactions in the first place.

Monday, September 28, 2009

Computers are Social Actors: A Review of Current Research

How do people respond to the computer socially? Do people respond to the computer or other communication technology as they do to other people? Do people behave politely toward the computer? Do people like a computer that shows a personality similar to their own? Do people like to see flattering words from a computer? Do people apply gender stereotypes to computers? And is people’s social response to a computer directed toward the programmer behind the computer, or toward the computer itself?

This article, “Computers are Social Actors: A Review of Current Research”, describes the experiments conducted to answer the questions above. Through five experiments, the authors confirm their hypotheses, finding that “people apply social rules to computers even in situations in which they state that such responses are wholly inappropriate”.

When people interact with an object, their responses and behavior are determined by what the object is and the characteristics it displays. For example, people tend to treat an object that looks like a human, such as a human-like robot or doll, more like a human. Similarly, the reason some people like a robot pet may be that it makes sounds like a dog and behaves like a dog, and these characteristics help people think of it as a pet. However, how people perceive and interact with an object also depends on how they regard it and where they situate it in the context of their lives. Even though interacting with an object takes place in a social context, the interaction is ultimately based on each individual’s recognition of and feelings toward the object.

Some people think of their flower pots as friends; some think of their cars as lovers. For some people, maybe for many, the computer could be their closest friend. I think this is a kind of emotional response to the computer. When thinking about and designing HCI, we have to consider every aspect that affects the interaction between human and machine, so I think the emotional part of the interaction is also very important.

Thinking of this, I asked myself what my computer means to me. I was very sad and cried when my first laptop broke; it was a feeling very similar to when my rabbit died. My laptop brought me all the news, good and bad; it eased my sadness by singing a song and telling me a funny joke when I broke up with my boyfriend; it helped me get good grades in school; and it was with me when I was extremely nervous at my first conference presentation. I was emotionally bonded with my laptop, and I responded very emotionally to it even though I knew “such responses are wholly inappropriate”. So I thought it would be interesting to think about emotionally attractive interface design, or an interface that draws out a stronger emotional response from people. Not just a friendly interface, but an interface that can be a real friend to people. This approach could be especially useful for designing interfaces for children.

Another thought: with whom do we ultimately interact when we interact with computers? It depends on the kind of work we do. When a programmer writes software in a programming language, the work feels more like interacting with the computer itself. But in most cases, we interact with other people through computers. If the computer is a communication tool between people, users would rather feel the presence of the people they are interacting with than feel the computer interface. Personally, I prefer the interface to be concealed as much as possible when I work with it. The authors of the article, however, in some ways consider how to expose the computer and its interface. Before trying to design a socially well-responding computer interface, we might first have to ask what people really want to feel from an interface: a well-designed computer, or the friends they are interacting with directly, without noticing the interface at all.

Finally, I like the article in that the authors see the machine as not just a machine, something that can sometimes have more meaning than a merely useful device. However, I doubt a few of their ideas. In particular, their emphasis on the effectiveness of word-based interfaces seems a little off track. I also think that the less feedback a computer gives, the better; people want feedback from a computer only when it is essential. Error messages are essential, and when we provide them, we should word them as kindly as possible. But I don’t think we need to add gratuitous positive feedback to the system.

Byul Shin

Way back before virtual girlfriends…

Liz Foster
Sept 29, 2009

Maybe it is because we’ve moved on quite a bit since Nass et al conducted their study in 1994, or maybe it is because I hold frequent, out-loud, one-way conversations with my computer, but when I read “Computers Are Social Actors” I thought “Really? We needed a study for that?” Of course we did, and the research conducted by Nass has much more validity than my intuition, but today there is abundant evidence that humans are conducting deep and often intimate relationships with technology.

Opportunities for a “no strings attached” relationship via computer or cell phone abound. Imaginerygirlfriends.com uses technology to deliver “a completely fictitious, yet authentic looking relationship with the girl of your choice”. In this case, although the relationship is not real, the person behind the e-mails, letters, chats, and so on is. With V-girl from Hong Kong’s Artificial Life, you can download and animate a completely virtual girlfriend on your cell phone. The V-girl is, in fact, nothing but computer code, but you can earn her electronic love by being on time for dates, writing her love letters, and plying her with gifts that cost the phone user real money. Let her down, and she’ll stand you up or dump you.

This is an overt example of how moods and emotions can be shaped and even exploited to keep users focused on something they perceive to be important. In the case of V-girl, users are motivated by emotion (or maybe sentiment?) to return to the interface again and again, and even manipulated into spending money, or else risk losing the connection and affection they believe they are getting from this virtual person. In “Emotion in Human-Computer Interaction”, we read that “negative events, which tend to be highly arousing, are typically remembered better than positive events” (p. 81). So, to ensure that the user keeps coming back, it might behoove the interface designers to guarantee there is a breakup in the user’s future. In a very basic way, the interface could be used to perceive the emotions of the user: if he buys gifts for V-girl, he’s shown his hand, and we know that he’s hooked.

Of course, it may be that many users of these technologies are not expending actual emotions on their virtual girlfriends and see V-girl as just another game, but Nass’s work leads us to confidently believe that many users are in fact gaining emotional satisfaction or disappointment from these encounters. I’ve seen how involved my 12-year-old daughter gets with her Nintendog; I can only imagine how a lonely 17-year-old feels when he’s dumped by his virtual girlfriend.

The implications of this are worrying. Computers already have an influence over our lives: what we purchase, how we perceive our status, how well we perform at work. But if we are viewing computers as more than things, if we are endowing them with affection or anger or jealousy, then we may be giving them too much power over our lives. We spend huge amounts of time interacting with machines at work, and then come home and log in again to socialize. It’s so easy, and face-to-face relationships are messy. So why not avoid that messiness by having your relationship with technology, where, if things don’t go your way, you can reset and start again? Virtual girlfriend nagging you? Switch her off! The Internet supposedly led us into a world with no borders, yet it seems that technology sometimes pushes us farther apart.

I don’t know of any studies on whether behavior in virtual relationships translates into real-life behavior, but it does seem that, under the right circumstances, there is the potential for us to allow virtual relationships to become replacements for real ones. A few months ago, the NY Times ran an article on the phenomenon of “2-D”, a Japanese social curiosity involving adult men who form extreme emotional and sexual attachments to a stuffed pillowcase with a cartoon of a young woman printed on the fabric. These 2-D relationships are carried on to the exclusion of real relationships with other humans. Of course, the pillows can’t talk (perhaps that’s the appeal?), and they couldn’t exactly be called “technology”, but this phenomenon illustrates the point that it is possible for significant numbers of people to transfer their emotions to inanimate objects that have no flesh-and-blood person behind the representation.

I wonder how Clark & Brennan would view these 2-D pillow relationships. The authors present the idea that grounding in communication, that is, developing "mutual knowledge, mutual beliefs, and mutual assumptions”, is essential for communication between two people. They extend that assertion to humans’ relationship with technology to determine how we look for grounding in our technological interactions. It is through grounding that we confirm that our communication has been received and understood. The techniques used for grounding in one medium may not be useful in another: each medium has its constraints, and the cost of different techniques varies with the medium. For instance, in our distance discussions, grounding can’t be found in synchronicity; there is a high cost in trying to time our replies precisely when there are more than a dozen participants. A conference with only two participants would pay a much lower asynchrony cost. Because we can’t see each other, we also pay a high speaker-change cost. We can see who has typed previously, because each comment is preceded by the speaker’s name, but when participants are formulating their thoughts, there are no facial or gestural cues; we can’t see the speaker looking at the person he or she intends to respond to. We see a prompt at the bottom of the input box saying “so-and-so is typing”, but sometimes that comment never appears, or the speaker is responding to a different discussion chain than expected.

It seems to me that, as a medium, pillows come with a significant set of constraints, and there are several significant costs associated with a 2-D relationship. On the other hand, you could argue that there can be no breakdowns in communication when one person has complete control over the relationship (and pillows can’t have an opinion).