Tuesday, November 11, 2008

On robots and morals

Responding to articles written by my own professors two weeks in a row: awful or awesome?

The articles on children and technology this week, especially the two studies asking what can be seen in interactions between children and personified technologies (admittedly also colored by our recent study of value-sensitive design), raised some interesting questions, some of which fall outside the realm of Human-Computer Interaction and some of which extend beyond the topic of children and technology. Instead of writing a straightforward essay attempting to encompass all of these questions, I'd like to break them down one by one with a brief discussion of each.

1. Do moral judgments in interactions with "personified agents" generalize to all interactions with technology?

This question was inspired by Freier's use of a Half-Life 2 model for his morality study. The model was given a voice separate from that of the game and played a researcher in a game of Tic-Tac-Toe. Children judged insults from the researcher to be moral violations, especially when the digital player spoke up for her own rights and feelings. The question above can be asked specifically in this case: if, shortly after, the children taking part in the study played Half-Life 2, and the model who had previously played Tic-Tac-Toe was now a gun-toting member of the opposing team, would the children have any qualms about shooting her? Would the moral reaction to the character carry over from one context to another?

More generally, once a child has encountered enough digital models that advocate for themselves, would the child start to ascribe morals to relations with all digital models? After all, characters in video games often cower, or protest, or try to fight back as we carjack them, shoot at them, stab them, or steal from them. Within the context of the game, we treat these acts as conventional violations rather than moral ones. However, the children in the study maintained that certain acts were morally wrong even when presented with the possibility that other cultures did not consider them so. Would the cultural rules of a video game be perceived in the same way?

2. Are these moral reactions limited to children of the current generation, who are growing up surrounded by technology?

I would be interested to see how the AIBO study might be replicated among members of different generations. My own interactions with similar lifelike robots have been a strange mixture of curiosity and revulsion. I'm interested in the biological mimicry, but at the same time, such...creatures?...still fail to transcend something similar to Mori's "uncanny valley." Stuffed dogs, in my mind, are inanimate objects that we can comfortably project our moods upon. Robot dogs that need regular stimulation to maintain "mood," "health," or "happiness" are a different...well, they're a different animal entirely.

I will always prefer a real dog to a robot dog, and I might assume that others of my generation would feel the same. Part of the appeal of a real pet is that its dependency upon its owner forges a relationship with consequences. Feeding a dog regularly makes it like me, which makes me like it in return. If I forget to feed my robot dog, I change the batteries and it is fine (or, in the case of a fellow student with a Webkinz, I can starve it, give it a spa treatment, and end up with more points than I would have earned by feeding it regularly). Note my use of "it" as a pronoun, unconscious until a second read-through.

However, children who grow up surrounded by technology, in a world where a cellphone is a necessary appendage, where more knowledge comes from the internet than from teachers, and where hearts over the head of a Tamagotchi equal reciprocated love, may inevitably see such anibots differently. So I think two further studies would be interesting: one of older people interacting with the AIBO, and one of children from this generation a few years down the road, interacting with similar technologies as teenagers.

3. Are moral reactions to "personified agents" fair or valid?

A different way to ask this might be: can the child subjects of these studies see the strings and the hands of the puppetmaster? All of the personification in avatars, robots, and digital people comes from real researchers and programmers. The technologies are not human and cannot relay values or emotions that were not implanted in them by their creators. Are children aware of this at all in their interactions with such technologies? Are they responding naturally, or are they responding with the knowledge that they are being manipulated? Do children really think the AIBO gets pleasure from eating, or are they playing along? Or does it matter?

Another question might be how children would react if the study of morals were reversed. What if the computer were in charge of placing the X's and O's and cheated the researcher? Would the children perceive that as a moral violation? Do these children believe that the personified agent is capable of making moral decisions as well, or only that real people should behave morally in their interactions with such agents? What qualifications would have to be met to describe a complete moral relationship?

4. Once we ascribe morals to our interactions with technology, can they still function as tools?

A friend of mine is living in Hollywood working on spec scripts, and I recently worked with him on punching up a scene where [copyright / trademark / stealing prohibited!] a man is too embarrassed to ask his female-voiced GPS for directions to an adult video store, so he asks how to get to a convenience store across the street. When he turns left into the parking lot of the video store, the GPS voice seems to admonish him for his trickery. Comedy aside, various technologies are used to perform morally grey tasks that some would argue are necessary. If we get to the point where computers respond to natural speech and talk back to us, as demonstrated in the Apple commercial we viewed in class, would it be morally wrong to, for instance, have a "personified agent" who mediated the process of putting a bolt through a cow's skull at a slaughterhouse? Who controlled the process of lethal injection? Who wielded the weapons system on a tank? Who fired nuclear missiles? Would we program such technologies with voices and reactions appropriate to their tasks? Would we make them so that we could lessen our guilt over performing such tasks ourselves?

5. What will this lead to in the crazy science fiction world that will inevitably be our future?

OK, so perhaps some of that last question fits more into this category. What future are we working toward in conducting our research? On the surface, these studies wanted to ascertain what effect technologies have on the development of children, but the questions they raised went beyond that for me. Freier offers in his conclusion the following observation: "The implications of the alternative design [digital models that cannot self-advocate their own rights] are that children will come of age engaging in a significant number of social interactions that lack any moral feature possibly increasing the likelihood that children will not construct a rich understanding of the intimate relationship that exists between social reciprocity and morality." While the study offers evidence that digital interactions can be designed to foster moral development, as the above questions show, I wonder about the other half of the equation, in which we begin as a culture to ascribe moral agency to our own digital creations.

Do computers have moral rights, or are they limited to the morals we program them to have? Do digital representations of people online have the same rights as their offline counterparts? Can relationships with digital people and animals offer the same benefits as real interactions? Do personified agents dream of AIBO sheep?
