Wednesday, October 21, 2009

Value Sensitive Design

Daria Robbins

COMM 6480

Value Sensitive Design

Overview of the socio-technical problem space:

The problem space for this paper is the dangerous conditions created by drivers who use a cellular phone while driving. The syndrome is called “distracted driving” and, according to the Harvard Center for Risk Analysis, is attributed to 636,000 crashes, 330,000 injuries, 12,000 serious injuries and 2,600 deaths each year (Harvard Center for Risk Analysis). This distracted state of mind includes using the cellular phone to talk and/or text while driving.

Nearly 80 percent of crashes and 65 percent of near-crashes involved some form of driver inattention within three seconds before the event.  Primary causes of driver inattention are distracting activities, such as cell phone use, and drowsiness (National Highway Traffic Safety Administration).

Though cellular phone use while driving may not be the direct cause of any given accident, using a cell phone can degrade driver performance, attention and reaction time through cognitive distraction. This documented risky behavior has the potential to result in serious consequences, not only for the driver, but for anyone within the vicinity of the driver and his or her vehicle.

The Stakeholders:

  • The driver using the cellular phone
  • Other drivers
  • Pedestrians
  • Passengers, both in the cell phone user’s vehicle and in other vehicles, who are potential victims
  • Families of the resulting victims
  • Law enforcement officers and other first responders who respond to the resulting accidents
  • Insurance companies that pay out for personal injury and property damage from resulting accidents

The value of import in this problem space is personal and public safety. As citizens, we have a responsibility to one another to “do no harm.” This position goes beyond physical harm or property damage; it also encompasses the collateral harm done to individuals, families and communities when distracted driving results in injury and damage. Furthermore, human interaction with technology should not result in harm to the user or others.

It is not clear from current research whether hands-free use of a cell phone is “safer” than hand-held use, and some research suggests that the cognitive distraction is similar. Additionally, the driver may miss audio or visual cues necessary to avoid an accident because he or she is attending to a conversation rather than to the task at hand – the operation of a motor vehicle.

The conflict between the stakeholders is a matter of personal freedom. However, it is clear that the personal freedom of the reckless driver using the cell phone while driving infringes upon the rights of all other stakeholders to be safe.

A solution? There really isn’t one.

We have crossed that bridge and there is no turning back. Ultimately, the technology will have to advance to a kind of “smart” hands-free system that can adjust the volume based on the traffic conditions.

Many states currently have legislation making hand-held cell phone use while driving (talking and/or texting) illegal. However, enforcing these laws puts an added burden on law enforcement, and violations are difficult to prove.


VSD: Near Field Privacy


In his book "Philosophical Dimensions of Privacy," Ferdinand Schoeman describes privacy as a claim, entitlement or right of an individual to determine what information about himself or herself can be communicated to others. Privacy is considered a household term (no pun intended - the household is often riddled with privacy issues), yet there are nuances in how people think about it. Some classify privacy as an issue of security and control (Parent, 1983), while others project it as a materialization of human dignity - the mere fact that we want some things to be private and some things not is a cornerstone of many facets of humanity (Bloustein, 1964). In either case, privacy has undeniable value to many, and painstaking steps have been taken toward protecting it in continually developing areas of advancing civilization. Three common approaches to preserving and protecting privacy have been identified in value-sensitive design methodology (Friedman, 2007):
  1. Inform people when and what information about them is being captured and to whom the information is being made available
  2. Allow people to stipulate what information they project and who can get hold of it
  3. Apply privacy enhancing technologies (PETs) that prevent sensitive data from being tagged to a specific individual in the first place
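As a purely illustrative sketch, the three approaches can be combined in a small consent-and-filtering layer. All names below (`PrivacyPolicy`, `request_data`, the field names) are hypothetical assumptions, not any real privacy-enhancing-technology library:

```python
import hashlib

class PrivacyPolicy:
    """Hypothetical per-user policy: which fields the user has agreed
    to share, and with which recipients."""
    def __init__(self, allowed_fields, allowed_recipients):
        self.allowed_fields = set(allowed_fields)          # approach 2: what
        self.allowed_recipients = set(allowed_recipients)  # approach 2: who

def request_data(user_record, policy, recipient, fields, notify):
    # Approach 1: inform the user what is being captured and by whom.
    notify(f"{recipient} is requesting: {sorted(fields)}")
    if recipient not in policy.allowed_recipients:
        return {}
    # Approach 2: release only the fields the user has stipulated.
    released = {f: user_record[f] for f in fields if f in policy.allowed_fields}
    # Approach 3 (a PET): swap the raw identifier for a pseudonym so the
    # released data is not tagged to the individual's real identity.
    if "user_id" in released:
        released["user_id"] = hashlib.sha256(
            released["user_id"].encode()).hexdigest()[:12]
    return released

notifications = []
policy = PrivacyPolicy({"user_id", "zip_code"}, {"transit_operator"})
record = {"user_id": "alice", "zip_code": "02138", "ssn": "000-00-0000"}
released = request_data(record, policy, "transit_operator",
                        {"user_id", "ssn"}, notifications.append)
# 'ssn' was never stipulated, so it is withheld; the id is pseudonymized
```

Note that in this sketch the pseudonym is deterministic, so repeat requests remain linkable to each other; a stronger PET would use per-transaction pseudonyms.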
With the breakdown of technological barriers, a network of available information has begun to flourish - some of it authorized and acknowledged by its owners, and some not. One particular technological area of concern is Radio Frequency Identification (RFID), a technology whereby modest amounts of data can be stored on a tiny tag - usually integrated within or attached to owned objects of interest - to be read by active radio fields (readers), and in some cases written to. RFID and privacy have had a tumultuous relationship, especially because information can be accessed and exchanged without a line of sight. Both readers and tags can be completely hidden from view, making it difficult, if not impossible, for the owners of the scanned objects to even be aware that such a process is taking place (Langheinrich, 2008). Additionally, the range at which unauthorized tag readout ("tag-sniffing") can occur is fairly large with the help of wireless communications. In April of 2008, a search for scholarly articles on RFID privacy and security yielded over 700 titles. As of October of 2009, over 17,000 articles are returned.

Near Field Communication (NFC) is an extension of RFID technology. It differs from traditional RFID communication protocols in that it only occurs over a very short distance (under 4 inches). Notable current instantiations of NFC include the Oyster card public transport system in the United Kingdom and the payWave credit card augmentation in American banking and processing interactions. While RFID may be seen to uphold values of convenience, openness, and process efficiency (e.g., government-issued passports with RFID tags, as well as retail chains that use RFID tags in products for inventory tracking, are able to cut costs, reduce time between entities, and generally streamline the flow of information), NFC and the protocols that can be ascribed to it (Paci, 2009) can be seen as an attempt to uphold privacy. Near fields inherently require that an interaction take place very close to one's object of choice, which has increasingly become the mobile phone. (ABI Research has concluded that at least 20% of mobile phones will be NFC-enabled by the year 2012; currently, only a small number of models from manufacturers such as Nokia are enabled for near field communication (Gallen, 2008).) Mobile exchanges of information that one may want to keep private, such as mobile contactless payments, must take place very close to one's mobile device. In this way, a sense of private space is projected upon the transfer of information. Further, because near field communication can be tied to powerful processing technologies within a mobile device, the first two approaches to preserving privacy can be realized: the user is informed when and what information is being captured and to whom it is being made available, and the user can stipulate what information, if any, they broadcast and who is allowed to get hold of it.
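A minimal sketch can show how the proximity requirement plus an explicit on-device prompt realize those first two privacy approaches. The function name, the token value, and the 10 cm range constant below are illustrative assumptions, not a real NFC protocol stack:

```python
NFC_RANGE_CM = 10  # near field protocols work only within roughly 4 in / 10 cm

def nfc_payment(distance_cm, amount, merchant, confirm):
    """Hypothetical contactless payment. The physical tap (proximity check)
    plus an on-screen prompt realize the 'inform and stipulate' steps."""
    if distance_cm > NFC_RANGE_CM:
        # Out of near-field range: no radio link, so nothing can be exchanged.
        raise ConnectionError("reader out of near-field range")
    # Inform: the phone tells the user what will be shared and with whom.
    prompt = f"Pay {merchant} ${amount:.2f}? A one-time card token will be sent."
    # Stipulate: the user decides whether the exchange happens at all.
    if not confirm(prompt):
        return None
    return {"merchant": merchant, "amount": round(amount, 2), "token": "tok_1234"}
```

Contrast this with long-range RFID, where the `distance_cm` check and the `confirm` callback simply do not exist: the readout can happen without the owner's knowledge or assent.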
The NFC extension of RFID technology, and the technological ecosystem that embodies and executes the private exchange of information, is what allows privacy to be maintained, even amid the chaos and paranoia that radio frequency identification in general imbues.

In the terms of value sensitive design, we can attempt to identify the stakeholders involved. The stakeholders who directly interact with the technology are users of NFC-enabled mobile phones. Instead of considering their phone as a long-range communication device, they will come to see it as the gatekeeper to much of their identity - much of which they would like to keep private. Indirect stakeholders are those who may benefit from the fact that these users can use their mobile devices to communicate and transmit information. An example would be a restaurant owner who allows his customers to pay their bills without requiring card swipes, paper receipts, or signatures, presenting him with definite cost savings. Further, he may allow people to touch their mobile devices to an advertisement poster to collect coupons. Only those direct stakeholders who feel inclined to receive a coupon would do so, but the owner would still be (positively?) affected by his endorsement and encouragement of the use of NFC.

This example shows a somewhat clear delineation between direct and indirect stakeholders of NFC with respect to the value of privacy. However, stakeholder identification itself sheds light on the very issue that NFC addresses within the realm of privacy. When radio frequency is the primary vehicle of information transmission, given the lack of line-of-sight and the possibility of unauthorized tag-sniffing, the typical stakeholder is an average consumer who is both direct and indirect. For example, a consumer may make a purchase at a gas station using a payWave credit card - directly using the RFID functionality that the card permits. However, they are highly susceptible to unknown and unwanted tag-sniffing, whereby a reader may be able to receive purchase records or bank account information. NFC aims to make it always clear to users whether they are a direct stakeholder or an indirect one.

The value of privacy embedded in NFC comes at the cost of openness and conflicts with location-independence. Since NFC holds privacy so high and forces interactions to occur within people's personal space, it makes it difficult for people or systems that aren't situated near each other to share information. In fact, in cultures where personal space is very small, privacy may still be a concern. Conversely, in cultures where openness and convenience are much more important than privacy, a technology like NFC would seem tedious and unintelligent compared to ones like WiFi. With regard to location-independence, and precisely because NFC requires a 4-inch proximity in order to exchange information, it can be used to ensure attendance at various activities. For example, it is conceivable that a school may use NFC to allow children and teens to "touch" in for attendance, so that all a teacher must do to take classroom attendance is check the records. However, the value imposed here is that attendance is important in the first place. How does this play out against education infrastructures where presence at a specific location isn't important, but rather online presence and respected assignment deadlines are?

In summary, through the lens of value sensitive design we can see how privacy is imbued and preserved within the usage and interactions of near field communication - especially within its increasingly common vehicle, the mobile cellular device. NFC retains many of the benefits and much of the intelligence that RFID provides, but applies a much more thorough materialization that respects people's ability to control how, and with whom, their information is shared.

Sources:

Bloustein, E. (1964). Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser. New York University Law Review, 39, 962-1007.

Friedman, B., & Kahn, P. H., Jr. (2007). Human values, ethics, and design. In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd Edition. (pp. 1241-1266). Lawrence Erlbaum.

Langheinrich, M. (2007). A Survey of RFID Privacy Approaches. Springer-Verlag London Limited.

Paci, F. (2009). Privacy-Preserving Management of Transactions' Receipts for Mobile Environments. Proceedings of the 8th Symposium on Identity and Trust on the Internet. Gaithersburg, Maryland, 73-84.

Parent, W. (1983). Privacy, Morality and the Law. Philosophy and Public Affairs, 12, 269-288.


A Value Sensitive Design Approach to Airport Screening Systems

A brief overview of how to improve the airport screening process

Airport security is at the forefront of the traveling process for travelers entering and exiting the airport.  Long lines, along with passenger and baggage checkpoints, are core elements that affect the traveler’s experience.  Aside from my personal issues with specific screening processes, most of the people I know have experienced similar airport screening issues at some point in their lives, in which they were either harassed, failed the security checkpoint test or, in most cases, lost their ticket or identification card.  Especially after the 9/11 attacks, the TSA (Transportation Security Administration) has been mandated by law to appropriately screen air travelers to ensure that certain items and persons prohibited from flying cannot board commercial airlines (Security Screening, 2009). This federalizing of airport security was built on two assumptions: first, that all passengers are equally suspicious and should receive the same scrutiny, and second, that the principal purpose of airport security is to keep dangerous objects (e.g., knives, guns and bottles) off airplanes (Poole, Carafano, 2006).  In addition to this increasingly federalized security process, technical security screening improvements have increased as well.

Socio-technical Problem Space

In a published article by Matthew L. Wald of The New York Times, screening technology in airports has been a topic of discussion among TSA and government officials, in regard to the “reshaping” of airport screening technology in the U.S.  Wald points out that the U.S. has moved toward reshaping airport screening technology by implementing new computer systems that rely primarily on government databases, in which the personal information of passengers would be prescreened alongside the traditional screening process.  Wald notes that the goal is to select about 4 percent of all passengers for more intense security, compared with the 14 percent identified by older systems.  However, even with these improved technical changes, many subjective issues remain despite this increased effort to protect passengers.

Cases of random or biased selection are still apparent in the screening process; passengers are still “wrongfully” identified as “terrorists” depending on whether their name is similar to a name on the F.B.I. terrorist watch list, whether they make last-minute flight changes, or whether their physical appearance or ID raises suspicion for the TSA administrator (Security Training).  That is why my solution to this problem is a design approach based on a “fingerprinting” screening process, in which people would no longer have to show forms of identification and boarding passes when going through the security checkpoint or when entering the plane - just their thumbs.

Questions that could arise:

1. How much would these systems cost?

2. Who would design and issue out the new systems to airports?

3. How is the government going to collect everyone’s fingerprints?

4. What type of fingerprinting would airports take?


1. How much would these systems cost?

According to Poole and Carafano’s “Time to Rethink Airport Security” article, government investments, along with taxpayer and airline traveler funds, have been used to support the TSA’s annual budget, which in 2005 was primarily devoted to new baggage and passenger screening systems. My solution would be for the government to revise the budget around systems already supported in law enforcement budgets, since law enforcement agencies already use this type of screening method.

2. Who would design and issue out the new systems to airports?

SAIC (Science Applications International Corporation) is a company that specializes in solving critical problems with innovative applications of technology and expertise (Airport Security, 2009).  In addition, SAIC has been recognized as a leader within the Airport and Cargo security field. In particular, SAIC has experience in Airport Security Systems Integration Design and Installation, Smart Cards and Biometrics and Information Security (INFOSEC).  Airport Security Systems Integration Design and Installation could be applied to the design of the new screening system, because SAIC project managers are supported by Professional Engineers, who are up to date on the latest technologies and their deployments with varying environmental conditions (Airport Security).  Smart Cards and Biometrics could also be applied to the development of the new screening process, because they deal with the storage of personal data for access. Lastly, SAIC’s Information Security (INFOSEC) could be applied to the design concept of the new screening system, because it deals with information security and offers a detailed pragmatic approach of process analysis and implementation (Airport Security).

3. How is the government going to collect everyone’s fingerprints?

A possible solution would be for the government to require individuals receiving identification cards to have their fingerprints taken.  In addition, anyone who already has a form of identification, but without the new fingerprint record, would be required to make the update by a specific date.  This would allow the government to access everyone’s fingerprints and identification information from a database system that connects individuals’ information to their fingerprints at any given time.
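The enrollment-then-checkpoint flow described above can be sketched in a few lines. Real fingerprint matching is approximate (minutiae-based), not an exact lookup; the class below compresses that into a salted hash table purely to illustrate the proposed flow, and every name in it is a hypothetical stand-in:

```python
import hashlib
import os

class FingerprintRegistry:
    """Illustrative sketch of the proposed fingerprint-identity database.
    A salted hash of the print template is stored instead of the raw print,
    so the stored keys are useless outside this registry."""
    def __init__(self):
        self._salt = os.urandom(16)
        self._by_print = {}

    def _key(self, template: bytes) -> str:
        return hashlib.sha256(self._salt + template).hexdigest()

    def enroll(self, template: bytes, identity: str) -> None:
        # performed when a person receives or renews an identification card
        self._by_print[self._key(template)] = identity

    def checkpoint_lookup(self, template: bytes):
        # at the checkpoint a traveler presents a thumb instead of ID + pass
        return self._by_print.get(self._key(template))

registry = FingerprintRegistry()
registry.enroll(b"whorl-pattern-0042", "Jane Q. Traveler")
match = registry.checkpoint_lookup(b"whorl-pattern-0042")    # identity found
unknown = registry.checkpoint_lookup(b"never-enrolled")      # returns None
```

In a deployed system the exact-match dictionary would be replaced by a tolerant minutiae matcher, since two scans of the same thumb never produce byte-identical templates.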

4. What type of fingerprinting would airports take?

There are three distinct types of fingerprint impressions that can be recovered for identification purposes: patent prints, plastic prints and latent prints (Fingerprints, 2009).  Patent prints are visible prints that occur when a foreign substance on the skin of a finger comes into contact with the smooth surface of another object (Fingerprints).  The foreign substances contain dust particles, which stick to the ridges of the fingers and are easily identifiable when left on an object.  Plastic prints are visible, impressed prints that occur when a finger touches a soft, malleable surface, resulting in an indentation (Fingerprints).  These prints are easily observable and require no enhancement in order to be viewed.  Lastly, latent prints are fingerprint impressions deposited on a surface or an object that are usually invisible to the naked eye (Fingerprints).  These fingerprints require enhancement in order to be viewed before they can serve as a means of identifying the source of the print.

Latent prints have proven extremely valuable when applied to the identification of their sources.  Therefore, because latent prints seem to be highly effective and are also harder for people to see or alter, this form of fingerprinting would best fit the proposed screening process. The only issue with this type of fingerprinting is that airports would have to add to their annual security budgets for enhancement devices that can scan such prints.  If the budget becomes an issue, plastic prints would be an alternative option, because they are easily visible and easy to scan for, yet they might raise more security issues in regard to people being able to alter their prints.  The decision about which type of print to use would be based upon government and TSA annual budget availability.

 

Implicated Human Value: Freedom from Bias

An implicated human value in the new fingerprinting screening process is “freedom from bias.”  According to Friedman and Kahn’s “Human Values, Ethics and Design” chapter, bias refers to systematic unfairness toward individuals and takes three forms: preexisting social bias, technical bias and emergent social bias.  With the new screening process, travelers would be free from the present airport security biases that Friedman and Kahn point out.

 

Direct Stakeholders

Passengers/ Travelers:

The people who travel are the most directly involved, because they are the ones that would need to go through the new security screening process and have their fingerprints taken.

Benefits:

  • It would help decrease long lines at security checkpoints for travelers
  • There would be a decrease in harassment from TSA administrators
  • Travelers would not have to worry about losing boarding passes once they are cleared from the security checkpoint. 

Airport worker/ Airline ticket agents/ TSA employees:

The people who work at the airport would also be directly involved, because they would have to be trained to issue and identify scanned fingerprints. In some cases the airport employees could also be considered as indirect stakeholders.

Benefits:

  • Each of these groups would still keep their jobs, but would no longer have to deal with as much stress or tension during the security and ticketing processes.
  • Overall it would be a decrease in workload and worry for them.

Indirect Stakeholders

Government:

The government would be indirectly affected because it would have to keep the database updated and make the information available to airports, but would not have to be a part of the actual screening process.  The government also would have to collect fingerprints from individuals when they receive an identification card.  In some cases, the government could also be a direct stakeholder.

Works Cited

“Airport Security.” SAIC. 20 October 2009.

“Fingerprints.” 20 October 2009.

<http://www.fingerprinting.com/types-of-fingerprints.php>

Friedman, B., Kahn, P. H., Jr., & Borning, A. “Value Sensitive Design and information systems.” In P. Zhang & D. Galletta (eds.), Human-Computer Interaction in Management Information Systems: Foundations, (348-372). Armonk, New York: M.E. Sharpe, 2006.

Poole Jr., Robert W., and Carafano, James. Time to Rethink Airport Security. 26 July 2006.

<http://www.heritage.org/Research/HomelandSecurity/bg1955.cfm>

“Security Screening.” Transportation Security Administration. U.S. Department of Homeland Security. 20 October 2009.

<http://www.tsa.gov/what_we_do/screening/index.shtm>

U.S. Government Accountability Office, Aviation Security: Screener Training and Performance Measurement Strengthened, But More Work Remains, GAO–05–457, May 2005.

<http://www.gao.gov/new.items/d05457.pdf >

Wald, Matthew L. “U.S. ‘Reshaping’ Airport Screening System.” The New York Times. 16 July 2004.

 

Synths and Accessibility

INTRODUCTION

Though popular synthesizers, like the Minimoog, had been around since the early 1970s, the advent of standardizations such as MIDI, synthesizer polyphony, and digital interfaces and digitally controlled oscillators (DCOs) allowed for greater flexibility within the field of sound synthesis. These developments also allowed synthesists to store data digitally, eliminating the need for complicated charts that tracked every single parameter change on the knob-laden synthesizers that had been the standard only a decade before. The days of the Rock and Roll keyboardist deftly navigating an array of eight keyboards to acquire a different sound during one song were over.

PROBLEMS

While new technology often allows for new flexibility and greater opportunities to experiment, it inadvertently crafts a problem space in its wake. The new technological breakthroughs in the realm of sound synthesis brought wonderful results, but they also brought about dialogue regarding the way in which the electronic musician could better communicate with his or her peers. One such discussion surrounded the use of "presets," defined as "out of the box" sounds that come programmed into the synthesizer. As David Wessel writes in an article for the University of California at Berkeley, "The sad truth is, many musicians never go beyond the factory presets" (Wessel). He continues:

"there are many [synthesizer] programmers who strive for new sounds with more expressive control. These programmers must struggle with various idiosyncratic and awkward front-panel programming systems. Patch editors help, but the whole enterprise lacks coherency, consistency, and expressive power. The time has come for a common programming language to describe the behavior of our synths" (Wessel)

What Wessel realized was that musicians are faced with difficulties just getting their machines to make their desired sounds. He argued for a standardized language, just like computers themselves follow. (Wessel)
It is easy for some to get a grasp of basic synthesis concepts. Oscillators, amplifiers, and filters - all elements of synthesis - require no special electrical knowledge nor do they require advanced music theory. Rather, they often come in the forms of knobs that can be freely tinkered with. Nonetheless, as one progresses up the synthesizer learning curve, things grow more and more complicated. This is especially true when you look at the great diversity of hardware and software synthesizers available. Add on DAWs (digital audio workstations) like Digidesign's Pro Tools, with over 900 pages of reference and how-to's in its manual, and the learning curve becomes staggering. How can we truly get all this wealth of information to users, especially non-professional users who still deserve to be afforded the same opportunities to create and express themselves as professionals?
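The oscillator-filter-amplifier trio mentioned above can be sketched in a few lines of Python. This is a toy illustration of the classic subtractive chain, not any real synth engine; the smoothing coefficient `alpha` and `gain` values are arbitrary stand-ins for the knobs a player would turn:

```python
import math

SAMPLE_RATE = 44_100  # samples per second

def oscillator(freq_hz, n_samples):
    """Sine oscillator: the raw tone source."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

def low_pass(samples, alpha=0.1):
    """One-pole low-pass filter: darkens the tone by smoothing the signal."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def amplifier(samples, gain=0.5):
    """Amplifier: the final level control in the classic chain."""
    return [gain * s for s in samples]

# The classic subtractive chain: oscillator -> filter -> amplifier.
note = amplifier(low_pass(oscillator(440.0, 1024)))
```

None of this requires electrical engineering or music theory, which is exactly the point: the concepts map onto three knobs. The staggering learning curve arrives later, when hundreds of such parameters interact across idiosyncratic front panels.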


VALUES IMPLICATED

Ultimately, the question at hand is one of accessibility. However, a question immediately arises: what is accessibility? The answer takes on many forms. According to the United States' Rehabilitation Act of 1973, all federal agencies must make their computer and electronic resources accessible to people with disabilities. This amendment, Section 508, has served as the common template for all government agencies, and helps to ensure the rights of the disabled (Section). Most designers, when facing the task of designing for disability accessibility, are confronted with many shortcomings, relying mainly on "past experiences and best practice" with little experimental evidence (Stephanidis 1). These techniques also tend to address only the issues of very specific user groups (such as those with visual or motor impairments) (3).
More relevant to a study of synthesizers is the principle of "universal usability." According to Batya Friedman and Peter H. Kahn Jr., "universal usability refers to making all people successful users of information technology" (Friedman 1253). It is, according to Friedman and Kahn, a sort of freedom from biases that designers may or may not take into consideration when creating a new product. They identify three major areas of research and design that face challenges with respect to universal usability:
1. Technological Variety
2. User Diversity
3. Gaps in User Knowledge (1253)
Through these challenges, Friedman and Kahn assert that universal usability is not always a moral issue -- some things simply do not need to be made accessible. They consider the example of the famed television program "I Love Lucy;" it is not a "moral good" that we make reruns accessible (1254). However, moral imperatives suggest that many things should definitively fall under the domain of universal usability. In conjunction with Section 508, they use the example of federal statistics being available only online; it would be obviously immoral to restrict this information to those who can access a standard computer without special modifications.
Universal accessibility, too, falls into the category of universal usability. Stephanidis et al. find the principles to be much broader than designing for people with "special needs," such as the disabled or the elderly. Rather, they feel that the design implications of new technology have grown to bring together a wider range of users with an even wider range of needs, extending accessibility problems beyond the traditional views (Stephanidis 3). As designers, when we fail to address these expanding needs, we fail in our moral obligations to recognize users with different "abilities, requirements, and preferences" (3).
Why is this a moral need? As mentioned above, sometimes moral obligations are not part of the creative process or the product itself. However, as Friedman and Kahn stress, it is not only a moral idea but a good idea to follow: "Moreover, universal access with ethical import often provides increased value to a company" (Friedman 1254). They use examples from a study of a communications company to show how an expanded take on accessibility creates a circumstance in which both the user and designer benefit.
Likewise, Stephanidis et al. apply the principle of universal design, or "design for all." They write that universal design "promotes a design perspective that eliminates the need for 'special features'" (Stephanidis 3). Additionally, the researchers recognize that while one broad solution to encompass everyone is an attractive prospect for designers, it will undoubtedly include "different solutions for different contexts of use" (3).
One thing to remember and acknowledge, especially when dealing with synthesizers (which often have fixed interfaces with few options for customization), is that, as Stephanidis et al. note, "no single interface implementation is likely to suffice for all different users" (6).
Another thing to acknowledge is that users of synthesizers differ greatly. Some are very adept with circuitry while others are more skilled as pianists. Some will tinker with a sound until they have dissected every parameter a dozen times over while still others will rely on the "out of the box" sounds. In fact, "out of the box" presets are a factor that follows the "design for all" concept. When coming to grips with something as subjective as a sound, it becomes difficult to identify which is the best available. However, so long as a user is satisfied with a sound, preset or not, then it is the correct sound for his or her project.

DIRECT STAKEHOLDERS

When we examine direct stakeholders with regard to synthesis, we often look at the producers and musicians themselves. Producers, who listen to the music as a whole and function for a recording much as the conductor of a symphony might during rehearsals and performance, often double their role, assuming the role of programmers as well: those who construct new sounds with the synthesizer itself. They are hands-on people who have a direct interaction with the product. And, if relegated to preset sounds, they run the risk of having their creativity constrained. Thus, it becomes essential for the producer to understand the complex functions of the machines they work with.
Musicians often use different elements of the synthesizer, elements that carry a different learning curve. Most synthesizers use a piano-keyboard interface, so it often requires many years of practice to become adept at the instrument itself. Adding in "expression" tools, such as mod wheels and pitch bends, forces one to use additional practice time mastering these non-pianist skills. Ultimately, if the synthesizer is well designed, the tools for expression will find themselves in logically situated places on the interface, but in actuality, this is not always the case, again forcing the user to adapt to the limits of his or her hardware.

INDIRECT STAKEHOLDERS

The clearest indirect stakeholder with regard to the synthesizer is the listener. Electronic music has grown in popularity across the 20th century and into the 21st. Once the domain of classical musicians, it has become an element of popular music, and when users are unable to perform or program their synthesizers with ease, the listener's experience suffers.

CONCLUSION

In the end, a broadest-user approach to synthesizers may seem to lower the skills required to use the machines, but as we can see with the direct stakeholders, the opposite is true. Because of the generally steep learning curve for complex synthesis, designing a more usable interface that promotes broad accessibility will satisfy the needs of all the stakeholders and solve many of the accessibility problems that arise.

WORKS CITED

Friedman, Batya, and Peter H. Kahn Jr. "Human Values, Ethics, and Design." The Human-Computer Interaction Handbook, Second Edition. Ed. Andrew Sears and Julie A. Jacko. New York/London: Lawrence Erlbaum Associates, 2008. 1241-1266.

"Section 508." United States Government, 30 Apr 2008. Web. 21 Oct 2009.

Stephanidis, C., D. Akoumianakis, M. Sfyrakis, and A. Paramythis. "Universal Accessibility in HCI: Process-Oriented Design Guidelines and Tool Requirements." (1998): 1-15. Web. 19 Oct 2009. UI4ALL-98/stephanidis1.pdf.

Wessel, David. "Let's Develop a Common Language for Synth Programming." Center for New Music and Audio Technologies 01 Aug 1991: n. pag. Web. 20 Oct 2009.

Autonomous Browsing

Firefox, Greasemonkey, and Autonomy


by David F. Bello

Autonomy, a value to which designers must be sensitive, is often set in opposition to usability and security. Security is threatened by the user who, granted greater autonomy within a system, would be afforded opportunities to undermine the system and access private data. If the user may, because of excess autonomy, gain access to private data, then privacy, one of the most important and most thoroughly considered values of technology, is also threatened by the lack of security; the system loses the trust of its user base and complicates the notion of informed consent (Friedman, Kahn, and Borning 5), particularly if the technology previously enjoyed high confidence in its security.

The implication that autonomy threatens usability, however, is a more complicated problem. If the user is afforded "too much" control over the system, not only are security, privacy, and trust at risk, but the user will also be faced with far too many options and decision-making steps in the way of completing any task. "Most users of a word processor have little interest, say, in controlling how the editor executes a search-and-replace operation or embeds formatting commands" (Friedman and Kahn 1254). The second example given here, that of a user's concern with how formatting is applied in text documents, reflects the move away from complex word-processing software (such as LaTeX, which requires heavy mark-up to perform visual operations on text data) toward simpler, "user-friendly" options such as Microsoft Word, which applies mark-up through a strictly GUI-based approach for the end user. This trend toward more abstract software is fairly common: the common user would prefer traditional operating systems, like Windows and OS X, to something like GNU/Linux, which requires the user to take more control over software processes. Simplicity, then, becomes a factor of usability, where the tasks being completed on a machine are meant to be done with as small a learning curve as possible. Users, however, sacrifice autonomy at the same time.

This paper examines how the balance of autonomy and usability plays out in the use of the Firefox extension Greasemonkey. How is the Firefox user granted additional autonomy by the Greasemonkey extension? In what ways does the design of Greasemonkey allow for user autonomy at the level of code, while effectively allowing the non-programming user to possess similar autonomy?

AUTONOMY: IN GREASEMONKEY; IN FIREFOX




"Autonomy is centrally concerned with self-determination - making one's own decisions, even if those decisions are sometimes wrong" (Friedman and Nissenbaum 467).


At the time of writing, the Firefox homepage reflects an emphasis on user autonomy: "There are literally thousands of totally free ways to customize your Firefox to fit exactly what you like to do online" ("Firefox Browser..."). The browser is open-source, meaning that users are provided source code to analyze and modify within the Mozilla development community. This raises a question, however: how many users actually have the technical ability to modify code to fit their specific needs? It is safe to assume that most users of the browser do not make use of the Firefox source to take advantage of the autonomy they are provided. One does, however, have the potential for autonomy, which is as much as may reasonably be asked of Mozilla at this point. Users are often willing to forgo learning an entire system, which would grant them autonomy at the level of any other user, programmer, or developer, in exchange for the ease of use that comes with not needing to possess that knowledge.

This is where Greasemonkey comes in. Those who install the extension are not immediately provided with capabilities to alter the functionality of the browser, but are, in effect, capable of doing so at this point. Greasemonkey works by providing a framework for users to "install" scripts, which act as a sort of filter between the user and the aesthetic and function of websites which they are designed to manipulate. This is best illustrated with an example:

The first image shows a plain Gmail inbox; the second shows Gmail with Josef Richter's Helvetimail user script applied. The script is based on a popular series of user scripts that redesigns Google web applications to incorporate a minimalist aesthetic and the font Helvetica. What occurs is that the browser applies the code, in this case Richter's script, as it opens the webpage. Within the script there is a call to this particular domain, so that the script is not applied to every single page opened in Firefox, and the code within the script contains specific instructions for the web browser to apply when opening the page. In this case, only the aesthetics of the page are modified; the functionality of the original Gmail inbox is identical under this script.
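The mechanism described above can be made concrete with a sketch. A Greasemonkey user script is ordinary JavaScript preceded by a metadata block; the `@include` line is the "call to this particular domain" that restricts where the script runs. The metadata values and CSS below are illustrative placeholders, not Josef Richter's actual Helvetimail code:

```javascript
// ==UserScript==
// @name        Helvetimail-style restyle (illustrative sketch)
// @namespace   http://example.com/userscripts
// @include     https://mail.google.com/*
// ==/UserScript==

// Build a stylesheet that swaps the page's font for Helvetica.
// These rules are placeholders standing in for a real restyle.
function buildStyle() {
  return 'body, td, div { font-family: Helvetica, Arial, sans-serif; }';
}

// When running inside a browser page, inject the stylesheet into <head>.
// The guard lets the logic above be read (or tested) outside a browser.
if (typeof document !== 'undefined') {
  var style = document.createElement('style');
  style.textContent = buildStyle();
  document.getElementsByTagName('head')[0].appendChild(style);
}
```

Because such a script only adds a stylesheet, the page's functionality is untouched, which matches the purely aesthetic effect described above.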

This is not always the case for user scripts in Greasemonkey. SkipScreen, now a full extension but based on user scripting and still able to be run in Greasemonkey, skips the imposed wait times on file-hosting sites, which require the user to input a captcha and withhold the download link until a set number of seconds have passed. With SkipScreen, the link is generated automatically, providing instant access to the requested download. Pirates of Amazon, now considered a new-media art project about censorship and economics, added links to filesharing sites directly onto Amazon.com pages for specific items. For example, if one had this script installed and visited the purchasing page for the latest Kanye West CD, a link would also appear to a torrent site offering an illegal download of that same audio. The project encountered legal difficulties and removed the script from its webpages, Amazon changed its code so that older versions of the script would not work, and the developers claimed that the project was merely meant to provoke thought and discussion on issues of piracy and filesharing.

FACTORS OF AUTONOMY



Friedman and Nissenbaum outline five aspects of software agents which can either promote or undermine autonomy for the user (Friedman and Nissenbaum 467):


  • Agent Capability

  • Agent Complexity

  • Knowledge About the Agent

  • Misrepresentation of the Agent

  • Agent Fluidity


What follows is an examination of each in the context of the Greasemonkey Firefox extension:

Agent Capability - "User autonomy can be undermined when there are states the user desires to reach but no path exists through the use of the software agent to reach those states" (Friedman and Nissenbaum 467).


Greasemonkey extends the end-states available to the user in Firefox to include those defined by individual user scripts. In this way, Greasemonkey extends user autonomy in Firefox. That autonomy exists in two separate states for two different categories of user. For the user comfortable with script writing, autonomy is relatively unlimited. Users who do not know how to write scripts, on the other hand, are limited to those which have already been created and are available online. In this way, the autonomy Greasemonkey provides balances atop the user's knowledge of coding. The capability of the agent (Greasemonkey) itself, however, is relatively unlimited under this particular criterion for measuring autonomy.

Agent Complexity - "A path exists to the state the user desires to reach but negotiating that path is too difficult for the user" (Friedman and Nissenbaum 467).

Again, this element of autonomy is limited strictly by the user's own script-writing ability. Depending on the individual's desires for manipulating the browsing experience, a given state may be out of that user's range. However, it is not unreasonable to assume that if one were to request a given script, someone in the Greasemonkey script-development community may be willing and able to provide it, negotiating the path for the user without the user having to learn to code.

Knowledge about the Agent - "When the designer of a software agent does not make information [regarding the particular processes taking place in the completion of a task] accessible to the user, then the user's autonomy can be undermined" (Friedman and Nissenbaum 467).

Here, Greasemonkey makes the scripts it runs explicitly readable. Understanding those scripts, however, again depends on the user's knowledge of code. Just as Firefox's source is openly available to the public, any Greasemonkey script can be opened and read in a text editor. Greasemonkey further allows these scripts to be immediately modified and re-implemented by the user as well.

Misrepresentation of the Agent - "Users can also experience a loss of autonomy when provided with false or inaccurate information about the agent" (Friedman and Nissenbaum 467).

Because Greasemonkey itself is simply a vessel for user-generated scripts, the extension itself does not pose this risk. However, since scripts are user-generated, there is a risk of deception and inaccuracy in using them, something the analysis of individual scripts must take into account. Greasemonkey allows scripts and scriptwriters great autonomy in modifying the way that users encounter and provide information online. In this way, Greasemonkey can be seen as enabling the autonomy of a script developer to limit the autonomy of the users who run that script. So, while Greasemonkey itself is not (as far as is publicly known) misrepresented, deception may be an element of some user-generated content run within the extension.

Agent Fluidity - "Software agents need to take such evolution [of user goals] into account and provide ready mechanisms for users to review and fine-tune their agents as their goals change" (Friedman and Nissenbaum 467).

Greasemonkey actually provides a stable mechanism for ensuring the continuity of user goals. It notifies the user when scripts are updated, and a small icon in the lower-right corner of the Firefox window is highlighted when the extension is enabled. Also, at any point while viewing a web page, users are able to see and modify which scripts are currently operating on the browser's view of that page, and may switch them off one by one as circumstances require.

CONCLUSION



The unique aspect of the Greasemonkey developer community is not simply that its code is open-source, but that the scripts created for use with Greasemonkey are made available to users in a central location: Userscripts.org. There, users can search by the site whose viewing they want to modify, by function and aesthetic, or by other tags that appear on the site. So, in addition to being able to write their own code defining how Greasemonkey alters the viewing of a page, Greasemonkey users are also afforded the opportunity to download scripts written by others. They thus possess both the potential autonomy to create and modify scripts to fit their own needs and cultural values, and the ability to take advantage of that autonomy as exercised by others. Autonomy in Greasemonkey is therefore independent of coding autonomy in Firefox: one does not need the technical prowess to modify scripts in order to use them. Of course, users who cannot create their own user scripts remain limited by that inability.

What Greasemonkey demonstrates is the flexibility of the web browser in determining how web pages are recreated for the end user. While some pages already display very differently across the market of Firefox, Opera, Safari, Internet Explorer, and others, Greasemonkey offers an even greater opportunity: user autonomy increased to the point of creating an entirely unique user experience.




WORKS CITED



"Firefox Browser | Free ways to customize your Internet." Mozilla. 21 Oct. 2009. http://www.mozilla.com/en-US/firefox/personal.html

Friedman, Batya and Peter H. Kahn, Jr. "Human Values, Ethics, and Design." The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd Edition. Eds. Jacko, Julie and Andrew Sears. Hillsdale: Lawrence Erlbaum Associates, 2007. 1241-1266.

Friedman, Batya and Helen Nissenbaum. "Software Agents and User Autonomy." Proceedings of the First International Conference on Autonomous Agents. New York: Association for Computing Machinery, 1997. 466-469.

Friedman, Batya, Peter H. Kahn, and Alan Borning. "Value Sensitive Design and Information Systems." Human-Computer Interaction and Management Information Systems: Foundations. Eds. Ping Zhang & D. Galletta. New York: M.E. Sharpe, 2006. 348-372.

Richter, Josef. "Helvetimail." 10 Oct. 2009. http://www.josefrichter.com/helvetimail/

Will You Be My Mobile Neighbor?

The Problem

One sociotechnical problem that I see in today's world is the enormous amount of time wasted while traveling. It is treated as a routine task to get in your car, or a form of public transportation, and just sit for multiple hours. Why do we subject ourselves to this monotonous task, and what makes this trip bearable? There are two main variables in the duration of a trip: the distance to travel and the vehicle providing the transportation. When cost efficiency matters, most people opt for a longer, cheaper trip. Since there have not been any major changes to the speed or automation of vehicular travel, people must find entertainment to keep the mind active. Most people enjoy music from an mp3 player or radio, movies, audio books, driving games, and conversation with other riders as forms of entertainment while traveling. It is important to note the space where all of these media originated: the home. It is not hard to believe that humans would resort to using the same media to entertain themselves in travel as they do when bored at home. I believe that as new forms of entertainment emerge in the home, they gradually mobilize and make their way into travel. However, it is evident that increasing the media brought from our homes into cars also increases distraction levels on the road. Not only are there more devices to distract you on the road than in years past, but they also require increasing amounts of attention to use.

Human Welfare On The Road

A very important value to consider while mobilizing entertainment for travel is human welfare. It should be one of the first values that every designer considers, no matter what field they are in. History has shown that human welfare can be at stake even when a design is working acceptably. In the Chernobyl disaster, for example, workers ignored safety functions that seemingly had no short-term effect, contributing to a massive nuclear explosion. The Human-Computer Interaction Handbook describes how not only a person's identity but also their digital information can be stolen or manipulated. Furthermore, there is "Physical welfare, appealing to the wellbeing of individuals' biological selves, which is harmed by injury, sickness, and death," and there is "Material welfare appealing to physical objects that humans value and human economic interests" (Sears & Jacko, 2008). In terms of travel entertainment, physical welfare is what most products are designed for. There is a fine line between distraction and entertainment while driving, and in most cases the two together can produce negative physical welfare. It is important to find a proper ratio of entertainment to distraction.

Solving Negative Welfare

Why not get rid of all entertainment in the car that can cause distraction? Shouldn't that eliminate negative physical welfare while traveling? Well, if the mind is not entertained in some way, it will want to go into standby mode (sleep), so a lack of entertainment can have the same negative effects on physical welfare that distraction does. Consequently, devices are designed to be hands-free, voice-activated, and automatic while they provide service to the user. For example, GPS units, mp3 players, and cell phones all feature some kind of audio or wireless capability that decreases interaction with the device, providing an increase in human welfare. In contrast, there are also many products that are not specifically designed for travel, yet daily accident reports reveal their use: fast food, some mp3 players, and cosmetic appliances are technologies taken out of their intended environment into a mobile one, causing negative human welfare. A good working theory is that, when designing for a travel environment, you want to create a product to which the consumer gives the least attention while getting the most entertainment value. When users are less distracted, they pay more attention to the complex environment of driving and yield higher human welfare for direct and indirect stakeholders alike.

When Social Networking Strikes

Social networking is a medium that can have both positive and negative effects on physical welfare while driving, and it is widely used enough to have an equally good chance of evolving from the home into travel. In a paper by Nicole B. Ellison, Charles Steinfeld, and Cliff Lampe, social capital is defined as "the sum of the resources, actual or virtual, that accrue to an individual or a group by virtue of possessing a durable network of more or less institutionalized relationships of mutual acquaintance and recognition." The authors also cite studies showing that "Social capital has been linked to a variety of positive social outcomes, such as better public health, lower crime rates, and more efficient financial markets" (Ellison, Steinfeld, & Lampe, 2007). Social capital has the power to increase human welfare for direct and indirect stakeholders by keeping the driver entertained and by creating community bonds. Like the cell phone, however, it also has great potential to distract the driver and cause negative welfare.


Variable Stakeholders


On the road, it seems like everyone is a direct stakeholder in some kind of technology. The issues then become: Was the product too entertaining, causing distraction? Was the product not entertaining enough, causing boredom? Was the product not designed for the complex environment of driving? A direct stakeholder in the use of cell phones while driving would obviously be a driver talking on their cell phone. The driver creates positive human welfare for themselves by being entertained, but negative welfare for the indirect stakeholders they may crash into when distracted. The entertainment-to-distraction ratio is addressed by another stakeholder, in this case the companies creating Bluetooth headsets: the company gains material welfare (money) for every increase in physical welfare (not being distracted, hospitalized, or arrested) that the direct stakeholder receives. Social networking in the car has prospects similar to cell phones in terms of distraction and its production of negative welfare. However, it has a much bigger potential to yield positive welfare for indirect stakeholders through social capital. People being able to communicate with multiple sources of information at once could decrease accidents, increase awareness, and create an entertaining environment. If there were a way to apply the hands-free idea to a social networking system (perhaps some kind of voice activation), then I think social networking could increase direct and indirect stakeholders' welfare and make a new, safer culture of driving.

Works Cited
Sears, A., & Jacko, J. A. (Eds.). (2008). The Human-Computer Interaction Handbook. New York: Lawrence Erlbaum Associates.
Ellison, N. B., Steinfeld, C., & Lampe, C. (2007). The Benefits of Facebook "Friends:" Social Capital and College Students' Use of Online Social Network Sites. Journal of Computer-Mediated Communication. http://jcmc.indiana.edu/vol12/issue4/ellison.html
Rogers, M. (2007). How social can we get? What evolutionary psychology says about social networking. MSNBC. http://www.msnbc.msn.com/id/20642550/ns/technology_and_science-innovation/

Private Browsing

Ben Casbon

Socio-Technical System: The Web

Problem Space: User Tracking

Direct stakeholders: Internet browsers

Indirect stakeholders: Insurance and financial institutions.

Socio-Technical System: The Web


The World Wide Web is an immense system with billions of publicly available systems that not only serve up information to users but also allow remote users to interact with the system, or even with other users of the same system. In general, most users will see the same information on the same impartial system.

Problem Space: User Tracking


Browsing the internet is made smoother by the use of 'cookies'. Cookies are small pieces of data that the browser stores on the user's computer at the request of a web site. Since the World Wide Web is stateless, some mechanism is needed to 'personalize' a user's interaction and make the site they are visiting 'theirs' for the time that they are visiting.

While there is nothing inherently wrong with cookies, they are a technology that can be, and frequently has been, abused. Re-entering personal information such as usernames and passwords can be an annoyance to users, so cookies have been developed that allow a user to stay 'logged in' to sites they have visited. This can prove hazardous if the user is operating on a computer that may at some time be used by another person, such as a library or campus computer, or if their data otherwise lacks adequate physical protection.
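The statelessness problem cookies solve is easy to see in miniature. Each request a browser sends back to a site carries a Cookie header of name=value pairs; a persistent 'keep me logged in' token is just one such pair. The toy parser below (an illustrative sketch, not a full implementation of the cookie specification, and with made-up cookie names) shows how little structure is involved:

```javascript
// Parse a Cookie request header into an object of name/value pairs.
// Real browsers and servers follow RFC 6265; this sketch only splits
// on ';' and '=' to illustrate the idea.
function parseCookies(header) {
  var jar = {};
  header.split(';').forEach(function (pair) {
    var eq = pair.indexOf('=');
    if (eq === -1) return; // skip malformed fragments
    var name = pair.slice(0, eq).trim();
    jar[name] = pair.slice(eq + 1).trim();
  });
  return jar;
}

// A hypothetical header a browser might send back to a site. The
// persistent token is what a later user of the same machine, or a
// third-party site, could exploit.
var jar = parseCookies('sessionid=abc123; keep_logged_in=1');
```

Because the browser replays these pairs automatically on every request, anyone with access to the machine, or any page able to trigger a request to the site, inherits whatever identity the tokens encode.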

Example: Facebook Beacon


On November 6th, 2007, Facebook began a 'service' called 'Beacon' (http://www.facebook.com/press/releases.php?p=9166). Beacon was billed as a "… core element of the Facebook Ads system for connecting business with users and targeting advertising to the audiences they want." In an ambiguous press release, Facebook hailed the benefits of 'sharing' information between itself and 44 other online companies.

According to Facebook, "Facebook Beacon is a way for you to bring actions you take online into Facebook. Beacon works by allowing affiliate websites to send stories about actions you take to Facebook." (http://www.facebook.com/beacon/faq.php) A Beacon-participating web site would detect a user's Facebook identity whether or not they were logged on to Facebook, by detecting the latent Facebook cookie on the user's machine. One prominent example of the Beacon system was the Blockbuster Online integration with Facebook, which would update a user's Facebook wall with the movies they added to their 'queue' while on the Blockbuster site.

Almost immediately after the launch of the Beacon service, MoveOn.org (http://www.moveon.org) created a Facebook group and online petition demanding that Facebook cease violating users' privacy without receiving their informed consent. In December of 2007, Facebook issued a 'mea culpa' to the privacy activists and stated on the Facebook blog (http://blog.facebook.com/blog.php?post=7584397130):
"At first we tried to make it (Beacon) very lightweight so people wouldn't have to touch it for it to work. The problem with our initial approach of making it an opt-out system instead of opt-in was that if someone forgot to decline to share something, Beacon still went ahead and shared it with their friends."

Facebook changed Beacon from an opt-out system to an opt-in system in December 2007, but it was too late. In April 2008, Dallas County resident Cathryn Elain Harris filed a class-action lawsuit against Blockbuster Inc. over the company's participation in the Facebook Beacon system (Computerworld) (http://www.computerworld.com/s/article/9078938/Blockbuster_sued_over_Facebook_Beacon_information_sharing?taxonomyId=146&taxonomyName=standards_and_legal_issues).

Value: Privacy


Facebook and its cohorts had violated users' privacy by exploiting the already-available architecture of cookies to track users' movements online without their express consent. While Facebook and its partner companies were certainly to blame for the bulk of the violation, the current web-browsing architecture was also partly at fault.

The current architecture not only allows web sites to leave cookies on a user's computer; by default, it does so without informing the user. Without clearly understanding the implications of the 'keep me logged in' checkbox, users leave their information available to sometimes unscrupulous advertisers.

A solution: Incognito mode


In a surprise move in September of 2008, Google released a brand-new web browser. The new browser, dubbed "Chrome", featured many innovations both in the interface and behind the scenes. Chrome had the benefit of being developed from the ground up for a more mature web, and its designers paid special attention to users' desire for speed, flexibility, and privacy.

Google's chief concession to users' privacy was the incorporation of 'Incognito' mode in Chrome. Incognito mode allowed the user to easily start a browsing session that exists in a temporary space. Instead of relying on users to manage their privacy by periodically deleting their cookies, Incognito mode allowed them to merely launch an 'off the record' session.

Incognito mode radically changes the web-browsing privacy equation. Whenever a knowledgeable user browses sensitive information, they feel the nagging doubt about whether traces of their personal information are being left behind, like inadvertent versions of Hansel and Gretel's breadcrumbs. Incognito mode allows users to launch a browsing session that is blissfully temporary. Having a browsing session persist past the termination of the browser is an essentially unnatural mapping for the user to comprehend: why would you still be virtually 'logged in' to a site if you no longer have the window open?

Chrome's Incognito mode is in keeping with one of the general methodologies for preserving privacy that Friedman mentions in the Handbook of Human-Computer Interaction (Friedman et al., 2006): it empowers the user to control what information remote sites are able to lodge on the user's computer.

Chrome is not the first browser to offer the option of private browsing, but it was the most initially accessible version of the feature. Safari offered a version of the Private Browsing feature, but it has been fraught with embarrassing bugs where the user’s private information has been found to be stored on their system despite explicitly entering privacy mode.

The success of Incognito mode lit a fire under the other major browser producers, the Mozilla Corporation and Microsoft. Internet Explorer 8, released earlier this year, has an 'InPrivate' mode, and Firefox 3.5 incorporated a private browsing mode at its release in July of 2009. Once a feature has been added to a browser, it tends to remain in the feature set of future versions, so at this point it is likely that all future internet browsers will offer some form of private browsing.

Direct stakeholders: Internet browsers


Millions of people use the internet across the globe, and there are as many uses for the information online as there are users. Each individual's need for privacy makes them a direct stakeholder: the user's financial well-being as well as their reputation is at stake on the internet. In February 2007, Javelin Strategy & Research conducted a survey which determined that privacy violation in the form of identity theft (http://www.privacyrights.org/ar/idtheftsurveys.htm) compromised the identities of 8.4 million people in 2007, with a mean resolution time of 25 hours per victim. Victims reported $5 billion in out-of-pocket losses in 2003 (http://www.ftc.gov/os/2003/09/synovatereport.pdf).

Indirect stakeholders: Insurance and financial institutions


According to the same study by the Federal Trade Commission, identity theft in 2003 accounted for $47.6 billion worth of losses; a shocking 11.6% of all identity theft occurred online. Companies that fail to protect their customers' privacy, whether through negligence or deliberate exploitation, stand to lose business and potentially face prosecution in civil court, as in the Facebook Beacon case.

Lack of respect for users' privacy is almost inextricably linked to the users' loss of trust in that organization. Customers universally hate being treated as a commodity, and the relationship between provider and consumer is easily soured when the provider extends that relationship to another company without the consumer's explicit permission.


Works Cited

Friedman, B., & Kahn, P. H., Jr. (2007). Human values, ethics, and design. In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd Edition. (pp. 1241-1266). Lawrence Erlbaum.