Wednesday, October 21, 2009

The facade of Facebook privacy

Value Sensitive Design Conceptual Investigation Essay
Jami Cotler

Problem Space/Context

With over 500 million registered users, Facebook has quickly emerged as the virtual place to be (Gaspary, 2008). People from around the world and across demographics are flocking to the social networking website. With this attention comes a social compromise many of us are not aware we are making, while other, less obvious stakeholders benefit. As the sixth most trafficked website in the US (Gaspary, 2008), Facebook is attracting attention from groups such as potential employers, stalkers, and parents.

Value(s) implicated

Three central human values are implicated in the system design of Facebook. The pursuit of visibility often conflicts with, and can be harmful to, the pursuit of privacy (Cohen, 2008). Why hasn't awareness of our privacy, or lack thereof, been brought to the forefront of our consciousness? (Cotler, Class Blog) As a society of automatic opt-ins, where is the informed consent?

By default, Facebook profiles are open to everyone in the user's designated network. While we "voluntarily" post our profiles, pictures, and personal information on Facebook, many users do not comprehend the social significance and potential implications of doing so. In a survey of 75 college students conducted by the author and a research student, 37% had profiles open to their home network. Of this group, 11 were seniors currently looking for post-graduation employment. Over 73% of the 75 students displayed a list of all their friends; the average number of friends was 501.59, with a maximum of 1,120. In the pursuit of visibility, namely displaying how many friends a user has, privacy is substantially compromised, perhaps without the knowledge or informed consent of the Facebook user. Several students were asked about privacy, and some were not aware that privacy settings were available.

Privacy can easily be compromised by a person with some knowledge of web design and a few spare minutes (Colleague, 2009). Even when Facebook users make their profiles private, they will often maintain public friend lists in order to promote their visibility; in the survey of 75 college-aged students, over 73% displayed their friend lists. While a public friend list offers visibility, it also opens potential vulnerabilities in the Facebook privacy facade. If a user on the friend list has an open profile, it can be used as a direct link back to the original user. This can be accomplished with basic knowledge of Facebook URL structures. Even without web design knowledge, one can find any postings or images tagged by the person of interest in the open profile of a friend.

Direct Stakeholders

The direct stakeholders are the Facebook users, who may use the social network to keep in touch with friends and family. Facebook has become the social "meeting place" for people of all ages, especially teenagers (Boyd, 2007). Navigating a social network is quickly becoming a necessary and important social skill (Boyd, 2007). Businesses are quickly becoming another direct stakeholder as they use Facebook for marketing and customer outreach.

Indirect Stakeholders

In a study conducted by Kluemper and Rosen, the authors demonstrated that employers, while not actively using the social network in the same way as the direct stakeholders, are lurking and often using Facebook to research potential employees (Kluemper, 2009). They found that many employers do not stop at the candidate's own Facebook profile but find a much more revealing disclosure of information in the candidate's friends: they analyze the comments and tagged pictures posted by the candidate's friends. According to the study, employers also relate the number of Facebook friends to the popularity and extraversion of the candidate. Other employers look for signs of speaking ill of former employers, evidence of drinking too much, or disclosure of confidential information (Sridharan, 2008).

The Internet in general, and Facebook specifically, can be a goldmine for a stalker. Where else can you find a list of hundreds of friends of your victim? The amount of information one can gather in minutes would have taken days, if not weeks, to gather without Facebook. With an alarmingly high rate of users displaying full friend lists, the potential for a stalker to quickly find this information is quite real.

Parents are rarely invited to their children's teenage or college parties (personal experience). With Facebook, parents now have an insider's view into their children's private lives. They can monitor who their children's friends are, see firsthand what happens at their parties, and even become privy to private conversations. Social networks such as Facebook have been shown to cause tension between parents and children and have been linked to a loss of parental control (Subrahmanyam, 2008).

Value and stakeholder conflict

Privacy

Facebook users: Privacy settings are available, but the implications of not using them are not clear. In follow-up interviews to the survey, the response was notable: all students interviewed were very concerned by the false perception of Facebook security, and many were not aware of the privacy settings and search settings.

Indirect stakeholders: These stakeholders appreciate profiles that use default or open privacy settings, as open profiles allow them to view and learn more.

Visibility

Facebook users: Users often want visibility and achieve it through large friend lists and multiple tagged photos. This conflicts with, and often compromises, privacy.

Indirect stakeholders: These stakeholders appreciate highly visible profiles, as high visibility undermines the protection that privacy settings provide.

Informed consent

Facebook users: Consent is the default, a social compromise not comprehended by many.

Indirect stakeholders: The ethical issue can be raised that a job candidate generally does not provide informed consent for a potential employer to look at his or her Facebook profile or related Internet postings. Employers who view this information often do so without the knowledge, and certainly without the consent, of the candidate they are interested in hiring. Some potential employees never get a chance to be interviewed or hired because of information found about them on Facebook. Future employers can also learn of gender, age, sexual orientation, and other characteristics that cannot legally be asked about on a job application or in an interview. This information is all obtained without the (informed) consent of the future employee.

Recommendations/Proposed technical solution to value conflicts

In its privacy settings, Facebook offers a way to view your profile as a particular friend would see it.

I propose that an option to view the profile as someone who is not a friend (either in or out of your network) also be offered. This would provide a clear way of seeing how the "world" sees your digital Facebook footprint.

Social networks have amazing potential to change and enhance the way we connect and socialize as humans. While embracing these capabilities, it is important to maintain awareness and preempt potential value compromises. Awareness is the key to understanding how best to protect yourself while navigating the social networking highway.

Works Cited

Boyd, D. (2007). Why youth (heart) social network sites: The role of networked publics in teenage social life. In D. Buckingham (Ed.), Youth, Identity, and Digital Media (MacArthur Foundation Series on Digital Learning). Cambridge, MA: MIT Press.

Cohen, J. (2008). Visibility, transparency, and exposure. The University of Chicago Law Review, 75(1), 181-201.

Colleague. (2009, October 15). Associate Professor of Computer Science. (J. Cotler, Interviewer)

Friedman, B., Kahn, P. H., Jr., & Borning, A. (2006). Value Sensitive Design and information systems. In P. Zhang & D. Galletta (Eds.), Human-Computer Interaction in Management Information Systems: Foundations (pp. 348-372). Armonk, NY: M.E. Sharpe.

Gaspary, S. (2008, May 28). Social Technologies and Recruiting - How to extend the reach of your employment brand. Retrieved October 19, 2009, from Career Builder Community:

Kluemper, D., & Rosen, P. (2009). Future employment selection methods: Evaluating social networking websites. Journal of Managerial Psychology, 24, 567-580.

Sridharan, V. (2008, September 14). 22% of employers check your Facebook profile when they're looking to hire you. Retrieved October 19, 2009, from Business Insider:

Subrahmanyam, K., & Greenfield, P. (2008). Online communication and adolescent relationships. The Future of Children, 18(1), 119-140.

Social Network Systems and Identity:

From managing identity to managing fair use

Introduction: Online Identities and Social Consequences

As social networking systems (SNSs) become ubiquitous, our engagement with these systems increasingly impacts significant aspects of our lives in unexpected ways. In 2007, an American banking intern was terminated after a photo posted to a social networking system revealed he had missed work to attend a Halloween party (Thomas, 2007). Public officials have been compelled to resign after expressing personal views on social networking platforms; in July 2009, an aide to a city official in Manhattan resigned over posting controversial views to Facebook (Chan, 2009). In October 2009, MIT researchers created a program that analyzes public information about a Facebook user's friends to accurately predict the user's sexual orientation. The shocking implication of this study is that a basic social network activity such as choosing friends, a fundamental and requisite form of engagement, could effectively "out" a user's sexual orientation against his will (Jernigan & Mistree, 2009).
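The general idea behind this kind of inference can be illustrated with a deliberately simplified sketch (this is not Jernigan and Mistree's actual statistical model, and all names and data below are hypothetical): a hidden attribute is guessed from the values that a user's friends disclose publicly.

```python
# Simplified sketch of friendship-based attribute inference (NOT the MIT
# researchers' actual method): guess a user's undisclosed attribute from
# the attributes their friends publish openly.

def infer_attribute(friends, disclosed):
    """Guess a user's attribute as the most common value among those
    friends who publicly disclose theirs.

    friends   -- list of friend identifiers
    disclosed -- dict mapping a friend's identifier to a public attribute
    """
    counts = {}
    for friend in friends:
        value = disclosed.get(friend)  # None if the friend keeps it private
        if value is not None:
            counts[value] = counts.get(value, 0) + 1
    if not counts:
        return None  # nothing public to infer from
    return max(counts, key=counts.get)

# Hypothetical example: most disclosing friends share attribute "A", so
# the sketch guesses "A" for the user, who published nothing themselves.
friends = ["p1", "p2", "p3", "p4"]
disclosed = {"p1": "A", "p2": "A", "p3": "B"}
print(infer_attribute(friends, disclosed))  # prints A
```

Even this toy version shows why friend lists alone can leak information the user never chose to share.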

Do privacy controls limit expression of identity?
SNSs have been quick to identify emerging problems with managing our online identities and self-published content. Early interventions by engineers and designers focused on developing robust privacy controls that offer high-level management of access to personal information. Facebook, for example, now allows you to assign custom levels of privacy to individuals and groups of friends. Individuals can be assigned to custom groups, such as co-worker, family, or friend, with each group having a pre-determined privacy setting.

Clearly, these identity management tools are valuable for those who use SNSs. Over the last few years, a mini-industry has arisen to instruct users on sculpting their presentation and cultivating their identities, leveraging social networking systems as personal PR opportunities to maximize professional opportunities and hirability.

Beyond identity management for personal PR purposes, some have argued that sophisticated identity management affordances are necessary to authentically represent the construction of identity. In 2005, Alice Marwick criticized the strictly representational model of early SNSs as a "problem of authenticity." She writes, "Social networking sites overall presume that each user has a single 'authentic' identity that can be presented accurately." Marwick views this singular construction of identity as a direct contradiction of the way in which we perform identity in everyday life: we present ourselves in various ways depending on the audience and context (Marwick, 2005).

Looking forward at emerging technology, it is easy to imagine a rise in active identity management. A November 2009 Atlantic Monthly essay by Jamais Cascio imagines ubiquitous augmented reality systems that, coupled with advances in facial-recognition technology, push self-published social information to unprecedented prominence. Personal beliefs, views, and values become visually inseparable from the face of each person you meet and a fundamental component of the visual landscape. Cascio goes on to imagine a rising demand for "reality filters" that eliminate unwanted information and opposing viewpoints. The real problem with identity management, as Cascio sees it, is less about technology and more about our society's inability to tolerate diverse viewpoints (Cascio, 2009).

Unequal power dynamics and leveling the identity field
One aspect of this problem space involves power dynamics. Individuals in positions of power can use self-published information on SNSs against others. Because of this, users of SNSs are pressured to manage, stifle, and censor authentic expression of identity in order to protect themselves from those who would exercise power, informed by prejudice and intolerance, to discriminate against and oppress them. Though there is clear ethical value in designing SNSs that allow marginalized peoples to protect themselves and their identity from these forces, designers also have an ethical obligation to develop this technology in a way that maximizes opportunities for individuals to authentically express themselves and aims to transcend these oppressive structures. By simply transcribing real-life self-limiting identity management techniques into social network technology, we miss the opportunity to ethically reform these interactions and level power dynamics.

New expectations of personal identity management can create a new set of pressures and demands on stakeholders. As we present many different optimized versions of ourselves to diverse and demanding audiences, our understanding of who we are can become diminished and ever more linked with how others want us to be. Presently, the burden is on the user to predict aspects of their identity that may be uncomfortable for others and to manage, filter, and hide those aspects. By focusing on identity control and management, SNSs risk perpetuating restrictive and oppressive limitations on authentic expression. Like Bill Clinton's well-known "don't ask, don't tell" compromise, are designers of SNSs contributing to a climate that encourages marginalized groups and populations to suppress themselves, while allowing people in positions of power to maintain intolerant and oppressive prejudices and judgments?

A June 2009 article by Caryn Brooks chronicles the benefits of coming out on Facebook:
Coming out used to be an exhausting process. You had to come out again and again and again to all your friends at different times. Nowadays, even with social networking, gays still have to come out, but one of the key differences between our pre-profile selves and our new online presentations is that now (finally!) the burden is also on our friends to discover and digest our identities. For the lesbian, gay, bisexual and transgender (LGBT) community, Facebook et al have finally leveled the identity field, and it's kinda nice.
SNSs and other social technological innovations have a tremendous opportunity to shape new trends in social engagement and contribute to new achievements in stakeholders' ability to understand themselves and articulate their identity.

The implicated value

Identity as understanding ourselves
Leaders in Value Sensitive Design, such as Batya Friedman, have researched and presented key ethical values that should be considered in this problem space. The value of identity is most directly implicated in this socio-technological problem space. Friedman's value of identity refers to "people's understanding of who they are over time" (Friedman, 2006).

This definition of identity has been researched extensively across analogous sociotechnological problem spaces. The rise of virtual simulations like Second Life and semi-anonymous social games has been widely studied in terms of systems that implicate identity and people's understanding of who they are over time. These systems allow individuals new opportunities and freedoms to project multiple constructions and liberating simulations of their identity. Sherry Turkle has written extensively on the importance of viewing the construction of identity not as a calculation but as a simulation, where the self is conceived as a "multiple, distributed system." Turkle writes:
Without a deep understanding of the many selves that we express in the virtual, we cannot use our experiences there to enrich the real. If we cultivate our awareness of what stands behind our screen personae, we are more likely to succeed in using virtual experience for personal transformation. (Turkle, 1996)
Identity management in virtual “games” versus social network systems
Turkle and others who have studied how multiple projections and simulations of identity contribute to an enhanced understanding of the self over time were writing exclusively about social games and virtual simulations. These virtual simulations are still widely used, and these ideas about identity remain relevant. However, unlike games and virtual playgrounds where users explore and experiment with the construction of their identities, managing and controlling multiple identities in SNSs may undermine users' positive formation of identity. Key figures from a user's daily life, including authority figures such as co-workers, teachers, supervisors, and parents, are highly aware of their every thought and move and are poised to judge and discriminate. So it may be that when stakeholders in SNSs are encouraged to project and simulate their identities to conform to the expectations of people in positions of power, they diminish their ability to understand who they are. Put another way, new privacy controls in SNSs may be designed in such a way that stakeholders are pressured to cede control of their understanding of who they are to oppressive figures.

Identity as a dynamic integrated process
Defining identity as the understanding of who we are over time is an important ethical value in this problem space, but this definition is not enough. Stakeholders have ethical rights that extend beyond understanding their identity. For SNSs, we need an expansive view of identity that includes utilizing technology both to understand who we are and to freely express this authentic identity to others, thereby integrating this authentic identity into everyday life.

For this reason, it is useful to apply Abraham Maslow's humanist concept of self-actualization. Maslow conceived of five stages of human needs, with the peak need being self-actualization: "A musician must make music, an artist must paint, a poet must write, if he is to be ultimately happy. What a man can be, he must be. This need we may call self-actualization" (Maslow, 1943).

Vivienne Cass presents an identity model that outlines six stages of gay and lesbian identity development: confusion, comparison, self-tolerance, self-acceptance, pride, and finally identity synthesis. Identity synthesis is achieved when the individual can "integrate gay and lesbian identity so that instead of being the identity, it is an aspect of self" (Cass, 1979).

So, for this paper, we define our value not only as people's understanding of who they are; we also incorporate the dynamic interactions involved in expressing, understanding, constructing, and integrating our identity with and through others, as formulated by applying the models developed by Maslow and Cass.

Key Stakeholders

Direct Stakeholders
All current users of SNSs, and anyone who self-publishes personal information about themselves via the web, can benefit and derive value from an authentic and dynamic expression of identity; they are therefore direct stakeholders in this problem space.

Of particular ethical interest are individuals from marginalized groups and populations, and people with minority beliefs and values. These stakeholders have legitimate concerns that fully engaging in social network systems could make them vulnerable to judgment, inequity, and discrimination. While advanced privacy filters can be utilized to protect these individuals from judgment, there is value in these individuals utilizing social networks as a tool to more effectively express themselves and more authentically integrate and communicate all aspects of their identity.

Indirect Stakeholders
1. Investigators - Security professionals, prosecutors, government agents, and other individuals whose professional duties include identifying potential threats to national security or domestic criminal activity. These individuals form judgments and draw inferences about the behavior of groups and individuals on SNSs and have an interest in this information being widely available.

2. Informants - Individuals whose professional activities include leveraging SNSs to unethically, and perhaps illegally, discriminate against others. For example, consulting firms that research SNSs and attempt to predict and identify characteristics of job candidates, such as ethnicity, sexual orientation, political views, pregnancy, psychological profile, and medical conditions, that can be used to illegally discriminate against them in hiring.

In both of the above cases, it is important to note that the interests of these indirect stakeholders are in opposition to the interests of the direct stakeholders. For this reason, it is of high ethical value to allow the direct stakeholders to establish expectations and guidelines over the use of such information, and for lawmakers to make sure the activity of the indirect stakeholders is fair and in line with universal human rights, core constitutional protections, and all relevant civil laws.

3. Intolerants - Individuals who engage in SNSs and are intolerant of, uncomfortable with, and threatened by thoughts, ideas, and activities that are not consistent with their beliefs. These individuals have a stake in privacy controls that pressure individuals with marginalized views and behaviors to censor themselves, and at the same time have a stake in being able to freely express their own views, which are backed by historical institutions of oppression.

4. Designers - Programmers and employees of SNSs who have an interest in protecting their corporations from privacy concerns and other limitations perceived by users. These individuals also have an interest in conceptualizing social identity in terms of profit, and in commoditizing social information over a concept of liberating the ethical rights of their constituents.

Implications for Design

From broadcast to eavesdrop – reconsidering models of information sharing
Another aspect of this problem space is that many people apply norms and expectations through inappropriate models. For SNSs, the metaphor of broadcasting one's views or identity may not be entirely appropriate. This is the same metaphor we use for radio, television, and newspapers, and comparing a user updating their profile on an SNS to the dedicated actions of a large media conglomerate may place unfair and inappropriate limitations on how people should express their identity. Instead, we could think of users' contributions of personal information as a creative act of producing and contributing social intelligence to the SNS.

Likewise, it may be useful to extend the metaphor to the experience of receiving updates about others via a social network. Instead of the view that we are being bombarded with unwanted and uncomfortable information, we could take accountability for constructing and integrating information and judgments about others. It may be more accurate to say we are collecting, digesting, and synthesizing information about others in our selected network. The concept of eavesdropping versus broadcasting changes key assumptions involved in the actionable legal aspects of this information, including terminating, prosecuting, or discriminating against others based on it. These new metaphors, arguably more representative of the true functionality and interactions of SNSs, allow for new possibilities: shifting the burden of increasingly complex management and filtering of authentic identity from the individual to an issue of context and expectations, lawmaking, human rights, and fair use of this potentially valuable information. How can HCI professionals and academics shape the perception of SNSs with respect to these models in a way that is equitable and promotes the ethical value of identity?

From managing identity to managing fair use
In an analogous problem space, the photo-sharing site Flickr addresses users' interests in protecting the ownership of their self-published pictures, and goes further by empowering users to transcend traditional views of access, use, and ownership by granting specific permissions for reuse, remixing, and derivative works. By utilizing the Creative Commons licensing standard, Flickr encourages its users to explore new ways to derive value from sharing (in addition to selling) their work. Similarly, SNSs could imagine new ways for users to articulate expectations of privacy and permissions on the rights and reuse of their information.

Currently in SNSs, users can direct communication to a particular individual, and this communication is available to all along with that contextual information. Expressing a political view to my mother that others overhear is very different from expressing a political view to the world. This contextual information shapes how others view the information and, ultimately, whether or not I can get fired for it. Intelligent processing systems can be used to make the contextual information and expectations for use just as prominent as the information itself. There is potential here for creative and transformative ideas from HCI professionals and academics. It is ethically important, and consistent with the free flow of information that these systems depend on, to prioritize developments in these technologies over technologies that block, filter, and hide. Personal information about others' identities and beliefs only becomes unwanted "spam" when we are overwhelmed and lack adequate processing tools. With the right tools, this information can increase our tolerance and understanding of each other and contribute to shaping a fair, ethical, and humanistic social landscape.

Works Cited

Brooks, Caryn. "How to Come Out on Facebook." Time, June 2, 2009.

Chan, Sewell. "Aide Resigns Over Facebook Posts on Harvard Arrest." The New York Times, July 28, 2009.

Friedman, B., Kahn, P. H., Jr., & Borning, A. “Value Sensitive Design and information systems.” In P. Zhang & D. Galletta (eds.), Human-Computer Interaction in Management Information Systems: Foundations, (348-372). Armonk, New York: M.E. Sharpe, 2006.

Jernigan, Carter, and Behram Mistree. "Gaydar: Facebook Friendships Expose Sexual Orientation." First Monday 14, no. 10 (October 5, 2009).

Marwick, A. (2005). “I’m More Than Just a Friendster Profile: Identity, Authenticity, and Power in Social Networking Services.” Association for Internet Researchers, Chicago, IL.

Maslow, A. H. "A Theory of Human Motivation." Psychological Review 50 (1943): 370-396.

Thomas, Owen. "Bank Intern Busted by Facebook." Valleywag, October 2007. Link:

Turkle, S. Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster, 1996.

Values Engendered by Revision Control

This paper presents a Value Sensitive Design (VSD) conceptual investigation of revision control, focusing on revision control as it is employed when constructing software applications. First, the sociotechnical problem space is explicated by 1) defining revision control and 2) explaining how organizations can implement revision control through the use of specialized tools. Next, implicated values are identified and defined in terms of interactions with revision control tools. Finally, stakeholders are identified and the effect of the implicated values on stakeholders is analyzed.

Sociotechnical Problem Space
Revision control (also known as version control) is used to manage changes in documents, source code, or any other type of file stored on a computer. It is typically used in software engineering when many developers are making contributions and changes to source code files. As changes to a source code file are committed, a new version of the file is created. Versions of a file are identified either by date or by a sequence number (e.g., "version 1," "version 2," etc.). Each version of the file is stored for accountability, and stored revisions can be restored, compared, and merged.

The fundamental issue that version control addresses is the race condition created when multiple developers read and write the same files. For instance, Developer A and Developer B each download a copy of a source code file from a server to their local PC. Both developers begin editing their copies of the file. Developer A completes his or her edits and publishes the file to the server. Developer B then completes his or her edits and publishes his or her copy of the file, thereby overwriting Developer A's file and erasing all of Developer A's work.
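The lost-update scenario above can be sketched in a few lines (Python is used purely for illustration; the file server is simulated as a dictionary):

```python
# Sketch of the "lost update" race condition: two developers copy the
# same file, edit independently, and publish; the second publish
# silently overwrites the first developer's work.

server = {"main.c": "int main() { return 0; }"}

# Both developers download a copy of the file.
copy_a = server["main.c"]
copy_b = server["main.c"]

# Each edits their local copy independently.
copy_a = copy_a + "\n// Developer A's bug fix"
copy_b = copy_b + "\n// Developer B's new feature"

# Developer A publishes first, then Developer B.
server["main.c"] = copy_a
server["main.c"] = copy_b  # overwrites A's file entirely

assert "Developer A" not in server["main.c"]  # A's fix is lost
```

Without coordination, nothing even signals that Developer A's work has vanished; this is exactly the failure the two paradigms below prevent.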

There are two paradigms that can be used to solve the race condition issue: file locking and copy-modify-merge (Collins-Sussman).

File locking is a simple concept that permits only one developer to modify a file at any given time. To work on a file, Developer A must "check out" the file from the repository, storing a copy on his or her local PC. While the file is checked out, no other developer can make edits to it; Developer B may begin to make edits only after Developer A has "checked in" the file back into the repository. File locking works but has its drawbacks. It can cause administrative problems: a developer may forget to check a file back in, effectively locking it and preventing any other developer from doing work. File locking also causes unnecessary serialization: two developers may want to edit different, non-overlapping parts of a file. No problems would arise if both developers could modify the file and then merge the changes together, but file locking prevents concurrent updates, so the work has to be done in turn.
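A minimal sketch of the file-locking paradigm, assuming a toy in-memory repository (the class and method names are illustrative, not taken from any real tool):

```python
# Toy file-locking repository: checkout takes the lock, checkin
# publishes the change and releases the lock.

class LockingRepo:
    def __init__(self, files):
        self.files = dict(files)   # filename -> contents
        self.locks = {}            # filename -> developer holding the lock

    def checkout(self, filename, developer):
        if filename in self.locks:
            raise RuntimeError(filename + " is locked by " + self.locks[filename])
        self.locks[filename] = developer
        return self.files[filename]

    def checkin(self, filename, developer, contents):
        if self.locks.get(filename) != developer:
            raise RuntimeError(developer + " does not hold the lock on " + filename)
        self.files[filename] = contents
        del self.locks[filename]

repo = LockingRepo({"main.c": "int main() {}"})
text = repo.checkout("main.c", "dev_a")   # dev_a takes the lock
try:
    repo.checkout("main.c", "dev_b")      # dev_b is serialized out
except RuntimeError as err:
    print(err)                            # main.c is locked by dev_a
repo.checkin("main.c", "dev_a", text + " /* fix */")
repo.checkout("main.c", "dev_b")          # now dev_b may proceed
```

The forced failure of the second checkout is both the safety guarantee and the serialization drawback described above.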

In the copy-modify-merge paradigm, each developer makes a local "mirror copy" of the entire project repository. Developers can work simultaneously and independently of one another on their local copies. Once updates are complete, the developers push their local copies to the project repository, where all changes are merged together into a final version. For example: Developer A and Developer B make changes to the same file within their own copies of the project repository. Developer A saves his or her changes to the global repository first. When Developer B attempts to save his or her changes, the developer is informed that the copy is out of date (i.e., other changes were committed while he or she was working on the file). Developer B can then request that Developer A's changes be merged into his or her copy. Once the changes are merged, and if there are no conflicts (i.e., no changes overlap), Developer B's copy is saved into the repository. If there are conflicts, Developer B must resolve them before saving the final copy to the project repository.

A development organization may implement revision control through the use of specialized tools dedicated to source code management. Several open-source and commercial tools are available, each with its advantages and drawbacks. Subversion ("SVN"), an open-source software package, is a well-known and widely used tool (Tigris). Subversion uses a client-server model: source code files are stored on the SVN server (the "repository") and can be accessed from any PC running the SVN client, allowing many developers to work on source code files from different locations and PCs. Some key features of SVN are utilization of copy-modify-merge (and file locking if needed), full directory versioning, atomic commits to the repository, and versioned metadata for each file and directory.

Values Defined
The use of a good revision control methodology engenders several values within a development organization. This section identifies and defines some of these values.

By leveraging revision control, an organization fosters collaboration between its developers. Gray defines collaboration as “a process of joint decision making among key stakeholders of a problem domain about the future of that domain” (Gray, p.11). Source control permits developers to work in teams where each individual can contribute to the overall goal of delivering a quality software product. Each individual makes decisions on which piece of code will work best to reach that goal. The future of the domain, or software release, is defined by the collaborative effort of developers within the workspace.

Revision control usage also engenders accountability. In their book, Friedman et al. write: “accountability refers to the properties that ensure the actions of a person, people, or institution may be traced uniquely to the person, people, or institution” (Friedman). Upon change commit (i.e., submitting a change to the repository), revision control tools record the responsible developer and place a timestamp on the new version of the file. Moreover, the developer can enter comments to describe the changes that he/she has made. For these reasons, revision control tools provide a good mechanism for accountability, as a complete audit trail of change is recorded.
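A minimal sketch of such an audit trail (a hypothetical Python model, not any particular tool's on-disk format) shows why tracing is unique: every commit is stamped with a revision number, an author, a time, and a message.

```python
import datetime

class AuditedRepository:
    """Sketch of the metadata a revision-control tool records per commit."""

    def __init__(self):
        self.history = []   # list of commit records, oldest first
        self.revision = 0

    def commit(self, author, message, files):
        self.revision += 1
        self.history.append({
            "revision": self.revision,
            "author": author,                        # who made the change
            "timestamp": datetime.datetime.now(      # when it was made
                datetime.timezone.utc).isoformat(),
            "message": message,                      # why it was made
            "files": list(files),                    # what was touched
        })
        return self.revision

    def log(self, filename):
        """The audit trail: every change to a file, traced to a developer."""
        return [c for c in self.history if filename in c["files"]]


repo = AuditedRepository()
repo.commit("alice", "Fix off-by-one in parser", ["parser.c"])
repo.commit("bob", "Add logging", ["parser.c", "log.c"])
for entry in repo.log("parser.c"):
    print(entry["revision"], entry["author"], entry["message"])
```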

Another value brought about by revision control is work efficiency. This is especially true when the copy-modify-merge paradigm is utilized. The major advantage of this paradigm is that it allows developers to work individually and concurrently, thereby maximizing available development time. Compare this to the file-lock paradigm, where developers can be locked out of a file at any given time. Additionally, copy-modify-merge minimizes the coordination effort and expense between developers.

Along with the values stated above, revision control also: enhances communication between developers, prevents loss of work through backups, enables better coordination of efforts, manages code merges, and provides code stability by allowing organizations to rollback to previous versions of the code (O'Sullivan).

The most apparent direct stakeholders are the software developers. Revision control benefits developers by providing them with a more stable work environment. Without revision control, it is very easy to lose work. Race conditions can occur if multiple developers share the same copy of files. The danger of overwriting updates is real, and it grows rapidly as the project size and organization size increase. Moreover, a complete loss of data can be avoided as copies of code files are constantly being generated and backed up.

Another benefit for developers is comprehensibility of the system code lifecycle. Developers can review the ancestry of files, and by reading other developers’ comments they can discern the reasoning behind code changes. This information helps ensure that they stay the course of the current branch of development.

In a hierarchical organization, the indirect stakeholders are members of management (ex. - IT Team Leaders). IT Team Leaders are rated on how well their teams meet project timeline and budgetary expectations. Development teams have a better chance at hitting targets with a revision control strategy, as pitfalls that cause delays and unexpected costs can be avoided. Consequently, benefits of meeting targets get cascaded up to higher levels of management within the organization.

End users of the constructed software product are also indirect stakeholders. All of the benefits garnered from revision control are ultimately parlayed into building a more usable and functionally accurate software product that is intended for end user consumption.

Collins-Sussman, Ben. "Version Control with Subversion". Accessed 10/20/2009.

Friedman, B., Kahn, P. H., Jr., & Borning, A. (2006). Value Sensitive Design and information systems. In P. Zhang & D. Galletta (eds.), Human-Computer Interaction in Management Information Systems: Foundations, (pp. 348-372). Armonk, New York: M.E. Sharpe.

Gray, Barbara. Collaborating: Finding Common Ground for Multiparty Problems. San Francisco: Jossey-Bass, 1989.

O'Sullivan, Bryan. "Making Sense of Revision-control Systems". ACM. Accessed 10/20/2009.

Tigris. "Subversion Home Page". Accessed 10/19/2009.

Privacy in Social Computing

The computer has evolved tremendously over the last half century, to the point that today’s handheld devices are many times more powerful than the original mainframes. Today’s devices are also far more interconnected, with both the internet and the other devices around us. This means that information is flying, so to speak, everywhere at an amazing rate. Combined with humans’ social nature, it is no surprise that this all led to the sprouting and rapid growth of social networking sites. Because the underlying idea behind social networking is the constant updating of personal information, privacy in the field is an ever-present concern. With today’s networked applications, there is risk of some personal information being shared. This notion should, to a degree, be accepted by users, but the real value sensitive challenge is to determine the degree of acceptability this tradeoff creates for the user and their sense of privacy. The entire realm of privacy is a touchy subject, and will continue to be so as our online information base grows.

The social networking swell started several years ago and has grown remarkably to its current state; the big three networking sites, Facebook, MySpace and Twitter, recorded 124.5 million, 50.2 million, and 23.5 million unique visitors, respectively, in September 2009 (1). With this many unique users, many of whom come back frequently (Facebook had 2.3 billion visits in September (1)), it is easy to see the immense popularity of the networking trend. This networking movement plays on the natural human tendency to crave social interaction: while we are individualistic, the greater draw is to interact with others. Networking sites cater to our individualistic desires by letting you make your profile your own to varying degrees: at one extreme is MySpace, with virtually no limit to what can be done to your page, in contrast with more professionally oriented networks that impose stricter limitations. At the same time, they allow interaction between you and your ‘friends’ by sharing all sorts of personal information: text, pictures, audio, and video. With the number of users frequenting social networking sites, it is easy to imagine the amount of data being created.

Each user profile on a social network (I will specifically be looking at Facebook, as I am most familiar with it) contains all sorts of information about the user: demographic data, interests, hobbies, organizations, jobs and so on. The powerful thing about this data is that it is largely accurate, according to Sree Nagarajan, founder of Colligent, a company that provides our data to marketers (2). This accuracy of data, combined with its abundance is a dream come true for advertisers. It allows for targeted advertising to happen on a page by page basis: the ads that each user sees can be tailored specifically to his or her interests and demographics as well as the actual content on the page they are on. This is all made possible by Facebook’s (and other networks’) very uniform and consistent presentation of data, along with the fact that it is largely public (though the definition of what is truly public is constantly being refined). Facebook makes matters even easier by offering an API that allows for scraping of data from users’ news feeds on the fly. All these factors add up to a platform that is a data-mining wonder.

Of course, privacy is a huge concern when so much personal information is so widely available. Wikipedia defines privacy as “the ability of an individual…to seclude themselves or information about themselves and thereby reveal themselves selectively” (3). This definition works very well with the social network users’ needs to have their information visible and easily accessible to their social contacts and out of the hands of strangers. Like any other web application, networks assure users that “of course your privacy is very important to us” (4). But how true is this statement?

While investigating privacy in location-enhanced computing, Freier et al. developed a set of features that have direct impacts on user privacy, some of which are applicable to social networks as well: interpretability, awareness, control, scope of disclosure, and risk and recourse (5). As stated earlier, social network data is very standardized and very easily interpreted. This makes both legal and illegal searching of the data extremely easy, and both reduce privacy. The question to ask here is whether this extent of standardization and interpretability is really necessary for the operation of the social network. On the one hand, the networks could make the data harder to mine and less accessible, but they would then be biting the hand that feeds them; advertisers would surely be displeased. A case could be made for both sides, though unfortunately, the side with the most money, the advertisers, would surely win.

Awareness of what information is being shared with whom is an important part of protecting your privacy and goes hand in hand with the ability to control the flow of information. Freier et al. classify systems into two categories: invisible and transparent. Invisible systems do not bother users with notifications for their awareness, whereas transparent systems disclose all information regarding privacy. On the surface, it would seem as though transparent systems are the correct design choice in terms of value sensitive design and that users would embrace them, while systems designed to be invisible to the user would surely fail. However, studies show otherwise. For example, the User Account Control feature introduced in Windows Vista was supposed to make users aware of when system settings were being modified and provide control to allow or deny the change. The aim was to preserve the security and privacy of users’ computers, both very important values in most users’ minds. However, after launch, many users wound up turning the feature off, despite the fact that it tried to inform the user of an issue and provide control over how to proceed. Perhaps this was due to a poor implementation, but there may be other reasons. Bonneau and Preibusch conducted an extensive study of privacy features in the social networking landscape and found trends that one would not otherwise expect (6). First they split the population into three groups: the marginally concerned, the pragmatic majority, and the privacy fundamentalists. They discovered that the majority of users, the pragmatic majority, claimed to be concerned with privacy, but given an attractive service or monetary rewards, quickly forgot about it. In addition, it was shown that the more assurance of privacy a social site provided, the less comfortable non-fundamentalists became.
In other words, minimizing the sense of privacy in a site, while actually providing it was the best approach to appease all three user groups. In addition, the study found that social network sites (especially Facebook, it was the worst offender) tend to bury privacy settings deep in the site settings. This makes it difficult for users to opt in or out, depending on the situation, and only the dedicated fundamentalist group described above bothers to look at and modify them. All this contradicts the seemingly common sense idea that transparent systems would be more welcomed by users and points to the fact that people are content with invisible systems.

The scope of disclosure is very important in analyzing privacy in social networks. Because of the different classes of people that a user interacts with (direct friends, friends of friends, strangers, etc.) there need to be definitions of what different user groups can see. The different classes defined by Freier et al. applied well to the location-enhanced devices they discussed, but the classes Preibusch et al. defined are much more appropriate. They suggest the data classes private (used only internally), group (seen by friends), community (seen by users of the social network regardless of friend status), and public (seen by anyone, regardless of social network status) (4). In addition to these definitions, I think we can expand the group definition to reflect the fact that users can have actual friends and people in their network (i.e. RPI), two groups with whom users can have different types of interactions. Within the context of social networks, these classes are the bread and butter of privacy settings: the nature of the sites requires information to be shared, and users need power over who sees what parts of their profile. Tied to scope and disclosure is the risk and recourse metric of measuring privacy. This feature deals with the sensitivity of information versus the ability of users to hold accountable those who use their information inappropriately. Unfortunately for social network users, their data is often very sensitive and their options for recourse very limited. For example, a study found that many users on social network sites accept ‘friend’ requests without any checks (13% on Facebook, 92% on Twitter) and post their address information, as well as their vacation plans (7). The combination of these three factors makes social networks a great new place for burglars to look for homes to hit.
Granted, these events are much more severe and rare than most other inappropriate uses of users’ information, but regardless, users have very little recourse against abusers of the system as they are often unknown. However, it brings up an interesting point: how much of the users’ privacy concerns are brought on by uninformed or foolish behavior, and how much should and can social networks do to prevent them? Common sense (and our mothers) tells us not to accept candy from strangers. The same principles apply to social networks, and if users ignore them, then they are asking for trouble. As for the social networks, it would perhaps be possible to create algorithms to analyze suspicious user friend requesting patterns, though the effectiveness and ethics (privacy included) of this would be questionable.
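The disclosure classes above can be read as an ordered set of access levels. The following Python sketch is my own illustration (the names and the extra "network" level are additions per the expansion suggested earlier, not any site's actual implementation) of how a privacy check against such classes might work:

```python
from enum import IntEnum

class Scope(IntEnum):
    """Disclosure classes, most to least restrictive. PRIVATE, GROUP,
    COMMUNITY, and PUBLIC follow Preibusch et al.; NETWORK is the extra
    level proposed above for members of the user's network."""
    PRIVATE = 0    # internal use only
    GROUP = 1      # direct friends
    NETWORK = 2    # same network (e.g. a school)
    COMMUNITY = 3  # any logged-in user of the site
    PUBLIC = 4     # anyone, regardless of social network status

def visible_to(item_scope, viewer_class):
    """An item is visible when the viewer's closest relationship class
    is at least as restrictive as the item's disclosure scope."""
    return viewer_class <= item_scope


profile = {"name": Scope.PUBLIC,
           "birthday": Scope.GROUP,
           "vacation_plans": Scope.PRIVATE}
print(visible_to(profile["name"], Scope.PUBLIC))      # a stranger sees the name
print(visible_to(profile["birthday"], Scope.PUBLIC))  # but not the birthday
print(visible_to(profile["birthday"], Scope.GROUP))   # a friend does
```

The point of the ordering is that every profile item gets exactly one scope, and every viewer falls into exactly one relationship class, so "who sees what" reduces to a single comparison; keeping vacation plans at the private end is precisely what protects against the burglary scenario described above.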

There are several major stakeholders in the system, both direct and indirect. First, the most obvious direct stakeholder is the user base of the social networks. They are the group around whom the entire system is designed and built, and to whom the advertisers push products. The advertisers and marketers are another large stakeholder, though an indirect one. They communicate with the companies that mine the users’ data and sell it to them to provide targeted advertising. The data mining companies are direct stakeholders as well. These stakeholders sit on opposite ends of the privacy issue; the users desire more privacy whereas the miners and advertisers want more lax privacy policies. Which side is right is debatable. While user privacy is an important value that designers should embrace in all applications, as the study above showed, most users forgot about their privacy concerns once given a reason, usually an attractive service. The advertisers, on the other hand, stand to benefit greatly from looser restrictions, allowing them to receive more information and serve better-targeted advertising. The ethical question is whether they should receive these looser restrictions, given that users would likely still use the services. It would greatly tread on users’ value of privacy, for sure, but would superior ad targeting serve the users’ needs better? Would these ads slowly move from being looked at as annoyances to being useful and actually see higher click-through rates? These are definite questions to consider and incorporate into future privacy decisions.

Works Cited

1. Facebook vs Myspace vs Twitter. [Online] [Cited: 10 12, 2009.]

2. Buskirk, Eliot Van. Your Facebook Profile Makes Marketers' Dreams Come True. WIRED. [Online] 4 28, 2009. [Cited: 10 12, 2009.]

3. Privacy. Wikipedia. [Online] [Cited: 10 14, 2009.]

4. Preibusch, Soren, et al. Ubiquitous Social Networks - Opportunities and challenges for privacy-aware user modelling. Corfu : s.n., 2007.

5. Freier, Nathan G., et al. A Value Sensitive Design Investigation of Privacy for Location-Enhanced Computing. Seattle : s.n.

6. Bonneau, Joseph and Preibusch, Soren. The Privacy Jungle: On the Market for Data Protection in Social Networks. Cambridge : s.n.

7. Gonsalves, Antone. Social Networkers Risk More Than Privacy. Information Week. [Online] 8 27, 2009. [Cited: 10 13, 2009.]

Power to the People

The mindset for designing software has evolved considerably over the last several decades from being developer centered to a much more user centered approach. This evolution was, perhaps, partially driven by the successes of ethnographic research being conducted in other industries when designing products. The transition to the computing realm seemed only natural as the ubiquity of consumer computing grew, and was accelerated by the incredible uptake of internet applications. Because users rely heavily on various applications in different parts of their lives, the need for more usable interfaces became an urgent concern. However, developers were ill trained in usability practices and thus poorly equipped to create good interfaces. Having a team of designers work on the GUI seemed like the answer, but even then there were inadequacies with the resulting product. The contextual design (CD) methodology was formed to address all these issues and produce products designed specifically around user concerns. The results from projects designed with CD in mind are often very positive, and embrace many aspects of user and participatory design.

As Blomberg & Burrell pointed out, ethnography was brought into the spotlight when the internet boom made creating software for home consumption a big business (1). This brings up an interesting point: why were ethnography and contextual design not utilized as much when software was created mostly for business use? Are business users more prone to accept bad UI? The argument could be made that since business users are locked into whatever software the company provides (be it internal or external), they have no other options or anything to compare it to, no matter how frustrated they are. The company may have chosen the software because of cost, compatibility, or any other business reason. In addition, after a certain amount of time, the users likely adjust to the unintuitive interface and system structure, and are forced to accept the strategies and workflow as they are (2). Another argument is that the business user is easier to design for because they are more predictable than the wide array of different home users. The tasks and different ways home users use a given program may be much more diverse than the comparatively rigid structure of a company. In most companies many processes are very well defined, allowing for much easier analysis, and not necessarily requiring an entire ethnographic study.

Ethnography, in the context of HCI, is a study aimed at identifying opportunities for enhancing experiences (1). Contextual design, on the other hand, can be summed up as the approach to designing software based on an understanding of how the customer works (2). Combining these two practices in designing software has one colossal effect on the project: users are involved at nearly every point. The concept is simple: if you don’t understand what the user does, it doesn’t matter how visually appealing the interface is; the end result will fail to meet the user’s needs. To that end, contextual design has a major theme: you need to understand what users do. In this regard, the user is the expert. However, an interesting contradiction appears when one attempts to put this idea to use. It turns out the user is actually often unable to convey exactly what it is they do (3). They give an impression, which in many cases is inaccurate. Because of this, the best way to understand exactly what a user does is observation. There are different schools of thought on how this should be done: one observer, multiple observers, camera, audio, or a combination of several methods. There are certainly pros and cons to each method, but an interesting rift appears between in-person observation and more passive methods such as video recording. Are user responses and behaviors going to be different if they are observed in person rather than on film? If so, the passive method would produce a purer representation of the tasks the user performed, even more so if the user wasn’t informed of exactly when the recording was occurring (ethics concerns naturally surface with such a scenario). The same concept applies when speaking with people in follow-up interviews/debriefs. When in-person interviews are involved, users tend to not always tell the entire truth due to being uncomfortable or overly privacy conscious (1).
Would a phone interview relieve the pressure and increase the comfort of the interviewee? Would not personally knowing the interviewer put the user at ease and allow them to open up more? If this were the case indirect interviews would seem to be more useful, though the visual element present in face to face interviews would be eliminated, so the best method would need to be evaluated on a case by case basis, perhaps with input from the user.

In interacting with the users through the design process, the goal is to get a grasp of what they do and if there already is a current system, what inadequacies they face with it. When this information is gathered through observations and follow-up interviews, the design team needs to analyze it, and then develop some conclusions. With these, the team can proceed to what Holtzblatt refers to as ‘visioning’ (4). Visioning refers to inventing solutions given the context of the larger practice. With all the information gathered through the observation phase, the design team should have a grip on what the user does. At the visioning stage, the team comes up with a way to modify the way the user goes about their task to incorporate new ideas. Holtzblatt stresses that this stage does not include interface choices (4). This is meant to separate the interface from the actual functionality of the application, allowing the team to perfect the actual functionality before moving onto the GUI that will drive the application.

As the popularity of home computing grew, so did the need for the software presented to consumers to be well thought out and implemented. As Beyer et al. point out, developers are not good at understanding how people work; they write code and design system implementations (3). As such, having them be the only ones responsible for the usability aspect of software made little sense. The users are the ones who know exactly what they do, what workflows they follow, and what problems they encounter. Because of this, involving them at every point in a software project is vital for the end result to be something a consumer would find useful. Only with user involvement can design teams take workflows and feedback and turn them into software that makes sense for the user. In hindsight, it is unclear why the ethnography and contextual design movements didn’t grab hold earlier. One thing is certain, however: after years of bad software design, it is good to see companies finally recognizing their customer base for the invaluable resource that they are.

Works Cited

1. Blomberg, Jeanette and Burrell, Mark. An Ethnographic Approach to Design. [book auth.] A Sears and J Jacko. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. New York : Taylor & Francis Group, 2008, pp. 965-988.

2. Contextual Design. Beyer, Hugh and Holtzblatt, Karen. 1997, ACM Interactions, pp. 32-42.

3. Beyer, Hugh, Holtzblatt, Karen and Baker, Lisa. An Agile User-Centered Method: Rapid Contextual Design.

4. Holtzblatt, Karen. Contextual Design. [book auth.] J. Jacko and A. Sears. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. New York : Taylor & Francis Group, 2007.

Contact At the Expense of Privacy or: How Google Asked Me to Stop Worrying and Ride the Wave

Brian R Zaik

Problem Space: Synchronous connected communications systems (Google Wave)

Value Implicated: Personal privacy

Direct Stakeholders: Current testers of Google Wave preview, future users of Google Wave, Google Inc., non-Google developers of Wave apps

Indirect Stakeholders: Enterprises and other organizations of people, friends, family, and colleagues of Wave users who are not themselves ‘catching the Wave’

Introduction to the Problem Space

On May 27, 2009, Google introduced what they promised would become the next generation of Internet communications – Wave. This technology was introduced as “equal parts conversation and document,” a merging of email, instant messaging, wikis, and social networking (2). It is a synchronous communications system, initially based on the Web, which focuses on strong collaborative and real-time conversation threads built into “Waves.” These Waves are server-hosted XML documents that allow seamless and low-latency concurrent modifications (5). This means that, by default, users will be able to see the current status of all of their Wave contacts – online, offline, or away – and even engage in synchronous conversations on the Web. I will be able to see my friends as they type out messages, make changes, misspell words, add maps and other widgets, and even play Sudoku with me. This is fundamentally different from email, which is based on the chronological ordering of discrete messages or message threads; indeed, Wave was built largely as a response to the perceived deficiencies of email and other traditional communications media.
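The “low-latency concurrent modifications” described in the white paper rest on operational transformation, in which concurrent edits are rewritten against one another so that every participant converges on the same document. The following is a deliberately minimal illustration (Python, insertions only; this is a textbook sketch, not Wave's actual algorithm):

```python
def apply_insert(text, pos, s):
    """Apply an insertion operation to a document string."""
    return text[:pos] + s + text[pos:]

def transform(other_pos, other_len, my_pos):
    """Shift my insertion point right if a concurrent insertion landed
    at or before it (ties would need a site-id tie-break, omitted)."""
    return my_pos + other_len if other_pos <= my_pos else my_pos


doc = "Hello world"
# User A inserts ", there" at position 5; user B concurrently appends "!"
a_pos, a_text = 5, ", there"
b_pos, b_text = 11, "!"

# Site 1 applies A first, then B transformed against A
site1 = apply_insert(apply_insert(doc, a_pos, a_text),
                     transform(a_pos, len(a_text), b_pos), b_text)
# Site 2 applies B first, then A transformed against B
site2 = apply_insert(apply_insert(doc, b_pos, b_text),
                     transform(b_pos, len(b_text), a_pos), a_text)

print(site1 == site2)  # both sites converge on "Hello, there world!"
```

Convergence despite different arrival orders is what lets every participant watch everyone else's keystrokes appear in real time without the document ever diverging.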

As MC Siegler points out in his recent article on Techcrunch, Google wants to turn Wave into a dominant messaging protocol that would be shared between many different contexts (4). That means that Wave may start on the Web, but may grow to encompass a whole host of “connected” desktop widgets, messaging clients, and other programs – all of which would be at least partially based on the concepts behind Wave.

Why Privacy?

In talking about how privacy as a value is implicated by the design philosophies behind Google Wave, it’s appropriate first to define what I mean by privacy. Merriam-Webster formally defines privacy as “the quality or state of being apart from company or observation” (3). This definition also includes the concept of “unauthorized” access. Recent concerns raised with changes to Facebook have sparked debate about how much end-user control must be designed into systems in order to let users tell the system what they consider to be unauthorized access. Thus it becomes the responsibility of the product designers to ensure that user privacy can be both identified within the context of the product and protected.

How Does Google Wave Implicate Privacy?

It is a cloud service that is based on Google’s central servers, rather than individuals’ own machines.

First, there are the overtly high-level issues with allowing companies like Google to have access to all conversation data. The white paper overview of the synchronous technologies behind Google Wave clearly states that the complete thread of multimedia messages (blips) is stored on a central server owned by Google (5). This is already the case with IMAP and Web-based email provided by Google (Gmail) and other vendors, yet now Google will have access not just to the messages generated by users but also to their interactions within the Wave application. Every move in a Sudoku game will be tracked by Google’s servers, as well as every map marker added by users wanting to share geographical points of interest. Google does not plan on retaining this information outside of communicating it in real-time to other users, but the very nature of the data the company is tracking has changed with the advent of the Wave protocol.

It encourages “always-on” communication by eliminating once intentional barriers.

Second, and more important, are the low-level issues that are harder to realize: that in essence, the Wave paradigm encourages such immediate communications that users enter the personal spaces of other contacts with each Wave-based interaction. Users can see when their acquaintances are online, and they can even see when others type. Email was created in part as a response to the telephone and its sense of immediacy; email intentionally erected a barrier to immediate communication by promoting a design that encouraged users to respond to messages at their own schedules (1). It’s almost like users of email are posting to a newsgroup – who knows when people will read it, if at all? Privacy is bolstered by the deliberate or unintentional barriers to synchronous communication within the medium: message delivery failures, slow networks, “I’m away from my computer,” and others.

What we see here with Wave that was never quite the case with existing forms of communication is the removal of most barriers to communication. Google Wave is a technology built to encourage people to be in sync with one another (4). This sounds on the surface to be a magnificent improvement, yet the design of the system must be sensitive to how users are likely to use (and abuse) Waves. The vision of synchronous communication that Wave promotes can become a threat to personal privacy if appropriate safeguards are left out of the overarching design.

My interest in Wave as a communications medium was born on May 27, 2009, when the Google I/O conference talk and demo spread through the tubes to nearly every major news outlet on the Web (2). But it peaked only recently, with the invitation I received from a friend to enter the closed preview of Google’s Web-based Wave tool. I was included in a big Wave with lots of other people, many of whom I do not know myself (and probably don’t want to know). While this may have happened with mass emails, with email I would never have been able to see when people are online and when they are typing a message to which I am attached. While I can’t see their other messages in Wave, the implications of a technology like this are somewhat concerning from a privacy standpoint. The software is designed in such a way as to encourage users to open the door to strangers, even on behalf of others. And when someone else opens my door to strangers without my permission, it starts to feel as if they too are in the room when I am communicating within that Wave. If they demand an urgent response from me, there are few ways to mask whether or not I am online or available (or interested) to respond to them. With asynchronous email, this is not the case.

Though Google has promised to give the user an ability to turn off the software’s transmission of letter-by-letter messaging and online status, these changes have not yet been implemented in the software. Even so, Wave’s default behavior will probably force users to opt out of these features. Given that historically the majority of users are unlikely to change default settings, it seems likely that these potential invasions of personal privacy will remain for many Wave users.

Wave is designed to act as a ubiquitous, context-merging communications protocol. Why is it different from instant messaging and wireless email on mobile devices? An instant messaging client can be turned off, or set to make it seem as though the user is “away” and unable to communicate; Google Wave, by contrast, is meant to handle communication in general, rather than just casual, friendly conversation or work communication confined to the office. It is, as Techcrunch author Siegler puts it, “not just a service…[but] perhaps the most complete example yet of a desire to shift the way we communicate once again” (4). In past class discussions in the Theory & Research in HCI course, we have considered how the contexts of work, play, and casual communication could be merged within a single technology, creating the expectation that a user will always be within reach for communication. It wouldn’t matter whether she is in the office, at home, or on the road, in work mode or up for casual conversation. The paradigm pushed by Google Wave seems to have the strongest chance of becoming this all-encompassing, merged-contexts communications standard. And that, of course, has privacy implications for the end user.

In many cases, users create different identities, and different expectations, for how they communicate with others online. A gaming chat account belongs to a different context, and thus follows different rules and expectations, than a work email account. Google Wave is built to merge all of these media: to play games together, talk about work issues, brainstorm collaboratively, and connect casually. It remains to be seen exactly how people will use the Wave platform, but this merging of activities could undermine users’ ability to keep separate contexts separate, and with it their personal privacy.

Direct Stakeholders

Current Testers of the Google Wave Preview: These are the front-line fighters in the battle to defend user privacy in Google Wave. They were invited, either by Google or by other Wave users, to join the exclusive, closed test program. Many of the features Google has promised have not yet been implemented in the preview, so it falls to these users to ensure that Google remains true to its word. In fact, I believe it is necessary for preview testers to vocalize their concerns about how the design of Wave is progressing; there must be active communication between these people and the Google Wave team. Google, in order to create a product that is responsive to users and the personal privacy they represent, should hold online town hall meetings or otherwise actively seek out feedback, to ensure that the final version of Wave preserves users’ ability to limit what is shown to whom (such as live typing). These stakeholders should also continue to act as stewards for the much larger group of future users, posting analyses such as this one and thought-provoking commentaries to blogs and other online communities.

Future Users of Google Wave (employees, casual users): These are the people who will actively use the final release of Google Wave; they will choose whether to adopt the communications protocol in their daily lives. They may belong to organizations or remain individual users, and they will decide how heavily to rely on the Wave paradigm for collaboration and communication. Though Google will hopefully offer extensive feedback mechanisms once the full product rolls out, these stakeholders will not be able to voice their concerns about Wave and its implications for privacy until then, even though they will collectively shape how society views Wave once it is finally out of beta.

Google Inc.: Google is the company behind Wave, as well as the core developer of the first Wave-based applications within the protocol. Google is most certainly a key stakeholder in this problem space, especially since both the company’s public image and its future profits will be affected by how favorably the public receives Wave and the ideas it represents. Google is promoting the philosophical paradigm for communication behind Wave, and its developers must keep well in mind the desires and interests of users across the board, from industry to casual communities to specific contexts (such as gamers or interest-based groups).

Non-Google Developers of Wave Apps: These developers are already springing up, and they represent the future of Wave as Google envisions it (2). Wave is meant not just as a Google-initiated project but as a federated protocol for future communication. These developers may steer users in directions different from Google’s interests, and they too must pay careful attention to the privacy ramifications of the new kinds of interaction they enable between people.

Indirect Stakeholders

Enterprises and Other Organizations of People: Blackberry introduced “email at your hip” years ago, and the iPhone and other “always connected” devices have since made email more of a must-have tool than ever before. These devices have reduced how much freedom a person has to stay off the grid and out of the observation of others, but they still depend on email, with all of its barriers to communication. Google Wave promises to connect people within organizations more effectively than other forms of communication, and it could also set a strong expectation within groups that members make themselves available to respond at once. Depending on how people respond to the philosophies Google Wave espouses, company culture will be indirectly affected.

Friends, Family, and Colleagues of Wave Users Who Are Not Themselves ‘Riding the Wave’: The kind of “communication immediacy” that Wave promotes is a paradigm shift that could easily spread into the rest of society. Email was created to remove the immediacy of phone calls, intentionally erecting a barrier so that people could respond to communication on their own schedules; Wave, however, seems to embody the worst of what email has become: immediate and presence-demanding. As we have seen with mobile phones, texting, Twitter, and Facebook, social networking has permeated much of modern culture, and more traditional interactions such as letter-writing and face-to-face conversation have been shortened, quickened, or otherwise diminished by these new ways of communicating. Given this trend, it is more than likely that the synchronous, collaborative “me-too” communication at the heart of Wave will affect everyone who interacts with Wave users, even outside of Wave itself.


  1. Cubrilovic, Nik. "Relevance Over Time." Techcrunch. Techcrunch Inc., 12 Oct. 2009. Web. 21 Oct. 2009.

  2. Google Wave Developer Preview at Google I/O. Perf. Lars Rasmussen. YouTube. Google Inc., 28 May 2009. Web. 20 Oct. 2009.

  3. "privacy." Merriam-Webster Online Dictionary. 2009.
    Merriam-Webster Online. 21 October 2009.

  4. Siegler, MC. "Google Wave And The Dawn Of Passive-Aggressive Communication." Web log post. Techcrunch. Techcrunch Inc., 12 Oct. 2009. Web. 19 Oct. 2009.

  5. Wang, David, and Alex Mah. "Google Wave Operational Transformation." (2009). Google Wave Federation Protocol. Google Inc., 28 May 2009. Web. 21 Oct. 2009.