Looking back often reveals new perspectives and deeper levels of understanding of the present, and can provide insights into the future. Why, when computer-mediated collaborative communication (cmcc) technology such as the person-to-person video and shared-desktop tools Douglas Engelbart envisioned in his 1962 framework and demonstrated in 1968 has been around for decades, are we only now starting to use such tools? Has the downturn of the economy been the catalyst for this new paradigm of communication? It is difficult not to find news of ways enterprises are cutting back on expenses to remain viable. One significant expense shared by many is employee travel. Costs are not limited to the obvious expenses of hotels, food, transportation, and so on, but extend to the opportunity cost of the employee’s lost productivity while traveling. Is this the first time since the early 1960s that the economy alone could drive the widespread use of cmcc, or are there other factors contributing to its popularization?
Has the technology been so cost prohibitive that only a few could afford to develop cmcc tools based on deep, well-developed communication models? Licklider and Taylor suggested in 1968 that most governments could not afford it, and went on to argue that a time would come when governments would not be able to afford NOT to. Has that time finally come, roughly forty years later? Are there other factors contributing to this lag in the evolution of our communication?
Can the resistance to wide use and adoption of cmcc tools be attributed to other factors, such as current communication models that cannot be aligned with the technology? If the notion that communication must happen in person, face-to-face, to be effective is released, the experience has the potential to transcend imaginable possibilities and to offer the development of synergies not yet conceived. If the “traditional” face-to-face model can be expanded to include multi-faceted computer aid, communication can become stigmergic in nature. With cmcc tools available anytime, anywhere, colleagues can collaborate based on their energy, interest, and available time. The process of achieving goals is not stalled waiting for leadership to make assignments and check work. Research can be performed to produce richer contributions, translating into more meaningful communication. Models such as those suggested by Licklider and Taylor would need to be reassessed and redefined.
I am in agreement with Licklider and Taylor: no one person knows all the information pertaining to a particular subject. Humans, for the most part, want to communicate and be social. One could also argue that we are natural collaborators. Once one has experienced the synergies of real collaboration that can be achieved so elegantly using cmcc tools, it is difficult to revert to more solitary models of working together, such as collaboration by staple. We have all seen it, or perhaps even personally experienced it. A task is assigned. The work is then divided amongst the assignees. Each person does their assigned part, on their own, in isolation from the others. In the best case, they meet before completion to merge the individual sections, discuss the pieces, and decide how best to meld them together. Copy and paste is often the “glue” that pieces the document together.
In contrast, in truly collaboratively crafted work it is difficult, if not impossible, to discern individual contributions, as all participants have taken part in the work as a whole, not just their own part. This brings back the notion of breaking out of traditional communication models and moving toward models that are enhanced by the technology and promote new ways of working and thinking.
My apologies to everyone for posting this late. ~Jami
Tuesday, September 15, 2009
Augmenting Human Compassion
“Augmenting human intellect” represents one of the continual goals of any well-designed technology. In the course of our readings, three major influences have sprung up time and again as to how we, as a society, may go about using technology to evolve our cognitive capabilities. For Douglas Engelbart (1962), technology may do so by leveraging already existing perceptual mappings or by bringing the mental abilities of a person up to a level of more complex thought through various methods including “streamlined terminology” and “powerful concepts.” Licklider (1960) also defines a similar concept called “man-computer symbiosis”, a system whereby humans and computers work in conjunction to “think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” William Ross Ashby (1956) also wrote of “amplifying intelligence” in his work on cybernetics. However, intelligence needs to be countered and balanced by basic moral and ethical considerations. I would argue that the foundation of these considerations lies squarely with an entity’s capability to feel compassion. Additionally, with a certain level of compassion (and intelligence), ethics, as in a listing of rules or system of conduct, becomes secondary. So, instead of concentrating on the aspect of human cognitive evolution defined as intellect, I would like to find methods for augmenting human compassion using digital media.
Compassion, as a component of a healthy mental state and as a necessity for large-scale social organization, is a sometimes marginalized concept. That may be because developing compassion in children is seen as the responsibility of parents and families. Certain concepts are only slowly adopted into the mainstream’s consideration. As it is, research into human-computer interaction focuses mainly on functionality and usability. Even the more human-centered designs are often driven by business considerations such as turnaround and click-through analytics.
For example, while reading Myers’ A Brief History of Human Computer Interaction Technology, I found that the introduction clearly states that his history only covers the “computer side of HCI” and that “a companion article on the history of the ‘human side,’ discussing the contributions from psychology, design, human factors and ergonomics would also be appropriate.” This “human side” approach would form the basis of my research project for determining how one might augment human compassion.
Discovering what makes one more compassionate would be the first topic for research. Within the context of digital media and within the constraints of one semester, it seems daunting to narrow compassion down to a measurable aspect, but I hope that by making an open call for ideas, some epiphany will come about.
From the historical perspective, we can see that in developing his conceptual framework for augmenting human intellect, Engelbart defines the objectives for his study and covers his basic perspective. He promotes leaving room for intuition, or a human’s “feel for a situation”. For augmenting compassion, I would say that one would have to leave room for epiphany as well.
My first avenue for exploration could include researching whether or not any current internet memes act to augment compassion. From cute LOLCATS with funny captions to YouTube videos like Christian the Lion, does sharing these with others help to augment our society’s overall level of compassion? And, conversely, is sharing morbid imagery damaging to compassion? One caveat comes with the level of interaction that might be necessary for long-term effects. If one sees something, is it enough to have a persisting effect? Or must one also be involved somehow to ensure a stable change in mentality?
Given these possibilities for investigation, another avenue for exploration that might prove to engender long-term increases in compassion levels would involve the integration of a participation component through interactive art or music. If a lack of compassion stems from a lack of empathy with others or from a disconnect from humanity or nature, then a key component of developing compassion in others would involve creating a palpable connection to others and, thereby, to humanity in general. With interactive art, the person becomes a component of the creation, a powerful metaphor that might prove helpful for compassion development. However, a connection beyond the computer might also be necessary for augmentation. An association from human to art piece to creator of art piece to humanity would be ideal.
Following Engelbart’s format, the objective of a study taking in the previous suppositions and conjectures would include the following goals: (1) to find the factors that determine a given individual’s level of compassion; and (2) to develop methods that would act to augment human compassion using digital media. Engelbart’s specifications for his framework still fit for this research direction.
Step one would be to find a test for compassion so that quantitative results can verify any changes over time, from before exposure to the stimulus to afterwards. Step two would involve testing non-participatory stimuli such as the YouTube videos for changes in levels of compassion. Step three could then cover participatory situations of varying complexity.
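As a loose illustration of step one, here is a minimal sketch of how pre- and post-exposure scores from a hypothetical compassion questionnaire could be compared quantitatively. The scale, the sample data, and the variable names are all assumptions made for illustration; they are not drawn from any validated instrument.

```python
# Minimal sketch: comparing hypothetical compassion-questionnaire scores
# measured before and after exposure to a stimulus (e.g., a video).
# The scores and the scale itself are illustrative assumptions only.
from scipy import stats

# Hypothetical questionnaire totals for ten participants.
pre_scores  = [22, 18, 25, 30, 19, 27, 21, 24, 26, 20]
post_scores = [24, 19, 27, 31, 22, 27, 23, 25, 29, 21]

# Paired t-test: did scores change after exposure to the stimulus?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"mean change: {mean_change:.2f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```

The same comparison could then be run for the non-participatory stimuli of step two and the participatory situations of step three.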
As this blog post/response essay is written in response to our second week’s readings on the subject of History in Perspective, any further reading suggestions along these lines of augmenting compassion, augmenting empathy, or developing emotional intelligence would be greatly appreciated. Any studies that have been performed on the effect of interactive art would also be of great interest to me. Usually I am not one for trying to pin down the exact meaning or relevance of a piece of art, but in the context of a compassionate evolution I would concede the necessity for some formal investigation into the matter.
I believe that the nexus of intelligence and compassion would negate the need for overly strict rules that may be based on a narrow or subjective morality. The ultimate goal for technological society must include room for this augmented compassion.
--Christine Rosakranse, Master's Student: HCI - RPI
Labels: Augmenting Human Compassion, communication, engelbart, HCI, IA
Consulting the Memex
A major refrain in Vannevar Bush's As We May Think is the notion of consulting – how are we to wrangle, sift, or otherwise make sense of the growing mountains of data humans are creating? Throughout his exploration of possibilities, Bush prophesies the advent of many current and emerging technologies: the digital camera – although he didn't have the terminology (page 4), the photocopier (page 4), the scientific calculator (page 7), the home computer (page 8), credit cards (page 9), the scanner (page 10), a Windows-like operating system (page 10), and arguably more – including the birth of the semantic web (page 11). Seeing how much of the world I know in 2009 was still a dream in the making in 1945, I took a brief survey of the advancements in various technologies as I have experienced them in my 38 years. Bear with me... this is illuminating considering how many of Bush's ideas have since come to exist:
The year before I was born, the floppy disk was invented, followed shortly by the dot-matrix printer and the microprocessor. And although the existence of the VCR coincides with my birth, it wasn't until the late '70s that it hit the commercial market.
1972 marked the appearance of the first word processor – and the first video game, Pong. The next year, Xerox announced the creation of Ethernet. In 1975, the laser printer was invented, followed by the ink-jet printer in '76.
The first spreadsheet was released in 1978, the year before cell phones and the Cray supercomputer were invented. It's no surprise that MS-DOS and the first IBM PC share a birth year (1981). Three years later, Apple introduced the Macintosh, and CD-ROMs hit the streets. Windows was released in 1985, during my last year of middle school. By the time I graduated, digital cellular phones and high-definition television had been invented.
My freshman year of high school, I owned a Kaypro 4-84 with two 5¼ inch floppy drives, a monochrome monitor, an internal 300-baud modem, and a processor speed of 4 MHz. It was a portable computer weighing in at 36 pounds.
The year after I graduated, Tim Berners-Lee created the World Wide Web, along with its protocol (HTTP) and markup language (HTML). Five years later, it was truly world wide. About that time, the DVD came into existence. In 2001, our relationship to information changed when Apple introduced the iPod. Half a decade later, around 2006, YouTube and Twitter both hit big.
After all that has come to pass – much of it predicted by Vannevar Bush – we are still precisely in the same situation as we were in 1945 in regard to consulting (or navigating) the great storehouse of human knowledge. We've moved past film and microfiche – both on the road to technological extinction. We're in digital land now, and we've got more information than can be conceived. Part of our modern quandary is that it isn't just scientists creating and parsing the data. The digital realm is far more egalitarian – anyone with access can create content. More than at any point in our history, the catastrophe of “truly significant attainments becom[ing] lost in the mass of the inconsequential” is a risk, especially as we move fully into the age of crowdsourcing.
We are now living in the information age, and many great minds are now bent to the task of sorting out how best to deal with the volume of information our age is issuing. As with many of Bush's predictions, I think he hit the nail on the head in regard to creating associative links between points of knowledge. We're already doing that with tags. Tagging may be the way information is cultivated in the future: “Selection by association, rather than by indexing, may yet be mechanized” (10).
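As a small illustration of the contrast Bush draws, here is a minimal sketch of “selection by association” built on tags rather than a single hierarchical index: any item can be reached from any other item it shares a tag with. The documents and tag names below are made up for the example.

```python
# Toy sketch of "selection by association": items are retrieved through the
# tags they share, not by their position in a single hierarchical index.
# All records and tag names here are hypothetical.
from collections import defaultdict

documents = {
    "memex-essay":  {"bush", "hypertext", "1945"},
    "nls-demo":     {"engelbart", "hypertext", "collaboration"},
    "hci-history":  {"myers", "hypertext", "gui"},
    "lolcat-video": {"memes", "compassion"},
}

# Invert the document -> tags mapping into a tag -> documents index.
tag_index = defaultdict(set)
for doc, tags in documents.items():
    for tag in tags:
        tag_index[tag].add(doc)

def associated(doc):
    """Return every other document reachable through at least one shared tag."""
    trail = set()
    for tag in documents[doc]:
        trail |= tag_index[tag]
    trail.discard(doc)
    return trail

print(associated("memex-essay"))  # e.g. {'nls-demo', 'hci-history'}
```

Following one shared tag to the next in this way is a crude echo of the associative trails Bush imagined for the memex.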
Logging off the memex,
Mark Oppenneer
Monday, September 14, 2009
A Brief History of Human-Computer Interaction Technology
The article “A Brief History of Human-Computer Interaction Technology” is a brief description of the history of HCI technology from the 1950s to the 1990s. In the article, Brad A. Myers discusses the importance of research at university, corporate, and government-supported research labs in the development of HCI technology. In particular, university research labs have led to many innovative HCI technologies. Figure 1 on p. 46 shows that, for the major HCI technologies, university research started earlier than corporate research or commercial product development.
Recently, HCI technology has developed and diversified rapidly, so it seems that we are trying to interact with almost every object around us in ways that weren’t possible before. However, the basic concepts and ideas about how to interact with computers or computer-equipped objects originated in research labs as far back as the 1950s. These technologies include hypertext and multimedia; ways to input information into machines, like the mouse, the tablet, and motion-sensing devices; and effective representations, like the GUI and three-dimensionality. These ways of interacting with machines have, in turn, changed the way we understand and interact with the world.
For some of these foundational technologies, even the developers did not know, when the technologies were born in a research lab, how they would be used in the future or how they would change the world. For example, the idea of hypertext, which makes today’s internet possible, was initially explored in university research labs at Stanford University, Brown University, and the University of Vermont. The hypertext idea was then developed into the World Wide Web, which was created at the government-funded European Particle Physics Laboratory (CERN) and further developed at the University of Illinois’ National Center for Supercomputing Applications (NCSA) (p. 49). This technology was not only a starting point of the internet but also changed people’s way of thinking from linear and directional to non-linear and multidirectional. All of these fundamental changes were made possible by creative experiments at research labs, and today’s HCI technology continues to be developed according to the basic concepts behind them.
Another important fact that Myers mentions is the government funding that makes much HCI research possible. Even though many technologies and interfaces in the history of HCI were funded by companies like IBM, Xerox, and Apple, the more fundamental and conceptual changes and experimental trials were started with government funding. For example, the mouse was developed with funding from ARPA and NASA (p. 47). Virtual reality technology, which is now being developed by many private companies like Microsoft and Nintendo, was also started with government funds from the Air Force and the Central Intelligence Agency.
To illustrate the importance of research labs and of government support for them, Myers describes what kinds of HCI technologies have been created and how they were developed, which provides a basic understanding of the field of HCI. Myers discusses four different types of technologies: Basic Interactions, Applications, Up-and-Coming Areas, and Software Tools and Architectures. Under Basic Interactions, he covers the early building blocks of interacting with a computer, such as the mouse, windows, and ways to manipulate objects on screen. Under Applications, he covers drawing and text-editing programs, hypertext, CAD, and graphical video games. Under Up-and-Coming Areas, he covers technologies for the near future, such as gesture recognition, three-dimensionality, VR, and AR. Under Software Tools and Architectures, he covers technologies for creating interfaces, like UI software tools, interface builders, and component architectures.
Reading this article, I thought that the importance and effect of research labs are not limited to the field of HCI. New concepts and technologies in HCI have played an important role in other fields. For instance, the development of electronic art and media art has been strongly influenced by HCI research. Myron Krueger’s early media art works, like “Videoplace”, would not have been possible without his HCI research in the computer science lab at the University of Wisconsin-Madison. His early works using interactive technology between artwork and participants have become some of the most important and innovative works in interactive art. HCI technology brought conceptual changes as well as stylistic changes in art. Interaction and participation technologies like Computer-Supported Cooperative Work (CSCW) made people think of art itself more as a process shaped by artists’ and audiences’ participation than as an object completed by an artist.
Lastly, this article focuses on the importance of research labs in the development of new HCI technologies. However, I think the role of research labs will also be very important for the evaluation of existing technologies.
Byul Shin