Though popular synthesizers like the Minimoog had been around since the early 1970s, the advent of standards such as MIDI, along with synthesizer polyphony and digitally controlled oscillators (DCOs), allowed for greater flexibility within the field of sound synthesis. These developments also allowed synthesists to store patch data digitally, eliminating the need for complicated charts that tracked every parameter setting on the knob-laden synthesizers that had been the standard only a decade before. The days of the rock-and-roll keyboardist deftly navigating an array of eight keyboards to get a different sound within a single song were over.
PROBLEMS
While new technology often allows for new flexibility and greater opportunities to experiment, it also crafts a problem space in its wake. The new technological breakthroughs in the realm of sound synthesis brought wonderful results, but they also prompted dialogue about how electronic musicians could better communicate with their peers. One such discussion surrounded the use of "presets," defined as the "out of the box" sounds that come programmed into the synthesizer. In an article written for the University of California at Berkeley, David Wessel observes, "The sad truth is, many musicians never go beyond the factory presets" (Wessel). He continues:
"there are many [synthesizer] programmers who strive for new sounds with more expressive control. These programmers must struggle with various idiosyncratic and awkward front-panel programming systems. Patch editors help, but the whole enterprise lacks coherency, consistency, and expressive power. The time has come for a common programming language to describe the behavior of our synths" (Wessel)
What Wessel realized was that musicians face difficulties just getting their machines to make their desired sounds. He argued for a standardized language for describing synthesizer behavior, just as computing itself relies on standardized programming languages (Wessel).
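To make Wessel's proposal concrete, consider what a device-independent patch description might look like. The following Python sketch is purely illustrative: every name in it (Patch, to_vendor_params, the parameter labels) is invented here, not drawn from Wessel or from any real synthesizer, and the 0-127 vendor map stands in for whatever idiosyncratic front panel a given machine actually has.

```python
# A hypothetical sketch of a common, device-independent patch language:
# the musician states what the sound should do, and a per-device backend
# translates it into the synthesizer's own parameters. All names are
# invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Patch:
    """A device-independent description of a synthesizer sound."""
    name: str
    oscillator: dict = field(default_factory=lambda: {"wave": "saw"})
    filter: dict = field(default_factory=lambda: {"cutoff_hz": 1200, "resonance": 0.3})
    envelope: dict = field(default_factory=lambda: {"attack_s": 0.01, "release_s": 0.5})

def to_vendor_params(patch: Patch) -> dict:
    """Translate the common description into one (imaginary) vendor's
    0-127 parameter map; each synthesizer would need its own backend."""
    return {
        "OSC_WAVE": {"saw": 0, "square": 1, "triangle": 2}[patch.oscillator["wave"]],
        "VCF_CUTOFF": min(127, int(patch.filter["cutoff_hz"] / 16000 * 127)),
        "VCF_RES": int(patch.filter["resonance"] * 127),
        "ENV_ATTACK": min(127, int(patch.envelope["attack_s"] * 127)),
    }

bright_lead = Patch(name="Bright Lead")
print(to_vendor_params(bright_lead))
```

The point of such a scheme is exactly the separation Wessel asks for: the musician works in one coherent vocabulary, and each manufacturer supplies the translation to its own front panel.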
Basic synthesis concepts are easy enough for most people to grasp. Oscillators, filters, and amplifiers - the core elements of synthesis - require no special electrical knowledge or advanced music theory; they usually take the form of knobs that can be freely tinkered with (a minimal sketch of this signal chain follows below). Nonetheless, as one progresses up the synthesizer learning curve, things grow more and more complicated, especially given the great diversity of hardware and software synthesizers available. Add on DAWs (digital audio workstations) like Digidesign's Pro Tools, with over 900 pages of reference and how-tos in its manual, and the learning curve becomes staggering. How can we get this wealth of information to users, especially non-professional users, who deserve the same opportunities to create and express themselves as professionals?
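As a concrete illustration of the oscillator-filter-amplifier chain just described, here is a minimal sketch in Python using NumPy. All of the numbers (frequency, cutoff, envelope rate) are arbitrary stand-ins for the knobs a player would turn; this is a teaching sketch, not how any particular synthesizer is implemented.

```python
# A minimal subtractive-synthesis chain: oscillator -> filter -> amplifier.
import numpy as np

SAMPLE_RATE = 44100
DURATION_S = 1.0
t = np.linspace(0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)

# Oscillator: a raw sawtooth wave at 220 Hz (rich in harmonics).
freq = 220.0
osc = 2.0 * (t * freq - np.floor(t * freq + 0.5))

# Filter: a one-pole low-pass that darkens the tone (the "cutoff" knob).
cutoff_hz = 800.0
alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)
filtered = np.zeros_like(osc)
for i in range(1, len(osc)):
    filtered[i] = filtered[i - 1] + alpha * (osc[i] - filtered[i - 1])

# Amplifier: a simple decaying envelope shaping loudness over time.
envelope = np.exp(-3.0 * t)
out = filtered * envelope
```

Even this toy chain has three parameters that interact audibly, which hints at why a full hardware panel, with dozens of such knobs, becomes daunting so quickly.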
VALUES IMPLICATED
Ultimately, the question at hand is one of accessibility. However, a question immediately arises: what is accessibility? The answer takes many forms. Under Section 508, an amendment to the United States' Rehabilitation Act of 1973, all federal agencies must make their computer and electronic resources accessible to people with disabilities. The amendment has served as a common template for government agencies and helps to ensure the rights of the disabled (Section). Most designers, when facing the task of designing for disability accessibility, must rely on "past experiences and best practice," with little experimental evidence to guide them (Stephanidis 1). These techniques also tend to address only the issues of very specific user groups (such as those with visual or motor impairments) (3).
More relevant to a study of synthesizers is the principle of "universal usability." According to Batya Friedman and Peter H. Kahn Jr., "universal usability refers to making all people successful users of information technology" (Friedman 1253). It is, for Friedman and Kahn, a sort of freedom from the biases that designers may or may not take into consideration when creating a new product. They identify three major areas of research and design that face challenges with respect to universal usability:
1. Technological Variety
2. User Diversity
3. Gaps in User Knowledge (1253)
In light of these challenges, Friedman and Kahn assert that universal usability is not always a moral issue -- some things simply do not need to be made accessible. They consider the example of the famed television program "I Love Lucy"; it is not a "moral good" that we make its reruns accessible (1254). However, moral imperatives suggest that many things should squarely fall under the domain of universal usability. In conjunction with Section 508, they use the example of federal statistics being available only online; it would be plainly immoral to restrict this information to those who can access a standard computer without special modifications.
Universal accessibility, too, falls into the category of universal usability. Stephanidis et al. find the principle to be much broader than designing for people with "special needs," such as the disabled or the elderly. Rather, they argue that the design implications of new technology have grown to bring together a wider range of users with an even wider range of needs, extending accessibility problems beyond the traditional views (Stephanidis 3). As designers, when we fail to address these expanding needs, we fail in our moral obligation to recognize users with different "abilities, requirements, and preferences" (3).
Why is this a moral need? As mentioned above, moral obligations are not always part of the creative process or the product itself. However, as Friedman and Kahn stress, universal usability is not only a moral idea but also a practical one: "Moreover, universal access with ethical import often provides increased value to a company" (Friedman 1254). They draw on a study of a communications company to show how an expanded take on accessibility creates circumstances in which both the user and the designer benefit.
Likewise, Stephanidis et al. apply the principle of universal design, or "design for all." They write that universal design "promotes a design perspective that eliminates the need for 'special features'" (Stephanidis 3). The researchers go on to recognize that while one broad solution encompassing everyone is an attractive prospect for designers, it will undoubtedly involve "different solutions for different contexts of use" (3).
One thing to remember and acknowledge, especially when dealing with synthesizers (which often have fixed interfaces with few options for customization), is that, as Stephanidis et al. note, "no single interface implementation is likely to suffice for all different users" (6).
Another thing to acknowledge is that synthesizer users differ greatly. Some are adept with circuitry while others are skilled pianists. Some will tinker with a sound until they have dissected every parameter a dozen times over, while others will rely on the "out of the box" sounds. In fact, factory presets themselves embody the "design for all" concept. With something as subjective as a sound, it is difficult to identify which option is objectively best; so long as a user is satisfied with a sound, preset or not, it is the correct sound for his or her project.
DIRECT STAKEHOLDERS
When we examine direct stakeholders with regard to synthesis, we often look at the producers and musicians themselves. Producers, who listen to the music as a whole and function for a recording much as the conductor of a symphony does during rehearsals and performances, often double their role: they frequently serve as programmers as well, constructing new sounds with the synthesizer itself. They are hands-on people who interact directly with the product, and, if relegated to preset sounds, they run the risk of having their creativity constrained. Thus, it becomes essential for the producer to understand the complex functions of the machines they work with.
Musicians often use different elements of the synthesizer, elements that carry their own learning curves. Most synthesizers use a piano-keyboard interface, so it often takes many years of practice to become adept at the instrument itself. Adding in "expression" tools, such as mod wheels and pitch bends, demands additional practice time to master these non-pianistic skills. Ideally, a well-designed synthesizer places its expression tools in logical positions on the interface, but in actuality this is not always the case, again forcing the user to adapt to the limits of his or her hardware.
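For readers unfamiliar with these expression tools, the sketch below shows how such gestures travel over MIDI, the standard mentioned in the introduction. The message layout (pitch bend as a 14-bit value centered at 8192, the mod wheel as Control Change #1) is part of the MIDI specification; the function name and the example byte values are invented here for illustration.

```python
# Decoding the two most common expression gestures from raw MIDI bytes.

def decode_expression(status: int, data1: int, data2: int) -> str:
    """Decode a pitch-bend or mod-wheel message from its three raw bytes."""
    kind = status & 0xF0
    channel = (status & 0x0F) + 1
    if kind == 0xE0:                      # pitch bend: data1 = LSB, data2 = MSB
        value = (data2 << 7) | data1      # 14-bit value, 0..16383; 8192 = no bend
        return f"ch{channel} pitch bend {value - 8192:+d} from center"
    if kind == 0xB0 and data1 == 1:       # Control Change #1 = modulation wheel
        return f"ch{channel} mod wheel at {data2}/127"
    return f"ch{channel} other message"

print(decode_expression(0xE0, 0x00, 0x50))  # a bend above center
print(decode_expression(0xB0, 1, 96))       # mod wheel most of the way up
```

That a single wrist gesture resolves to a 14-bit stream while a wheel resolves to 0-127 is invisible to the player, and deliberately so: good interface design hides exactly this kind of machinery.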
INDIRECT STAKEHOLDERS
The clearest indirect stakeholder with regard to the synthesizer is the listener. Electronic music grew in popularity across the 20th century and into the 21st; once the domain of classical musicians, it has become an element of popular music. And when users are unable to perform on or program their synthesizers with ease, the listener's experience suffers.
CONCLUSION
In the end, taking a broadest-user approach to synthesizers may seem to diminish the skills required to use the machines, but as we can see with the direct stakeholders, the opposite is true. Because complex synthesis carries such a high learning curve, designing a more usable interface that promotes broad accessibility will satisfy the needs of all the stakeholders and solve many of the accessibility problems that arise.
WORKS CITED
Friedman, Batya, and Peter H. Kahn Jr. "Human Values, Ethics, and Design." The Human-Computer Interaction Handbook. 2nd ed. Ed. Andrew Sears and Julie A. Jacko. New York/London: Lawrence Erlbaum Associates, 2008. 1241-1266.
"Section 508." Section 508. 30 Apr 2008. United States Government, Web. 21 Oct 2009.
Stephanidis, C., D. Akoumianakis, M. Sfyrakis, and A. Paramythis. "Universal Accessibility in HCI: Process-Oriented Design Guidelines and Tool Requirements." (1998): 1-15. Web. 19 Oct. 2009.
Wessel, David. "Let's Develop a Common Language for Synth Programming." Center for New Music and Audio Technologies 1 Aug. 1991: n. pag. Web. 20 Oct. 2009.