20 February 2010

Sound and Geographic Visualization

Originally published in Alan MacEachren and D.R.F. Taylor (eds.). 1994. Visualization in Modern Cartography. New York: Pergamon, pp. 149-166.

"Who the hell wants to hear actors talk?"
- Harry Warner on being confronted with the prospect of the sound movie.


The issue of sound in the context of visualization may at first seem incongruous. There is, however, evidence to support the claim that sound is a viable means of representing and communicating information and can serve as a valuable addition to visual displays. Abstracted two-dimensional space and the visual variables - the traditional purview of cartography - may not always be adequate for meeting the visualization needs of geographers and other researchers interested in complex dynamic and multivariate phenomena. The current generation of computer hardware and software gives cartographers access to a broadened range of design options: three-dimensionality, time (animation), interactivity, and sound. Sound - used alone or in tandem with two- or three-dimensional abstract space, the visual variables, time, and interactivity - provides a means of expanding the representational repertoire of cartography and visualization.

This chapter discusses the use of realistic and abstract sound for geographic visualization applications. Examples of how and why sound may be useful are developed and discussed. Uses of sound in geographic visualization include sound as vocal narration, as a mimetic symbol, as a redundant variable, as a means of detecting anomalies, as a means of reducing visual distraction, as a cue to reordered data, as an alternative to visual patterns, as an alarm or monitor, as a means of adding non-visual data dimensions to interactive visual displays, and for representing locations in a sound space. The chapter concludes with research issues concerning sound and its use in geographic visualization.

Experiencing and Using Sound to Represent Data

Our sense of vision often seems much more dominant than our sense of hearing. Yet one only has to think about the everyday environment of sound surrounding us to realize that the sonic aspects of space have been undervalued in comparison to the visual (Ackerman 1990, Tuan 1993). Consider the experience of the visually impaired to appreciate the importance of sound and how it aids in understanding our environment. Also consider that human communication is primarily carried out via speech and that we commonly use audio cues in our day-to-day lives - from the honk of a car horn to the beep of a computer to the snarl of an angry dog as we approach it in the dark (Baecker and Buxton 1987).

There are several perspectives which can contribute to understanding the use of sound for representing data. Acoustic and psychological perspectives provide insights into the physiological and perceptual possibilities of hearing (Truax 1984, Handel 1989). An environmental or geographical perspective on sound can be used to examine our day-to-day experience with sound and to explore how such experiential sound can be applied to geographic visualization (Ohlson 1976, Schafer 1977, Schafer 1985, Porteous and Mastin 1985, Gaver 1988, Pocock 1989). Understanding how sound and music are used in non-western cultures may inform our understanding of communication with sound (Herzog 1945, Cowan 1948). Knowledge about music composition and perception provides a valuable perspective on the design and implementation of complicated, multivariate sound displays (Deutsch 1982). Many of these different perspectives have coalesced in the cross-disciplinary study of sound as a means of data representation, referred to as sonification, acoustic visualization, auditory display, and auditory data representation (Frysinger 1990). Within this context both realistic and abstract uses of sound are considered.

Using Realistic Sounds

Vocal narration is an obvious and important use of realistic sound. (note 2) Details about the physiological, perceptual, and cognitive aspects of speech are well known (Truax 1984, Handel 1989) and film studies offer insights into the nature and application of vocal narration (Stam, Burgoyne, and Flitterman-Lewis 1992).

Another use of realistic sounds is as mimetic sound icons, or "earcons" (Gaver 1986, Gaver 1988, Gaver 1989, Blattner et al. 1989, Mountfort and Gaver 1990). Earcons are sounds that resemble everyday, experiential sounds. Gaver, for example, has developed an interface addition for the Macintosh computer which uses earcons: a document dragged successfully into the interface's trash can produces a "thunk" sound.

Using Abstract Sounds

Abstract sounds can be used as cues to alert or direct the attention of users or can be mapped to actual data. Early experiments by Pollack and Ficks (1954) were successful in revealing the ability of sound to represent multivariate data. Yeung (1980) investigated sound as a means of representing the multivariate data common in chemistry after finding few graphic methods suitable for displaying his data. He designed an experiment in which seven chemical variables were matched with seven variables of sound: two with pitch, one each with loudness, damping, direction, duration, and rest (silence between sounds). His test subjects (professional chemists) were able to understand the different patterns of the sound representations and correctly classify the chemicals with a 90% accuracy rate before training and a 98% accuracy rate after training. Yeung's study is important in that it reveals how motivated expert users can easily adapt to complex sonic displays.
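The kind of variable-to-sound mapping Yeung used can be sketched in code. The following is a minimal, hypothetical illustration - the specific parameter ranges, field names, and the choice of only four sound variables are assumptions for the sketch, not Yeung's actual design - in which each data record, with values normalized to 0..1, drives the pitch, loudness, duration, and trailing rest of one tone, and the resulting sequence is written to a WAV file using only the Python standard library.

```python
import math
import struct
import wave

SAMPLE_RATE = 8000  # samples per second; low fidelity is fine for a sketch

def tone(freq_hz, duration_s, amplitude):
    """Synthesize one sine tone as a list of signed 16-bit samples."""
    n = int(SAMPLE_RATE * duration_s)
    return [int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(n)]

def sonify_record(record):
    """Map one data record (normalized 0..1 values) to a tone plus a rest.
    Hypothetical mapping in the spirit of Yeung (1980): one variable each to
    pitch, loudness, duration, and rest (silence between sounds)."""
    freq = 220.0 + 660.0 * record["pitch_var"]   # 220-880 Hz
    amp = 0.2 + 0.8 * record["loud_var"]         # relative loudness
    dur = 0.1 + 0.4 * record["dur_var"]          # 0.1-0.5 s tone
    rest = 0.05 + 0.2 * record["rest_var"]       # trailing silence
    samples = tone(freq, dur, amp)
    samples += [0] * int(SAMPLE_RATE * rest)
    return samples

def write_wav(path, samples):
    """Write mono 16-bit samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Two illustrative records: listeners would classify them by ear.
records = [
    {"pitch_var": 0.2, "loud_var": 0.9, "dur_var": 0.5, "rest_var": 0.1},
    {"pitch_var": 0.8, "loud_var": 0.4, "dur_var": 0.3, "rest_var": 0.6},
]
all_samples = [s for r in records for s in sonify_record(r)]
write_wav("sonified.wav", all_samples)
```

Each record thus becomes an audibly distinct event; classifying records, as Yeung's chemists did, amounts to learning to recognize these sound patterns.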

Bly ran three discriminant analysis experiments using sound and graphics to represent multivariate, time-varying, and logarithmic data (Bly 1982a). In the first experiment she presented subjects with two sets of multivariate data represented with different variables of sound (pitch, volume, duration, attack, waveshape, and two harmonics) and asked subjects to classify a third, unknown set of data as being similar to either the first or second original data set. The test subjects were able to successfully classify the sound sets. In a second part of the experiment she tested three groups in a similar manner but compared the relative accuracy of classification among sound presentation only (64.5%), graphic presentation only (62%), and a combination of sound and graphic presentation (69%). She concluded that sound is a viable means of representing multivariate, time-varying, and logarithmic data - especially in tandem with graphic displays.

Mezrich, Frysinger, and Slivjanovski confronted the problem of representing multi-variable, time-series data by looking to sound and dynamic graphics (Mezrich et al. 1984). They had little success finding the graphic means to deal with eight-variable time series data. An experiment was performed where subjects were presented with separated static graphs, static graphs stacked atop each other (small multiples), overlaid static graphs, and redundant dynamic visual and sound (pitch) graphs. The combination of dynamic visual and sound representation was found to be the most successful of the four methods.

An ongoing project at the University of Massachusetts at Lowell seeks to expand the use of sound for representing multivariate and multidimensional data. The "Exvis" project uses a one-, two-, and three-dimensional sound space to represent data (Smith and Williams 1989, Smith et al. 1990, Williams et al. 1990, Smith et al. 1991). The project is based upon the idea of an icon: "an auditory and graphical unit that represents one record of a database" (Williams et al. 1990, 44). The visual attributes of the icon are "stick-figures" which can vary in "length, width, angle, and color" (Williams et al. 1990, 45). The sonic attributes of the icons are "pitch, attack rate, decay rate, volume, and depth of frequency modulation" (Williams et al. 1990, 45). An experimental Exvis workstation has been set up to run various human factors experiments, and initial tests of subjects have been completed. The results reveal that using visual and sonic textures together improves performance.
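The Exvis notion of an icon - one auditory-plus-graphical unit per database record - can be made concrete as a small data structure. The attribute names below follow those quoted from Williams et al. (1990); the numeric ranges and the mapping function are our own hypothetical illustration, not the project's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class Icon:
    """An Exvis-style icon: one auditory and graphical unit per data record."""
    # visual attributes of the stick-figure
    length: float   # pixels
    width: float    # pixels
    angle: float    # degrees
    color: str
    # sonic attributes
    pitch: float        # Hz
    attack_rate: float  # seconds to full volume
    decay_rate: float   # seconds back to silence
    volume: float       # 0.0 .. 1.0
    fm_depth: float     # depth of frequency modulation, 0.0 .. 1.0

def icon_from_record(rec):
    """Map a record of eight normalized values (0..1) onto icon attributes.
    Ranges are illustrative assumptions."""
    return Icon(
        length=10 + 40 * rec[0],
        width=1 + 4 * rec[1],
        angle=180 * rec[2],
        color="gray",                 # could equally be data-driven
        pitch=220 + 660 * rec[3],     # 220-880 Hz
        attack_rate=0.01 + 0.09 * rec[4],
        decay_rate=0.05 + 0.25 * rec[5],
        volume=0.2 + 0.8 * rec[6],
        fm_depth=rec[7],
    )

icon = icon_from_record([0.5] * 8)
```

A display built from such icons renders each one both as a visual texture element and as a sound, which is what allows the visual and sonic textures to reinforce one another.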

Two-dimensional sound displays, which locate sounds up/down and left/right via stereo technology, and three-dimensional sound displays, which add a front/back dimension, are also being developed. A three-dimensional virtual sound environment has been developed at the NASA-Ames Research Center (Wenzel et al. 1988a, Wenzel et al. 1988b, Wenzel et al. 1990). The ability to locate sound in a multidimensional "sound space" will undoubtedly be important for representing spatial relationships...
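The simplest form of such placement, positioning a sound along the left/right axis of a stereo display, can be sketched with constant-power panning. This is a generic audio technique, not the NASA-Ames method (which relies on head-related transfer functions); the function name and ranges here are our own.

```python
import math

def pan_stereo(sample, position):
    """Place a mono sample in the stereo field with constant-power panning.
    position runs from -1.0 (full left) to +1.0 (full right); the cosine/sine
    pair keeps total power constant as the sound moves across the field."""
    theta = (position + 1.0) * math.pi / 4.0   # 0 .. pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

# A sound panned to center splits equally between channels.
left, right = pan_stereo(1.0, 0.0)
```

Extending this idea to up/down and front/back axes is what turns a stereo display into the two- and three-dimensional sound spaces described above.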

