Rapid Pulse

kevincassell.com/blog

Technologies of the Blind

Even sighted people can benefit from assistive and adaptive technologies

December 29th, 2009, 8:14 pm

In "What Are We?", a chapter from Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Andy Clark touches on the potentials of assistive/adaptive technologies for non-sighted ("blind") people. His discussion of neuroelectronic interfaces brings us into the world of wearable technology as it’s being designed to help this people navigate a world they cannot see.

One of his examples is the Tactile Visual Sensory Substitution (TVSS) device, an “artificial vision system” that allows wearers “to experience coarse quasi-visual sensations” (125-6) which are rendered tactilely. I’d like to build on Clark’s one-paragraph overview of this technology by looking at a very similar device currently in development. BrainPort is an experimental technology based on the TVSS device that uses tiny cameras for eyes. These cameras are embedded in a device worn by nonsighted people on their foreheads and connected by wires to a stimulatory grid held in their mouths. The cameras take pictures of the space in front of an individual and transform those images into electrical impulses that are felt on the surface of the tongue. The nerves of the tongue act as (to use one of Clark’s terms) a biotechnological interface between the tiny electrodes on the device and the brain. Once stimulated by the device, the nerves send signals not to the visual cortex—which is what allows for sight in sighted people—but to the part of the brain that deals with touch. The nonsighted person doesn’t “see” these images, of course; s/he learns to interpret the sensory impressions made on the surface of the tongue as “images.”
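To make the camera-to-tongue mapping a bit more concrete, here is a minimal sketch in Python of how a grayscale camera frame might be reduced to a coarse grid of stimulation intensities, one value per electrode. The grid size and the number of intensity levels below are my own illustrative guesses, not the actual specifications of the BrainPort device.

```python
import numpy as np

def frame_to_tongue_grid(frame, grid_size=(20, 20), levels=8):
    """Downsample a grayscale frame to a coarse grid of stimulation
    intensities, one value per tongue electrode.

    Note: grid_size and levels are illustrative assumptions, not the
    BrainPort device's real electrode count or intensity range.
    """
    h, w = frame.shape
    gh, gw = grid_size
    grid = np.zeros(grid_size)
    for i in range(gh):
        for j in range(gw):
            block = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            grid[i, j] = block.mean()
    # Quantize average brightness into a small number of stimulation levels.
    return np.round(grid / 255 * (levels - 1)).astype(int)

# Example: a synthetic 240x320 frame with a bright square in the center.
frame = np.zeros((240, 320))
frame[80:160, 120:200] = 255
print(frame_to_tongue_grid(frame))
```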

BrainPort researcher Aimee Arnoldussen likens this process to learning a new language, one which requires the nonsighted person to “translate” the stimulation into a kind of visual perception of the objects registered. “At first you might need to take a long time thinking about what the translation is,” Arnoldussen told reporters for CBS News/Time. “I might feel stimulation in the right front part of my tongue, (but) what does that mean?” The answer: “[V]ery rapidly, like learning a language, you might learn a few quick vocabulary (words), and eventually you become so fluent that you don't need to think about it anymore.” A nonsighted participant in a 2006 BrainPort study, Roger Behm, compared the experience to having someone draw images on your skin: “You know when you're a kid and . . . one kid would draw on your back and you'd try to guess what it is? That's what it's like.” The device allowed Behm to walk unassisted in complex environments and identify images—such as a logo on a football jersey—simply by “reading” the sensations rendered on his tongue.

What the TVSS and BrainPort technologies teach us is that, as Clark contends, “our brains are amazingly adept at learning to exploit new types of channels of input” (126). Contradicting much of the brain research of the early 1990s, which offered a deterministic view of the brain as “hard-wired” and basically nonmalleable, these technologies show that our brains are indeed far more “plastic” than concrete. We also see how this particular technology is “reinventing” the notion of interface in a biotechnological matrix that breaks through “the old biological borders of skin and skull” (24).

A similar sight-enabling technology called the Eyeborg system, developed by cybernetics scientist Adam Montandon, has allowed a color-blind artist, Neil Harbisson, to paint using a full palette of colors. As with the TVSS and BrainPort, this device is mounted on the head, where a small digital camera records colors directly in front of the painter and sends images of those colors to a laptop computer. Unlike those devices, however, the Eyeborg doesn’t translate visual imagery through the sense of touch. Instead, the laptop is equipped with a program that slows down the light wave frequency of each color to the frequency of sound waves. The computer’s audio system then translates these colors into sounds, which the artist must learn as aural representations of color. After hearing these “colors” through an earpiece fitted in his ear, he knows which paints are which. As of January 2008, Montandon was working on a device that would be as small as an MP3 player—and one step closer toward the kind of “wearable computing” Clark discusses throughout his book.
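The idea of “slowing” light down into sound can be sketched in a few lines: take a color's optical frequency and divide it by a fixed factor until it falls in the audible range. The scaling factor below is chosen only so the numbers land in a hearable band; it is an assumption for illustration, not Montandon's actual mapping.

```python
# Rough sketch of the "slow the light down to sound" idea described above.
# The division factor is an assumption chosen so results land in the
# audible range; it is not the Eyeborg's real mapping.

SPEED_OF_LIGHT = 3.0e8  # metres per second

def color_to_tone(wavelength_nm, scale=1.0e12):
    """Map a light wavelength (nanometres) to an audible frequency (Hz)
    by dividing its optical frequency down by a fixed factor."""
    optical_hz = SPEED_OF_LIGHT / (wavelength_nm * 1e-9)
    return optical_hz / scale

for name, nm in [("red", 700), ("green", 530), ("blue", 470)]:
    print(f"{name:5s} ~{nm} nm -> {color_to_tone(nm):.0f} Hz")
```

With this (made-up) scale, red comes out around 430 Hz and blue around 640 Hz, so different hues map to distinguishably different tones, which is the essential point of the device.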

Technologies made to accommodate visually disabled and nonsighted people can be employed for other purposes. Georgia Tech's Accessible Aquarium Project has set the movements of fish to music. A recognition camera is set up next to an aquarium and tracks each fish by shape and color. This visual imagery is then sent to a computer that links the fishes’ movements to different instruments, which change in both tempo and pitch as the fish swim about. When fish move toward the surface, the pitch associated with them goes up—just as in musical notation. Slower-moving fish are represented by slower tempos; when they speed up, so does the tempo. The purpose, which sounds odd to sighted people, is to allow nonsighted individuals at public aquariums “to experience the animals through their ears.” This requires, of course, that the nonsighted person learn to “see” the shapes, colors, and movements of the fish by learning how these representations are translated musically.
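A toy version of that mapping might look like the sketch below: a tracked fish's vertical position sets pitch and its speed sets tempo. The ranges and formulas are my own assumptions for illustration, not Georgia Tech's actual system.

```python
# Illustrative sonification sketch: height in the tank -> pitch,
# swimming speed -> tempo. Values and formulas are assumptions.

def sonify_fish(y, speed, tank_height=1.0, max_speed=0.5):
    """Return (pitch_hz, beats_per_minute) for one tracked fish.

    y: vertical position in metres, 0 = bottom, tank_height = surface
    speed: current speed in metres per second
    """
    # Higher in the tank -> higher pitch (one octave of range here).
    pitch_hz = 220 * 2 ** (y / tank_height)
    # Faster swimming -> faster tempo, capped at max_speed.
    bpm = 60 + 120 * min(speed, max_speed) / max_speed
    return pitch_hz, bpm

print(sonify_fish(y=0.2, speed=0.05))  # slow fish near the bottom
print(sonify_fish(y=0.9, speed=0.4))   # quick fish near the surface
```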

The concept of “‘seeing’ with ears” is captured quite effectively in a technology known as vOICe, developed by Netherlands research scientist Peter Meijer. vOICe is similar to the technology that allowed the color-blind artist to paint—but it goes much further by streaming video images of one’s surroundings into a laptop, carried by a nonsighted person in a knapsack, and converting the entire physical environment into a “soundscape.” The scene to your left is converted into auditory information that is sounded through an earpiece in the left ear, and the same goes for the right. Brightness is translated as volume, height as pitch, and the image refreshes every second.
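A crude sketch of that kind of image-to-soundscape conversion appears below: each column of a grayscale frame becomes a moment in a roughly one-second sweep, vertical position maps to pitch, and brightness maps to loudness. The frequency range, resolution, and sweep details here are assumptions, not vOICe's actual settings.

```python
import numpy as np

def frame_to_soundscape(frame, duration=1.0, rate=22050,
                        f_low=500.0, f_high=5000.0):
    """Convert a grayscale frame to a left-to-right audio sweep.

    Each column becomes a slice of time; within a column, row position
    sets frequency (top = high pitch) and brightness sets amplitude.
    Parameter values are illustrative assumptions.
    """
    rows, cols = frame.shape
    samples_per_col = int(rate * duration / cols)
    freqs = np.linspace(f_high, f_low, rows)  # top of the image = high pitch
    out = []
    for c in range(cols):
        t = np.arange(samples_per_col) / rate
        column = np.zeros(samples_per_col)
        for r in range(rows):
            amp = frame[r, c] / 255.0          # brightness -> loudness
            if amp > 0:
                column += amp * np.sin(2 * np.pi * freqs[r] * t)
        out.append(column / max(rows, 1))
    return np.concatenate(out)

# Example: a tiny 8x8 frame with a bright diagonal line.
frame = np.zeros((8, 8))
np.fill_diagonal(frame, 255)
audio = frame_to_soundscape(frame)
print(audio.shape)  # roughly one second of samples at 22.05 kHz
```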

Perhaps one day soon some hip designers will transform these technologies into games that all people can play. I believe that, like StarLogo and SimCity, these adaptive technologies can provide a kind of aural “designer environment” which “can help biological brains learn to get to grips with decentralized emergent order,” developing skills for understanding “the kinds of complex systems of which we ourselves are one striking instance” (Clark, 160). The placement of stereo sound systems in aquariums can do more than just “assist” nonsighted people in experiencing the appearance and movement of marine life. Sighted people will benefit as well; in fact, the multidimensionality of seeing and hearing the movements and colors of fish would help to integrate them more fully into the kind of “hybrid, extended architecture” (33) of which all natural-born cyborgs are part.

As promising as these technologies are, they have limitations. Costs and accessibility are a couple of them, but perhaps most problematic is that they seem to be constructed around a transmission view of communication. Clark’s criticism of the “notion that our perceptual experience is determined by the passive reception of information,” which he calls “seductive, but deeply misleading,” could be leveled at these technologies. “Our brains are not at all like radio or television receivers,” he argues, “which simply take incoming signals and turn them into some kind of visual or auditory display” (95). Unfortunately, the assistive technologies discussed above do exactly that, and not much more. However, there is some work in adaptive technology research that is moving in a direction which conceives of perception as “bound up with the business of acting upon, and intervening in, our worlds” (95).

T.V. Raman, a nonsighted computer scientist and engineer at Google whom the American Foundation for the Blind calls “a leading thinker on accessibility issues,” is developing a touch-screen phone that he one day hopes will help nonsighted people navigate the world. One component of the technology is described this way:

Since he cannot precisely hit a button on a touch screen, Mr. Raman created a dialer that works based on relative positions. It interprets any place where he first touches the screen as a 5, the center of a regular telephone dial pad. To dial any other number, he simply slides his finger in its direction — up and to the left for 1, down and to the right for 9, and so on. If he makes a mistake, he can erase a digit simply by shaking the phone, which can detect motion.
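The relative-position idea quoted above is simple enough to sketch: wherever the finger first lands counts as the 5 key, and the direction of the slide selects the digit. The pixel thresholds below are assumptions, and the sketch leaves out 0 and the shake-to-erase gesture; it is only meant to illustrate the mechanism, not Raman's implementation.

```python
import math

def digit_from_swipe(dx, dy, dead_zone=20):
    """Return the dialled digit given the slide vector in pixels.

    dx: positive = right, dy: positive = down (screen coordinates).
    A slide shorter than dead_zone pixels counts as tapping 5.
    The dead_zone value is an illustrative assumption.
    """
    if math.hypot(dx, dy) < dead_zone:
        return 5
    col = 0 if abs(dx) < dead_zone else (1 if dx > 0 else -1)
    row = 0 if abs(dy) < dead_zone else (1 if dy > 0 else -1)
    # Standard telephone keypad laid out around 5.
    keypad = {(-1, -1): 1, (0, -1): 2, (1, -1): 3,
              (-1, 0): 4,  (0, 0): 5,  (1, 0): 6,
              (-1, 1): 7,  (0, 1): 8,  (1, 1): 9}
    return keypad[(col, row)]

print(digit_from_swipe(-60, -60))  # up and to the left -> 1
print(digit_from_swipe(60, 60))    # down and to the right -> 9
print(digit_from_swipe(5, -3))     # barely moved -> 5
```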

This technology is still in its infancy, but Raman hopes that it will soon develop to the point where it, along with screen-reading and reliable voice-recognition software, is transferred to the mobile world, “a real-life changer” for nonsighted people. Not only will mobile telephones be able to “read” (through a digital camera lens) the physical environment, they will be able to “speak” what they read, translating signs, for example, just as a screen-reader does on a PC. They will also allow the nonsighted user, or anyone who is not looking at the screen, to enter text, numbers, and commands, opening spaces that allow the individual not only to receive information about the world around him or her, but to interact and participate in that world.

This research serves as an example of the positive effects cybernetic technology can have not only on visually disabled people but also on the larger culture, effects Andy Clark goes to great lengths to articulate in his book.
