Sounding Off


Peter Vogel

Rebecca Riordan’s article in this month’s issue, “Improving Data Entry Feedback with Sound,” made me think of other places where sound is used effectively. Sound is effective in those situations where you can’t see what’s going on. Parents and spouses are the most familiar with these conditions. As a parent, I can’t have eyes everywhere (though my parents seemingly could) and, as a driver, I can only look in one direction at a time.

Parents of small children know, instinctively, to respond to the sound upstairs of a crash followed by a yell (especially if the yell is “I’m alright!”). When my wife says “Honey” in that voice with a distinct edge to it, I know that I’m not seeing the point of the current discussion. As drivers, we know that the sound of a honking horn (especially from behind or beside us) is critical. Changes in sound are also important. If that honking horn is getting louder and closer, for instance, I’m more concerned than if it stays at a constant distance. A repetition of “Honey” with a more distinct edge is also a good warning.

Rebecca’s article also emphasizes how important it is for developers to understand the real world of their users. While it might be natural to assume that any person working at a computer is looking at the screen, that isn’t necessarily the case. Rebecca describes the situation of a typical data entry person doing what’s often called “heads down” data entry. Though these people are working at a computer, they aren’t looking at the screen. Instead, they’re concentrating on whatever document they’re pulling information from. Unfortunately, as developers, we’re so seldom in the data entry scenario that it’s easy for us to forget that not all computer users are looking at the screen. As with the driver in my previous example, the user in Rebecca’s scenario isn’t able to look at the place where a problem may be occurring.
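To make that concrete, here’s a minimal sketch (mine, not Rebecca’s) of what audible feedback for heads-down entry might look like. It assumes Windows, since it uses Python’s Windows-only winsound module, and the validation rule and tone choices are invented for illustration: a short, high tone for an accepted field and a long, low one for a rejected field, different enough to tell apart without looking up.

import winsound

ACCEPT_TONE = (880, 100)   # frequency (Hz), duration (ms): short, high "accepted"
REJECT_TONE = (220, 400)   # long, low "rejected" -- hard to confuse with the other

def quantity_is_valid(text):
    # Hypothetical rule for this sketch: quantity must be a positive integer.
    return text.isdigit() and int(text) > 0

def handle_entry(text):
    if quantity_is_valid(text):
        winsound.Beep(*ACCEPT_TONE)   # operator hears "OK" and keeps typing
    else:
        winsound.Beep(*REJECT_TONE)   # registers as an error without looking up

handle_entry("12")    # short high beep
handle_entry("abc")   # long low beep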

An important tool for any technical writer on software topics is a screen capture utility: something for taking screenshots. You don’t actually need a separate tool for this. Pressing the Print Screen key on any Windows computer copies the current screen onto the clipboard in bitmap format, and from the clipboard you can paste the bitmap into any picture editing utility (Paint, for instance). However, one of the key problems with this method is feedback. After all, after you press the Print Screen key, how do you know that it worked? Since a screen capture utility is designed to help you get a picture of the screen as you see it, the one thing the utility can’t do is appear on the screen. The tool that I use is CaptureEze, though there are a number of other equally good tools. When I take a screenshot with CaptureEze, it emits a “click-click-swoosh” kind of sound to indicate that it has taken the picture, letting me know that all is well.
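The same pattern is easy to reproduce. Here’s a minimal sketch, again mine rather than anything CaptureEze ships: grab the screen, save it, and then play a sound as the audible “it worked.” It assumes Windows (for the winsound module) plus the third-party Pillow package for the screen grab, and the output filename is arbitrary.

import winsound
from PIL import ImageGrab   # third-party: pip install pillow

def capture_with_feedback(path="capture.bmp"):
    ImageGrab.grab().save(path)           # copy the current screen to a file
    winsound.MessageBeep(winsound.MB_OK)  # audible confirmation of the capture

capture_with_feedback()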

The sound, for me at any rate, evokes the sound made by an old Brownie camera when I clicked its mechanical shutter. As my kids grow up with more sophisticated picture-taking devices, I wonder if they’ll make the same association. However, that association isn’t necessary; the sound that CaptureEze makes could be arbitrary. On occasion, user interface designers get more wrapped up in trying to make their user interfaces look like something else (what is called a “visual metaphor”) than in meeting the needs of their users. When my wife says “Honey” with that special edge, it’s the tone of her voice that matters, not the words that she uses (sometimes she says “Sweetie”; either way, I’m being dense). Rebecca also discusses this essential characteristic of sound feedback: distinctiveness.

In many ways, user interface design is all about feedback. The key factor is letting your user know what’s going on in a way that’s appropriate and distinctive to what the user is doing. Those of you using Palms, Visors, or other Personal Digital Assistants that let you write on the screen can see the impact of this right away. Many people who start working with PDAs get 70-80 percent accuracy in handwriting recognition almost immediately. Unfortunately, they find that they never seem to get much better.

The problem, I think, is that PDA users look in the wrong place for feedback. Most of these devices require you to write in one place (the input area) while your text appears in another place (the screen). This is different from working with pen and paper, where the end of your pen is where your words appear.

With pen and paper, when you write, you stare at the end of your pen. I’d always assumed that I was looking to see what I’d written, so, when working with a PDA, I always looked at the screen to see how my letters appeared. About a year ago, I discovered that if I watched the end of my stylus while writing on my PDA, my accuracy improved tremendously. It turns out that we stare at the end of a pen to get the feedback we need to write well, not to see what we’ve written.

 
