The Demise of ‘Form Follows Function’

The New York Times

June 1, 2009

If there were a (booby) prize for the most misused design cliché, a firm favorite would be “form follows function,” with “less is more” coming a close second.

Not only is “form follows ...” often quoted incorrectly, it is not even accurate: the original wording was “form ever follows function.” It is also routinely misattributed, mostly to 20th-century modernist grandees, like Le Corbusier and Mies van der Rohe, but was actually coined by the less famous American architect, Louis Sullivan.

Misused though Sullivan’s quote has been, his point, that the style of architecture should reflect its purpose, made sense at the time, and continued to do so for much of the last century, not just for buildings, but objects too. That was then. Thanks to digital technology, designers can squeeze so many functions into such tiny containers that there is more computing power in a basic cellphone (not a fancy model, like a BlackBerry or iPhone, just a cheap one) than at NASA’s headquarters when it began in 1958. That is why the appearance of most digital products bears no relation to what they do.

Take the iPod Shuffle. How could you be expected to guess what that tiny metal box does by looking at it? There are no clues to suggest that it might play music. Like most other digital devices, the Shuffle is (literally) an inscrutable box of tricks. Apple’s designers conceived the latest model as a subtle joke on the demise of “form follows ...” It is so small, half the size of its predecessor, that they could make it in the same shape as one of those pins that clip on to clothing. This means the Shuffle’s form does reflect one of its functions, albeit the very minor one of attaching itself to a jacket, but gives no hint as to its more important role of storing and playing hundreds of songs.

Joking aside, the dislocation of form and function has set a new challenge for designers: how to help us to operate ever more complex digital products. In ye olden days when form did follow function, you could guess roughly how to use an object from its appearance. But our ability to work out how to download and play music on a Shuffle is largely determined by the design quality of the software that operates it — the “user interface” in geek-speak, or “U.I.” If the “U.I.” is well designed, you should be able to use the device so intuitively that you will not have to think about it. But if it is badly designed, the process will seem so confusing that you will probably blame yourself for doing something wrong.

That is why the first wave of U.I. designs sought to reassure us by using visual references to familiar objects to help us to operate digital ones. Take the typewriter keyboards on computers, and video game controllers modeled on TV remote control pads. As our confidence has grown, U.I. design has become more sophisticated, increasingly relating to our physical behavior, rather than objects.

One landmark is Nintendo’s Wii games system, which is operated by replicating the movements we would make if playing for real: from firing a “gun,” to whacking a “tennis ball” with a “racquet.” Another is Apple’s iPhone, which replaced the traditional keyboard with a touch-sensitive screen that achieves a similar effect to the Wii by responding to the natural movements of our hands. The same goes for the thousands of applications, or “apps,” invented for the iPhone, mostly by amateur programmers. Over a billion apps have been downloaded in the last nine months, and one reason for their popularity is that they feel so instinctive. An example is “Brushes,” the $4.99 app with which the artist Jorge Colombo “drew” the cover of the June 1 edition of The New Yorker on his iPhone by creating digital layers of “paint” with his fingers, just as if he were making brushstrokes on a canvas.

The next phase of U.I. design will take this further. John Maeda, the software designer and president of the Rhode Island School of Design, believes that our current “awkward mechanical dance” with computers will be replaced by an intuitive approach. “It will need to be more improvisational,” he said. “There will be a need for more subtlety and grayness.”

One possibility is what techies call human interaction systems. An example is g-speak, which is now being developed by Los Angeles-based Oblong Industries as a means of operating computers through physical movements and gestures, rather than keyboards and mice. Think of how Tom Cruise “controlled” computers remotely in the 2002 movie “Minority Report.” The students at the Rhode Island School of Design did that this spring in experiments with g-speak.

Another option is to replace physical means of controlling technology with voice recognition systems, which are already used in some devices, or even with thought alone.

San Francisco-based Emotiv Systems worked with the IDEO design group to develop the Epoc, a headset that enables you to play video games by monitoring electrical activity in your brain. In effect, it reads your mind: 16 sensors pick up that activity and relay your instructions to the console. “People are always ready for new or better or more sophisticated experiences — digital and physical,” said Kara Johnson, a materials scientist at IDEO. “The role of the designer is to make them simple and meaningful.”

Photo: Tan Le, co-founder and president of Emotiv, with the Epoc headset.


Others are skeptical that voice recognition and brain sensor technology will ever be reliable enough to replace physical controls. “They work ‘sometimes,’ but ‘sometimes’ isn’t enough for most people,” explained Mr. Maeda. “But I’m the guy who thought in the early 1990s that making home pages on something called the World Wide Web was a silly idea, which would never catch on.”

