Personal computers are expanding from the specialist hi-tech market into the mass consumer market. But this new market consists of the more conservative late adopters, who want practicality and ease-of-use, not the technophilic early adopters who were willing to put up with how over-complicated PCs are.
Norman explains how these markets differ, why the bare over-general purpose PC can probably never become the kind of product the average consumer wants, how the answer is to move the appropriate computational power inside smart special-purpose "information appliances", and why this is going to be a painful process for the industry.
The flow of the argument is a little jerky, and rather repetitive in places. But Norman makes his customary trenchant points, and draws some good parallels with other similar technology changes and industrial blind-spots in the past -- such as electric motors, and the phonograph.
In The Psychology of Everyday Things Norman showed us how important functionality is in design, and why so many of our artefacts don't work well. It was revolutionary but, as he admits here, told only part of the story. What was missing was our emotional reaction to our things.
Here he builds a three-layer theory of that reaction: visceral (first impressions, the "wow, I want one!" reaction, conditioned mostly by our evolutionary heritage), behavioural (our response to how well the thing in question works, from our point of view), and reflective (mediated by cognition and culture, but still emotional, such as pride in our ability to master the thing). These three levels all contribute to our emotional reaction, possibly in contradictory ways (such as an acquired taste for viscerally repugnant bitter flavours). There are no easy answers, but lots of fascinating discussions.
In the second half of the book, Norman concentrates on emotions applied to robotics: how robots would need to discern our emotions (to get feedback on how well they are performing, and to anticipate our desires), and why they would need emotions of their own (to function in a complex world of partial information). There are some interesting ideas (I particularly like the "pantry robot"), and some links with Rosalind Picard's and Antonio Damasio's work. I also like the way Norman takes science fictional "scenario building" seriously. An interesting romp through emotions for robots, with some compelling ideas.
Norman continues his study of how people interact with their engineered devices. Here the emphasis is on "future" devices that are smart, but not really smart enough. They can do a lot, so much that we are tempted to believe they are truly intelligent, but they are not. Or they can do a lot, but with a "psychology" so opaque and alien to us that we cannot make best use of them, we cannot work together. What to do?
It comes down to design, of course. Here the design needs to encompass both the task and the human capabilities. Don't automate just because you can, which may lead to complex and expensive maintenance requirements; automate to enhance the overall productivity. Provide the user with a clear, simple conceptual model, to reduce the alienness of the device's behaviour. Concentrate on communication between human and machine (in both directions), including natural feedback (no meaningless and annoying beeping) and "reassurance". And maybe, design so that the humans, who are the more flexible partners, can make modest adaptations of behaviour to enhance the interaction.
You will never look at the PUSH "instruction manual" on a door the same way again.
This is a collection of essays about design, all the way from what to call "5-past-midnight", to how to design aeroplane cockpits so that pilots don't make mistakes. All the faults are pointed out with such clarity that you wonder why everyone doesn't design things in the obviously correct way.
One major theme running through the essays is that our machines should communicate with us, let us know what they are doing, so we can tell if something is going wrong, and not be surprised by sudden changes in behaviour. (The tale of the auto-pilot correcting ever more strongly for a yaw caused by a fuel leak, until it just suddenly "gave up" and the plane fell several miles, is certainly food for thought.)
These essays are over a decade old. Is design any better nowadays? Rarely, unfortunately, although some lessons have been learned.
Norman inveighs against the way our cognitive artifacts are currently designed. Technology is good at a lot of the things we are bad at: it is orderly, logical, methodical. Yet rather than exploiting this to complement our strengths of creativity and flexibility, rather than supporting our way of working, we often design it in a way that highlights our weaknesses, forcing us to conform to its way of working.
He also points out that we have two main modes of thought: experiential and reflective, each requiring different kinds of interaction with technology. Ironically, the technology often seems designed to break the experiential mode, forcing us to be reflective, to ponder choices, when that is the inappropriate mode, yet also to disallow reflection just when we need it.
Norman makes excellent points. It has been a decade since the book was written. Have things got any better? Not really. But it is important for people to keep plugging away at this issue. Remember, every time there is a disaster caused by "human error", it is more likely that the fault lies in our technology, not in ourselves.