The idea that we should make computers and other devices easy to use is so ingrained that it’s difficult to imagine there’s anything wrong with it. And at the level of the individual device, perhaps there isn’t. But when you scale up beyond the individual task or screen, and think about the design issues involved in creating a pervasive environment, a system that users will interact with all the time (in other words, the kind of environment we’re headed towards, even if we build it piecemeal), things become problematic.
It becomes problematic when “ease of use” is translated into “ease of life”: when we go from assuming that a device should not throw up unintentional obstacles to its effective use, to assuming that people want their lives to be equally frictionless, vibrantly colored, and genie-like. They don’t, and a computing model that takes a long view of human capabilities and skills, or that leaves room for users to take such a view for themselves, would make space for challenges, for several reasons.
First, there’s good evidence that people actually like challenges. Think of all the people who devote dozens of hours to mastering video games. Or consider the research on flow, which shows that people value difficult (even physically stressful or painful) activities more than sybaritic ones, and report greater personal satisfaction during work than during leisure.
There’s also a moral argument to be made against it. MIT/Northeastern professor Stephen Intille has made an elegant case against the vision of the automated smart home, one that weaves together practical and ethical elements. The practical problem: we may think it’s easy to create systems that anticipate user needs, but it’s actually quite hard. A ventilation system that automatically opens and closes windows in your house, for example, might not know that it’s more important to clear away the smoke from last night’s party than to keep the house cool, or that the tax documents spread out on the table should not, under any circumstances, be blown around by opening the adjacent window. In other words, systems are great at handling everything but exceptions, and life is mainly exceptions.
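To make that concrete, here’s a minimal, deliberately naive sketch of the kind of rule-based policy such a ventilation system might follow. Everything in it (the sensor readings, the thresholds, the HouseState and window_action names) is invented for illustration, not drawn from Intille’s work; the point is simply that each rule covers a case the designer anticipated, while the exceptions stay invisible to the sensors.

# A deliberately naive rule-based window policy. All sensor names
# and thresholds are hypothetical, invented for this illustration.

from dataclasses import dataclass

@dataclass
class HouseState:
    indoor_temp_f: float   # indoor temperature, degrees Fahrenheit
    outdoor_temp_f: float  # outdoor temperature, degrees Fahrenheit
    smoke_level: float     # 0.0 (clear air) to 1.0 (very smoky)

def window_action(state: HouseState) -> str:
    """Decide whether to open or close the windows.

    Each rule covers a case the designer anticipated; anything the
    sensors cannot see (tax documents spread on the table, a guest
    asleep by the window) is simply outside the policy.
    """
    if state.smoke_level > 0.3:
        return "open"   # clear the air, even if it warms the house
    if state.indoor_temp_f > 75 and state.outdoor_temp_f < state.indoor_temp_f:
        return "open"   # passive cooling when it is cooler outside
    return "close"      # default: keep the conditioned air inside

# The anticipated case works fine:
print(window_action(HouseState(indoor_temp_f=78, outdoor_temp_f=65, smoke_level=0.0)))  # open

# The exception does not: the smoke rule fires, the window opens,
# and the papers on the table (which no sensor registers) go flying.
print(window_action(HouseState(indoor_temp_f=70, outdoor_temp_f=60, smoke_level=0.5)))  # open

The rules themselves aren’t wrong; the trouble is that no finite rule set can enumerate the exceptions in advance.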
Second, there’s the ethical problem: doing things for people who are capable of doing them for themselves is bad for them, and particularly bad for seniors. “Losing a sense of control has been shown to be psychologically and physically debilitating” among the elderly, Intille notes in a 2006 article; you can bet that having that happen in one’s own home would be especially disorienting. Technologies, he continues, shouldn’t “strip people of their sense of control over their environment.” A lot of smart home research has focused on really extreme use cases, like supporting people with dementia or serious physical disabilities; but the levels of automation that might make it possible for someone with greatly reduced capacities to continue living independently can, ironically, reduce the capacities of a healthier person. Consequently, Intille argues, “Technology should require human effort in ways that keep life as mentally and physically challenging as possible as people age.” (81)
Likewise, making things too easy inhibits learning over the long run. This isn’t just a rehearsal of the old instant-usability-versus-power-user argument; my father-in-law, a career programmer at IBM, can make the case that command lines and modes offer great power to those who take the time to learn them, and I can have that conversation at any family event. Rather, there’s some interesting work suggesting that systems that offer users lots of help in solving problems actually weaken their problem-solving abilities over time. Christof van Nimwegen and his colleagues, for example, designed computer puzzles for users to solve; some of the puzzles were presented in a bare-bones interface (which thus required more initial learning and internalization of information), while others were presented in an interface with more support (allowing more externalization of information). They found that
Internalization resulted in longer thinking times before starting to work on the problem and to more time between moves. It indicates that when information has to be internalized, more contemplation is provoked and users ponder longer before acting.
No surprise here: internalization means a longer learning curve. What they found next, though, was really interesting:
Internalization subjects solved the problems with fewer superfluous moves, thus with greater economy….
we found positive effects of internalization on problem-solving behavior: it led to more plan-based behavior, smarter solution paths and better declarative knowledge. Externalization led to a more display-based approach resulting in less economic solutions and shallower thinking. It is worthwhile to reflect on what was externalized and visualized. The interface showed legal actions, the outcome of the application of the rules, a common feature in a broad range of software applications. We showed that this had undesirable effects. One has to be careful with providing interface cues that give away too much and must design in such a manner that the way users think and act is optimally supported. Designers could consider making interactions “less assisted” to persuade users into specific behavior. This issue is beyond plain usability issues and focuses on more meta cognitive aspects of interface-induced behavior such as planfulness and user engagement…. usability (including user satisfaction) should not be optimized at the cost of learning performance.
What’s the bigger moral here? If your purpose is to help people become smarter, it’s not necessarily in their best interests to design something that seems highly “efficient” in an immediate, micro sense, because it can degrade intelligence and performance in the long run. There are times when users just want to find out whether the 3:10 to Yuma is running on time; but the default assumption that efficiency and ease are always the right thing for users doesn’t hold.
Again, this isn’t the sort of thing we had to worry much about when computers were encountered only in the workplace, or were rare and expensive enough to be deployed only in contexts where the primacy of efficiency could be taken for granted. But when you’re designing things that people use constantly, throughout the day and for years, it’s necessary to think differently.