The Memory Network has published a new essay by Nick Carr on computer versus human memory. This is a subject I’ve followed with great interest, and when I was at Microsoft Research Cambridge I had the good fortune to be down the hall from Abigail Sellen, whose thinking about the differences between human and computer memory is far subtler than my own.
Carr himself makes points about how human memory is imaginative, creative in both good and bad ways, changes with experience, and has a social and ethical dimension. This isn’t new: Viktor Mayer-Schönberger’s book Delete: The Virtue of Forgetting in the Digital Age is all about this (though how successful it is remains a matter of argument), and Liam Bannon likewise argues that we should regard forgetting as a feature, not a bug.
The one serious problem I have with the piece comes after a discussion of Betsy Sparrow’s work on Internet use and transactive memory:
We humans have, of course, always had external, or “transactive,” information stores to supplement our biological memory. These stores can reside in the brains of other people we know (if your friend Julie is an expert on gardening, then you know you can use her knowledge of plant facts to supplement your own memory) or in media technologies such as maps and books and microfilm. But we’ve never had an “external memory” so capacious, so available and so easily searched as the Web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of outside information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.
To me this paragraph exemplifies both the insights and the shortcomings of Carr’s approach: in particular, with the conclusion that “we may be entering an era in history in which we will store fewer and fewer memories inside our own brains,” he ends on a note of technological determinism that I think is both incorrect and counterproductive. Incorrect because we continue to have, and to make, choices about what we memorize, what we entrust to others, and what we leave to books or iPhones or the Web. Counterproductive because thinking we can’t resist the overwhelming wave of Google (or of technology more generally) disarms our ability to see that we can still choose to use technology in ways that suit us, rather than in the ways that Larry and Sergei, or Tim Cook, or Bill Gates, want us to use it.
The question of whether we should memorize something is, in my view, partly practical, partly… moral, for lack of a better word. Once I got a cellphone, I stopped memorizing phone numbers, except for my immediate family’s: in the last decade, the only new numbers I’ve committed to memory are my wife’s and kids’. I carry my phone with me all the time, and it’s a lot better than me at remembering the number of the local taqueria, the pediatrician, and so on. However, in an emergency, or if I lose my phone, I still want to be able to reach my family. So I know those numbers.
Remembering my family’s numbers also feels to me like a statement that these people are different, that they deserve a space in my mind that no one else gets. It’s like birthdays: while I’m not always great at it, I really try to remember the birthdays of relatives and friends, because that feels to me like something a considerate person does.
The point is, we’re still perfectly capable of making rules about what we do and don’t remember, and of making choices about where in our extended minds we store things. Generally I don’t memorize things that I won’t need after the zombie apocalypse. But I do seek to remember all sorts of other things, and despite working in a job that invites perpetual distraction, I can still do it. We all can, despite the belief that Google makes it harder. Google is a search engine, not a Free Will Destruction Machine. Unless we act like it’s one.