Making an accessible inquiry, but avoiding breezy neurobabble. . .

Excerpt from a recent cover story in ACM Interactions.

What is it like to make an amateur inquiry into attention? If you’re not a Zen master or a neuroscientist, how do you come away with meaningful results and not just more overload? In particular, how do you avoid neurobabble on how technology makes “us” think this way or that? What if you just want to know a bit more about attention in order to ground yourself in an age of superabundant and increasingly ambient information?

Superabundance is the word for it. You hardly meet anyone who wants to go back to life as it was before the Net. Let us rejoice: all the world’s information at your fingertips, and no need to clutter your head with it! Well, information of a kind. And except when it gets in the way socially, or when corporations try to fence it off as their own. Abuses of superabundance do exist, and only time will tell which of those are correctable, much as it took 50 years to see what was wrong with car-based transportation. Incidentally, hindsight also seems helpful for remembering that people have often cursed at overload. Maybe always. For example, more than 400 years have passed since Erasmus, the first modern editor, famously lamented, “[I]s there no place to hide from this great flood of books?” [1] Today’s debates on overload often begin with something like this. If you believe the old truism about the ability to keep no more than seven things in mind at once, then there must have been overload ever since there were eight.

The world itself has been both the cause and the cure of overload. People of any era have been fascinated, intimidated, irritated, or numbed by the world. Media may of course up the ante on all of this. Just about any stage in the history of information technology—such as newspapers or recorded music—has been accused of cutting people off from the world, and from each other. Yet the natural world has always saturated the senses, and even without technological assistance, the senses mediate the world. Indeed, if they ever stopped filtering, you would quickly go mad.

The counterargument has its merits. The human mind has always loved to wander, but it never had quite such exquisite means of doing so. Never before did so much experiential saturation come from artifice, with such purposeful design of interface. What has changed is how much of the world offers appeal and not menace, novelty and not tedium, immediacy and not heartbreaking distance. Never before has such a spectrum of the perceptual field been so deliberately placed, or so deliberately engineered for cognition.

An inquiry into attention thus has to question capacity. How does a brain that evolved for one set of stimuli deal with a world now made of quite another? Despite humanity’s remarkable capacity for adaptive learning, how can the workings of attention ever adapt half as quickly as technology changes? How can attention adapt and change across the lifetime of an individual, a culture, or the species? For example, the case of food shows some lag. Humans have innate preferences for salt, sweets, and fat, since those were always rare in the past. Except now those are plentiful (at least for the luckiest billion humans) and people consume too much of them. Likewise with information: it becomes possible, even likely, to take in too much of what you want in search of what you actually need. There now exists an argument about information obesity, a state reached by relentless feeding on the equivalent of empty calories [2].

So while superabundance may be the best word overall, it seems better to talk of overconsumption than simply of overload. This puts a little more responsibility on the individual, so to speak. Superabundance makes it mandatory to know more about the workings of attention.

One usual debate on overconsumption concerns multitasking. OMG, is there anything humans do that has yet to be done while also texting? Is there any work people do that has yet to be done while also watching a movie? Misconceptions of multitasking must be costing somebody something. Of course, capacity varies. A soldier can walk and chew gum, but almost nobody can safely text and drive. At the proverbial cocktail party, you can monitor many conversations to choose how to move among them. But if two friends each speak to you simultaneously, especially if one speaks into each ear, you are going to miss something, if not everything, that each has said. No amount of practice seems to change that one.

In many cases, practice does help, of course. The question of habit seems central. To what extent does whatever you grow up with become normal? Self-described “digital natives” claim that this can be almost anything. If you grew up with technologies in a growing number of contexts and formats, they are just part of the world, and not so much of a distraction as they would be to people who learned the world without all that stuff. Cognitive scientists agree that many complex brain pathways are emergent and do adapt, especially through habit. “Neuroplasticity” definitely exists.

Not all pathways adapt, however. The workings of attention involve distinctly fixed processes for allocating mental resources. Amid the hierarchy by which the brain assembles cognition, switching costs and bottlenecks inevitably occur, especially when executing tasks. Moreover, according to some famous studies, they may occur especially for people who take pleasure in switching [3]. In this regard, leading cognitive scientists contend that effective multitasking is no more than a myth. It may feel good, but switching costs a lot, as executive processes must queue and load [4]. Instead, a truer productivity might involve alertness, at some less describable level, to tasks whose perceptual frames one is already in; it might consist of recognizing and engaging more features of context. For, of course, not all attention is deliberative.

Also easy to know but somehow difficult to remember: Not all attention is visual. For example, there are strong effects of interpersonal distance—stand a couple of inches closer to or farther from someone to see those at work. The ever-increasing use of media has made vision seem more dominant, however. (Who first said that the look and feel of technology is almost all look and almost no feel?) The jumpy nature of vision has made attention seem jumpy too. For example, one of the oldest metaphors in cognition is that of a spotlight. As vision keeps shifting, its selective focus does seem to illuminate. And the gaze does usually indicate where deliberative attention has been directed, finding things more quickly where it is expecting to find something. Because many such visual processes are relatively practical to study clinically—more so than situational awareness in the field, at least—early cognition literature may have had a bias toward them. But embodied frames of reference also matter. Vision alone does not explain how attention gets assembled; nor does it explain attention’s aspects of orientation and habit, or the importance of context. More recent cognitive research thus goes beyond the spotlight metaphor, and beyond selective attention, to understand fuller roles of embodiment [5].

Form informs. To inhabit habituates. Interaction designers who have studied activity theory understand those word-plays well. Contingencies of form and context affect what can be done with them, and therefore how they are known. Perhaps people from every era have had thoughts about how the world seems manifest, and how life and especially work have assumed a particular form. But now some interaction designers spend all day at this. To them it seems axiomatic that the intrinsic structure of a situation shapes what happens there. Not all that informs has been encoded and sent. Not all action requires procedures and names. The mind is not just a disembodied linguistic processor: Neuroscience has had a paradigm shift toward embodied cognition. For an expression of that shift, cognitive scientist and roboticist Andy Clark observed in the 1990s: “In general, evolved creatures will neither store nor process information in costly ways when they can use the structure of the environment and their operations upon it as a convenient stand-in for the information-processing operations concerned.” On the nature of engagement, Clark summarized: “[M]emory as pattern re-creation instead of data retrieval; problem solving as pattern completion and transformation; the environment as an active resource, and not just a domain problem; and the body as part of the computational loop, and not just an input device” [6]. This use of props and structures assists with the processes of externalization and internalization, which activity theorists show is important to learning and tacit knowledge. Amid masterful, habitual, embodied actions, not only do technologies become more usable, but indeed attention may seem effortless [10].

So as an initial summary on attention itself, several common misconceptions seem easy enough to identify: Not all attention is visual or selective like a spotlight. Not all debates on attention concern multitasking. Overload has always existed, but overload isn’t so much the problem as overconsumption. Superabundance is welcome, but it makes better attention practices more vital. Surroundings play a part in those practices. Not all attention is fragmented or paid; sometimes it just flows. Embodiment and orientation can be important components of attention. Not all attention involves thought. Not all interaction needs procedures and names. You don’t have to be a yoga teacher to say all of this. For, as interaction designers know, affordances shape knowing. You can sense that a surface might work as a step or a table without it having been designed or declared as such.

Notes: As the word “inquiry” suggests, almost all of these ideas began somewhere else. To consider attention itself, and especially attention to surroundings, I have long been mining the literature, as an amateur but in a persistent and organized way, and with the benefit of excellent research library databases that complement the open Net. A feature article such as this one simultaneously increases the need to make generalizations and decreases the space to cite their very many origins. Please accept a disclaimer that I do know where so many of these ideas have come from; only their juxtaposition and editorial synthesis is my own.

1. On history of early overload, search the work of Ann Blair or Geoff Nunberg.
2. On overload versus overconsumption, search the work of Linda Stone or Sherry Turkle.
3. On limits of multitasking, see the famous study: Ophir, E., Nass, C., and Wagner, A.D. Cognitive control in media multitaskers. Proc. of the National Academy of Sciences 106, 33 (2009).
4. On switching costs and the fallacies of multitasking, search the work of David Meyer.
5. On embodied cognition, search the work of Andy Clark, Lou Barsalou, Anthony Chemero, or George Lakoff and Mark Johnson.
6. For defining embodied cognition, the classic might be: Andy Clark, Being There: Putting Brain, Body, and World Together Again (1997).