Attention and Gesture
“To create successful animation, you must understand why an object moves before you can figure out how it should move. Character animation isn’t the fact that an object looks like a character or has a face or hands. Character animation is when an object moves like it is alive, when it looks like it is thinking and all of its movements are generated by its own thought process. It is the change of shape that shows that a character is thinking. It is the thinking that gives the illusion of life. It is the life that gives meaning to the expression. As Saint-Exupéry wrote, ‘It’s not the eyes, but the glance – not the lips, but the smile…’” — John Lasseter, Chief Creative Officer at Pixar, from his SIGGRAPH course on animation.
I’ve been thinking a lot recently about how technologies manage our attention, and I’ve found this quote from John Lasseter, and particularly the reference to Saint-Exupéry, helpful in designing technologies and interfaces that are more attention-aware.
We’ve been stuck in a phase of designing ‘shouty’ things: things that don’t take into account how contexts change, and that our attention shouldn’t be assumed. Technologies have been designed, in large part, around having our sole attention, anytime, anywhere, when this full-attention mode is actually the exception rather than the rule. The side effect is that we struggle to find strategies to manage our attention, creating what has been termed an “attention deficit”, an inability to focus, or “continuous partial attention”. Heavy users of email, Twitter and Facebook are no strangers to such poverty of attention, and notification services like Growl only heighten this now, now, now design imperative.
An economy of gestures by “users” is well understood now, and gaining widespread adoption (‘swipe’), mainly thanks to Apple, but technologies haven’t been so good at their own gestures, at ways of feeding back. Sound-sensitive car stereos that change volume depending on ambient noise, or when you take a call, are one fairly mainstream example of a context-sensitive feedback loop. But I’ve struggled to find examples of software doing this. It’s either “on” or “off”, full power or no power, all your attention or none.

Human gestures we’re programmed to understand, such as a ‘glance’ or eye contact when talking, provide possible cues for design. Creating focal points which change the more we engage with an application, perhaps? Creating friction, in the form of delays before an app responds after it’s been left for a long time, or when it’s opened continuously, mimicking the kind of responses we’d get from social contact, could create more useful ways to manage attention. To return to the original quote: maybe we’re focusing on the lips, the UI, when we should, perhaps, be looking at the smile.
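As a rough sketch of what that kind of friction might look like in software: a response delay that grows the longer an app has been ignored, a little like social contact cooling off. Everything here, the class name, the thresholds, the one-hour ramp, is my own illustration, not a concrete proposal.

```python
import time

class AttentionAwareApp:
    """Illustrative sketch: responds instantly when used often,
    and adds a small, deliberate delay the longer it's been ignored."""

    def __init__(self, max_delay=2.0):
        self.max_delay = max_delay  # most friction we'd ever add, in seconds
        self.last_interaction = time.monotonic()

    def response_delay(self, now=None):
        """No delay if used within the last minute; friction then ramps
        up linearly, saturating at max_delay after an hour away."""
        now = time.monotonic() if now is None else now
        idle = now - self.last_interaction
        if idle < 60:  # used recently: respond at once
            return 0.0
        ramp = min(idle, 3600) / 3600  # fraction of an hour idle, capped
        return self.max_delay * ramp

    def interact(self, now=None):
        """Record an interaction and return the friction it incurred."""
        delay = self.response_delay(now)
        self.last_interaction = time.monotonic() if now is None else now
        return delay
```

The point isn’t the arithmetic; it’s that the app has a gesture of its own, a slight hesitation that signals the relationship has gone cold, rather than snapping to full attention on demand.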