Gestures Are Not “Natural”

I’m sitting in Helsinki Airport on my way home from NordiCHI. It’s been a great conference and I’ve had a lot of fun exploring Helsinki too. Despite a very grey start to the week, the sun eventually came out and illuminated the bright colours of Helsinki’s beautiful architecture.

In this post I’m going to explain why I think gesture interaction is not “natural” or “intuitive”. A few talks this week justified using gestures on the grounds that they are “natural”, and I don’t think that’s really true. There are several practical realities that get in the way, and as long as people keep thinking of gestures as “natural”, we won’t overcome those issues. None of what I’m saying is new; we’ve known it for years. Heck, Don Norman said the same thing (about “natural user interfaces”) and made many of the same points. This post is inspired by a discussion over coffee at NordiCHI!

Why Gesture Interaction Isn’t “Natural”

The Midas Touch Problem

In gesture interaction, the Midas Touch problem is that any sensed movements may be treated as input. Since gestures are “natural” movements which we perform in everyday life, that means that everyday movements may be treated as gestures! Obviously this is undesirable. If I’m being sensed by one or more interfaces in the surrounding environment then I don’t want hand movements I use in conversation, for example, to be treated as input to some interface.

Many solutions to the Midas Touch problem exist, including clutch actions. A clutch action is one which begins or ends part of an interaction. A familiar example from speech input is saying “OK Google” to activate voice input for Google Now or Google Glass. In gesture interaction, a clutch may be a particular gesture (often called an activation gesture) or body pose (like the Teapot gesture in StrikeAPose). Other alternatives include activation zones or using some other input modality as a clutch, such as pressing a button or giving a voice command.
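
To make the clutch idea concrete, here’s a minimal sketch in Python (entirely my own illustration, not from StrikeAPose or any real system) of how an activation pose might gate gesture input. The frame.matches_activation_pose() helper and the recogniser object are hypothetical stand-ins for whatever the sensing pipeline provides.

```python
# A minimal sketch (my own illustration) of a clutch that gates gesture input.
# Frames are ignored until the user holds a hypothetical activation pose for
# long enough; only then is subsequent movement treated as gesture input.

ACTIVATION_HOLD_FRAMES = 30   # e.g. roughly 1 second at 30 fps
IDLE_TIMEOUT_FRAMES = 90      # release the clutch after ~3 seconds of nothing


class ClutchedGestureInput:
    def __init__(self, recogniser):
        self.recogniser = recogniser  # anything with a recognise(frame) method
        self.active = False
        self.hold_count = 0
        self.idle_count = 0

    def on_frame(self, frame):
        """Process one frame of sensed hand/body data."""
        if not self.active:
            # Everyday movements are ignored; only the clutch pose counts.
            if frame.matches_activation_pose():
                self.hold_count += 1
                if self.hold_count >= ACTIVATION_HOLD_FRAMES:
                    self.active = True
                    self.idle_count = 0
            else:
                self.hold_count = 0
            return None

        # Clutch engaged: movements are now treated as input.
        gesture = self.recogniser.recognise(frame)
        if gesture is None:
            self.idle_count += 1
            if self.idle_count >= IDLE_TIMEOUT_FRAMES:
                self.active = False   # release the clutch after inactivity
                self.hold_count = 0
        else:
            self.idle_count = 0
        return gesture
```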

Regardless of how you address the Midas Touch problem, you’re moving further away from something people do “naturally”: users have to perform some action that exists purely for interacting with the system.

The Address Problem

In their Making Sense of Sensing Systems paper, Bellotti et al. (2002) described the problem of how to address a sensing system. Users need to be able to direct their input towards the system they intend to interact with; that is, they must be able to address the desired interface. This is more of a problem in environments with more than one sensing interface. Given that the HCI community is using gestures in more and more contexts, it’s reasonable to assume that we’ll eventually have many gesture interfaces in our environments. We need to be able to address interfaces to avoid our movements accidentally affecting others (another variant of the Midas Touch problem).

In conversation and other human interactions, we typically address each other using cues such as body language or eye contact. This isn’t necessarily possible with sensing interfaces, as implicitly detecting the intention to interact is challenging. Instead, we’ll need more explicit interaction techniques to help users address interfaces. As with the Midas Touch problem, this is not “natural”.
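
As a rough illustration of what a more explicit addressing technique might look like in software (my own sketch, not something from Bellotti et al.), the snippet below routes recognised gestures only to the interface the user has most recently addressed; how the address action itself is detected, say by pointing and dwelling, is left as an assumption.

```python
# A minimal sketch of explicit addressing: several sensing interfaces may see
# the same movements, but gestures are only routed to the one interface the
# user has explicitly addressed. The address action itself (pointing, dwelling,
# a voice command, etc.) is assumed to be detected elsewhere.

class AddressedGestureRouter:
    def __init__(self, interfaces):
        self.interfaces = interfaces  # dict: name -> interface object
        self.addressed = None         # currently addressed interface, if any

    def on_address(self, interface_name):
        """Called when the user performs an explicit address action."""
        if interface_name in self.interfaces:
            self.addressed = interface_name

    def on_gesture(self, gesture):
        """Route a recognised gesture only to the addressed interface."""
        if self.addressed is None:
            return  # nobody has been addressed; ignore (avoids Midas Touch)
        self.interfaces[self.addressed].handle(gesture)


# Usage (hypothetical lamp and tv objects, each with a handle(gesture) method):
#   router = AddressedGestureRouter({"lamp": lamp, "tv": tv})
#   router.on_address("lamp")
#   router.on_gesture("swipe_left")   # only the lamp receives this
```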

The Sensing Problem

Unlike the gestures we make (intentionally or unintentionally) in everyday life, gestures used as input must meet certain conditions. Depending on the sensing method, users have to perform a gesture in a way that the system can understand. For example, if users must move their hand across their body from right to left, then this movement must be done in a way that can be recognised. This may involve directly facing a gesture sensor, slowing the movement down, moving in a perfectly horizontal line, or exaggerating aspects of the gesture. In trying to make gestures understood by sensors, users perform more rigid and forced movements. Again, this is not “natural”.
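
To illustrate, here’s a toy right-to-left swipe recogniser (my own sketch, with made-up thresholds) that only accepts movements which are long enough, roughly horizontal, and performed at a pace the sensor can follow; a casual, everyday version of the same movement would often be rejected.

```python
# A toy sketch of why sensing constrains movement: this hypothetical swipe
# recogniser only accepts a right-to-left hand movement if it covers enough
# distance, stays roughly horizontal, and is neither too slow nor too fast.

import math

def recognise_swipe_left(path, frame_rate=30.0,
                         min_length_m=0.30,          # must cover at least 30 cm
                         max_vertical_drift_m=0.10,  # must stay roughly horizontal
                         min_speed=0.3, max_speed=2.0):  # metres per second
    """path is a list of (x, y) hand positions in metres, oldest first."""
    if len(path) < 2:
        return False

    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    duration = (len(path) - 1) / frame_rate
    speed = math.hypot(dx, dy) / duration

    return (dx <= -min_length_m and              # moved right to left far enough
            abs(dy) <= max_vertical_drift_m and  # didn't drift up or down much
            min_speed <= speed <= max_speed)     # paced for the sensor
```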

Implications for HCI

Although there are other reasons why gesture interaction should not be considered a “natural” or “intuitive” input modality, I think these are the three most important ones. All of them result in users performing hand or body movements that are very specific to interaction, much like how we speak to computers differently than we speak to other people. I think speech input is another modality that is considered “natural” but suffers from similar problems.

I’m not sure if we’ll ever solve these problems from the computer side of gesture interaction. It would be nice, but it’s asking for a lot. Instead, we should embrace the fact that gestures are not “natural” and do what we’re good at in HCI: finding solutions for overcoming problems with technology. We need to design interaction techniques which acknowledge the unnatural aspects of gesture input in order for gestures to be usable outside of our lab studies and in an intelligent world filled with sensing interfaces and devices.

NordiCHI ’14: Beyond the Switch and Nokia

At the moment I’m in Helsinki for NordiCHI 2014. Yesterday I was taking part in the Beyond the Switch workshop, where we discussed interactive lighting and interaction – both implicit and explicit – with light sources.

It was interesting to learn more about how people interact with light and hear about how others are using light in their own research and products. I’m hoping that I also brought an interesting point of view, as an “outsider” in this community. In our research we use interactive light as an output modality, exploiting the increasing connectivity of “smart” light sources. As new and existing light sources are developed with interactivity in mind, I think we’ll start to see others using light in new ways too.

In the first half of the workshop we presented our position papers and identified some interesting topics and challenges which arose in these presentations. After some discussion – and lots of post-it notes – we arranged these topics into themes. Two of the bigger themes which emerged in the workshop were (in my own words): semantics and “natural” interaction with light.

Many of the questions which emerged from the discussion were about what light actually means and how we can use light to represent information. This was relevant to our own research, as we use interactive light to encode information about gesture interaction. I think an interesting area for future research would be understanding which properties of light best represent which types of information, and what makes a “good” interactive light encoding. There is already research which has started to look at some of these design challenges, although more is needed.
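
As a toy example of what I mean by an encoding (again my own illustration, not something we presented at the workshop), the sketch below maps a gesture recogniser’s confidence onto the hue and brightness of a light, so the light “warms up” as recognition becomes more certain.

```python
# A toy sketch of one possible light encoding: mapping how close a gesture is
# to being recognised onto the hue and brightness of a light. The mapping and
# value ranges are arbitrary choices for illustration.

def encode_progress_as_light(progress):
    """progress in [0, 1] -> (hue_degrees, brightness) for some light."""
    progress = max(0.0, min(1.0, progress))
    hue = 240.0 - 240.0 * progress      # blue (uncertain) towards red (recognised)
    brightness = 0.2 + 0.8 * progress   # dim when idle, bright when confident
    return hue, brightness


# e.g. encode_progress_as_light(0.0) -> (240.0, 0.2)   cold and dim
#      encode_progress_as_light(1.0) -> (0.0, 1.0)     warm and bright
```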

In the second half of the workshop we thought more about “natural” interaction. I put quotes around the word natural because it, along with “intuitive”, is a bad word in HCI (according to Steve Brewster, anyway)! As the workshop was about interaction beyond the switch, however, there was a lot of interest in how else we could interact with light. We split up into teams and each focused on different aspects of interaction with light. My team looked at explicit control of light, whilst the other focused on implicit interaction with light. Overall, it was a fun workshop. Lots of really cool demos of Philips Hue, too!

Earlier today I visited Vuokko at Nokia Technologies in Otaniemi to give a talk about my PhD research. After two years of working with and being funded by Nokia, it was nice to finally visit them in Finland! My slides from the talk are available here.

Now that my workshop and presentation at Nokia are finished, I have the rest of the week to enjoy the conference. I’m looking forward to exploring Helsinki more. I’ve walked quite a lot since I got here – mostly at night – and it’s been fun to see the city. Tomorrow morning the main conference program begins with a keynote by Don Norman.

NordiCHI Workshop Paper on Interactive Light Feedback

Our position paper, “Illuminating Gesture Interfaces with Interactive Light Feedback” [1], was accepted to the NordiCHI Beyond the Switch: Explicit and Implicit Interaction with Light workshop. In it, we discuss examples of light being used in gesture interfaces and describe how we use interactive light to give feedback about gesture interaction in one of our own gesture interfaces. The paper will be available on the workshop website by the end of September.

An example of interactive light feedback from an early prototype.

[1] Illuminating Gesture Interfaces with Interactive Light Feedback. Position paper, NordiCHI 2014 workshop on Beyond the Switch: Explicit and Implicit Interaction with Light.