Mobile HCI ’14: Why would I use around-device gestures?

Toronto is a fantastic city, which has made this conference so enjoyable.

At the Mobile HCI poster session I had some fantastic discussions with great people. There has been a lot of around-device interaction research presented at the conference this week, and many of the people I spoke to while presenting my poster asked: why would I want to do this?

That’s an important question, and the reasons it gets asked give some insight into when around-device gestures may and may not be useful. A lot of people said that if they were already holding their phone, they would just use the touchscreen to provide input. Others said they would raise the device to their mouth for speech input, or would even use the device itself to perform a gesture (e.g. shaking it).

In our poster and its accompanying paper, we focused on above-device gestures. We focus on a particular area of the around-device space – directly over the device – because we think this is where users are most likely to benefit from gestures. People typically keep their phones on flat surfaces: Pohl et al. found this in their around-device devices paper [link], Wiese et al. found it in their CHI ’13 study [link], and Dey et al. reported the same three years ago [link]. As such, gestures are very likely to be used over a phone resting on a surface.

Enjoying some local pilsner to wrap up the conference!

So, why would we want to gesture over our phones? My favourite example, and one which really seems to resonate with people, is using gestures to read recipes while cooking in the kitchen. Wet and messy hands, the risks of food contamination, the need for multitasking – these are all inherent parts of preparing food which can motivate using gestures to interact with mobile devices. Gestures would let me move through recipes on my phone while cooking, without having to first wash my hands. Gestures would let me answer calls while I multitask in the kitchen, without having to stop what I’m doing. Gestures would let me dismiss interruptions while I wash the dishes afterwards, without having to dry my hands.

This is just one scenario where we envisage above-device gestures being useful. Gestures are attractive for a variety of reasons in this context: touch input is inconvenient (I need to wash my hands first); touch input requires more engagement (I need to stop what I’m doing to focus); and touch input is unavailable (I need to dry my hands). I think the answer to why we would want to use these gestures is that they let us interact when other input is inconvenient. Our phones are nearby on surfaces so let’s interact with them while they’re there.

In summary, our work focuses on gestures above the device as this is where we see them being most commonly used. There are many reasons people would want to use around-device gestures but we think the most compelling ones motivate using above-device gestures.

Mobile HCI ’14: “Are you comfortable doing that?”

OCAD University, who are one of the Mobile HCI ’14 hosts, have some fantastic architecture on campus.

One of my favourite talks from the third day of Mobile HCI ’14 was Ahlstrom et al.’s paper on the social acceptability of around-device gestures [link]. In short: they asked users if they were comfortable doing around-device gestures. I think this is a timely topic because we’re now seeing around-device interfaces added to commercial smartphones: Samsung’s Galaxy S4 had hover gestures over the display and Google’s Project Tango added depth sensors to the smartphone form factor. Now that we’ve established ways of detecting around-device gestures, I feel it’s time to look at what those gestures should be and whether users are willing to use them.

In Ahlstrom’s paper, which was presented excellently by Pourang Irani, the authors ran three studies looking at different aspects of the social acceptability of around-device gestures. They focused mainly on gesture mechanics: gesture size, gesture duration, position relative to the device, and distance from it. When asked if they were comfortable doing gestures, users were most happy to gesture near the device (biased towards the side of their dominant hand) and found shorter interactions more acceptable.

They also looked at how spectators perceived these gestures, by opportunistically asking onlookers what they thought of someone who was using gestures nearby. What surprised me was that spectators found around-device gestures more acceptable in a wider variety of social situations than the users from the first studies. Does seeing other people perform gestures make those types of gesture input seem more acceptable?

Tonight I presented my poster [paper link] on our design studies for above-device gesture design. There were some similarities between our work and Ahlstrom’s; purely by coincidence, we both asked users if they were comfortable and willing to use certain gestures. However, we focused on what the gestures were, whereas they focused on other aspects of gesturing (e.g. gesture duration).

In our poster and paper we present design recommendations for creating around-device interactions which users find more usable and more acceptable. I think the next big step for around-device research is looking at how to map potential gestures to actions and identifying ways of making around-device input better. My PhD research focuses on the output side of things, looking at how we can design feedback to help users as they gesture in the space near devices. If you saw my poster tonight or had a chat with me, there’s more about the research here; tonight was fun so thanks for stopping by!

Mobile HCI ’14: Using Ordinary Surfaces for Interaction

Mobile HCI ’14 day one: a wee bit of Toronto and Henning Pohl’s idea of around-device devices.

Today was the first day of the papers program at Mobile HCI ’14 and amongst the great talks was one I particularly liked on the idea of “around-device devices” by Pohl et al. [link]. I’ve written before about around-device interaction, above-device interaction, and how the space around mobile devices can be used for gesturing. What’s novel about around-device devices, however, is that interaction in the around-device space is no longer limited to free-hand gestures relative to the device: nearby objects can become potential inputs in the user interface. One of the motivations for using nearby objects for interaction is that mobile devices are very commonly kept on surfaces – tables, desks, kitchen worktops – which are also used for storing objects. In the post title I call these ordinary surfaces to distinguish the idea from interactive surfaces.

The example Henning Pohl gives in the paper title is “my coffee mug is a volume dial”. I think this example captures the idea of around-device devices well: mugs, being cylindrical objects, afford certain interactions – in this case, being rotated. There’s implicit physical feedback from interacting with a tangible object, which could make interaction easier. Also, using nearby objects provides many of the benefits which around-device gestures give: a larger interaction space, unoccluded content on the device screen, potential for more expressive input, etc.

Exploring Toronto during today’s lunch break. Looking out across downtown from the Blue Jays’ stadium.

Another interesting paper from today was about Toffee, by Xiao et al. [link]. Sticking with the around-device interaction theme, they looked at whether piezo actuators could be used to localise taps and knocks on surrounding table surfaces. Like around-device devices, this is another way of making use of nearby ordinary surfaces for input. They found that taps could be most reliably localised when given using more solid objects, like touch styluses or knuckles. Softer points, like fingertips, were more difficult to localise. Because of the characteristics of its tap localisation approach, Toffee would be ideal for radial input around devices – estimating the direction of a tap rather than its exact position.
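To give a flavour of how angle-of-arrival tap sensing could work, here is a minimal sketch of a far-field time-difference-of-arrival calculation for two vibration sensors. This is not Toffee’s actual algorithm, just an illustration of the underlying idea; the wave speed and sensor spacing below are assumed values.

```python
import math

# Minimal sketch of tap bearing from time-difference-of-arrival (TDOA) at two
# piezo sensors. Not Toffee's actual algorithm - a far-field approximation:
# sin(theta) = (wave_speed * time_difference) / sensor_separation.

WAVE_SPEED_M_S = 1000.0      # assumed propagation speed in the tabletop
SENSOR_SEPARATION_M = 0.12   # assumed distance between the two piezo sensors

def tap_bearing(delta_t_s):
    """delta_t_s: arrival-time difference between the two sensors, in seconds.
    Returns the tap bearing in degrees from the perpendicular between them."""
    s = (WAVE_SPEED_M_S * delta_t_s) / SENSOR_SEPARATION_M
    s = max(-1.0, min(1.0, s))       # clamp for numerical safety
    return math.degrees(math.asin(s))

print(tap_bearing(60e-6))    # tap offset to one side -> roughly 30 degrees
```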

I like both of these papers because they push the around-device interaction space a little beyond mid-air free-hand gestures, in both cases using ordinary surfaces as part of the interaction. I know this has been done before with interfaces like SideSight and Qian Qin’s Dynamic Ambient Lighting for Mobile Devices, but I think it’s important that others are exploring this space further.

ICMI ’14 Paper Accepted

My full paper, “Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions”, was accepted to ICMI 2014. It was also accepted for oral presentation rather than poster presentation, so I’m looking forward to that!

Tactile Feedback for Above-Device Interaction.

In this paper we looked at tactile feedback for above-device interaction with a mobile phone. We compared direct tactile feedback to distal tactile feedback from wearables (rings, smart-watches) and ultrasound haptic feedback. We also looked at different feedback designs and investigated the impact of tactile feedback on performance, workload and preference.

Array of Ultrasound Transducers for Ultrasound Haptic Feedback.

We found that tactile feedback had no impact on input performance but did significantly reduce workload (making it easier to interact). Users also significantly preferred tactile feedback to no tactile feedback. More details are in the paper [1] along with design recommendations for above- and around-device interface designers. I’ve written a bit more about this project here.

Video

The following video (including an awful typo in the last scene!) shows the two gestures we used in these studies.

References

[1] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.

What Is Around-Device Interaction?

One of my biggest research interests is gesture interaction with mobile devices, also known as around-device interaction because users interact in the space around the device rather than on the device itself. In this post I’m going to give a brief overview of what around-device interaction is, how gestures can be sensed from mobile devices and how these interactions are being realised in commercial devices.

Why Use Around-Device Interaction?

Why would we want to gesture with mobile devices (such as phones or smart watches) anyway? These devices typically have small screens which we interact with in a very limited fashion; using the larger surrounding space lets us interact in more expressive ways and lets the display be utilised fully, rather than our hand occluding content as we reach out to touch the screen. Gestures also let us interact without having to first lift our device, meaning we can interact casually from a short distance. Finally, gesture input is non-contact so we can interact when we would not want to touch the screen, e.g. when preparing food and wanting to navigate a recipe but our hands are messy.

Sensing Around-Device Input

Motivated by the benefits of expressive non-contact input, HCI researchers have developed a variety of approaches for detecting around-device input. Early approaches used infrared proximity sensors, similar to the sensors used in phones to lock the display when we hold our phone to our ear. SideSight (Butler et al. 2008) placed proximity sensors around the edges of a mobile phone, letting users interact in the space beside the phone. HoverFlow (Kratz and Rohs 2009) took a similar approach, although their sensors faced upwards rather than outwards. This let users gesture above the display. Although this meant gesturing occluded the screen, users could interact in 3D space; a limitation of SideSight was that users were more or less restricted to a flat plane around the phone.
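As a concrete illustration, here is a minimal sketch of how a row of outward- or upward-facing proximity sensors could be turned into a simple swipe detector by looking at the order in which the sensors see the hand. This is not the algorithm used by SideSight or HoverFlow; the readings and threshold are hypothetical.

```python
# Minimal sketch: infer swipe direction from a row of IR proximity sensors.
# Readings are hypothetical distances (cm); a low value means the hand is close.

HAND_THRESHOLD_CM = 8.0   # assumed detection threshold

def detect_swipe(frames):
    """frames: one list of per-sensor readings per time step.
    Returns 'left-to-right', 'right-to-left', or None."""
    activation_order = []   # sensor indices, in the order they first saw the hand
    for readings in frames:
        for i, distance in enumerate(readings):
            if distance < HAND_THRESHOLD_CM and i not in activation_order:
                activation_order.append(i)
    if len(activation_order) < 2:
        return None                      # hand did not cross enough sensors
    if activation_order[-1] > activation_order[0]:
        return "left-to-right"
    if activation_order[-1] < activation_order[0]:
        return "right-to-left"
    return None

# Example: a hand passing over three sensors from left to right.
frames = [
    [5.0, 20.0, 20.0],    # hand over sensor 0
    [12.0, 6.0, 20.0],    # hand over sensor 1
    [20.0, 15.0, 4.0],    # hand over sensor 2
]
print(detect_swipe(frames))   # -> left-to-right
```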

Abracadabra (Harrison and Hudson 2009) used magnetic sensing to detect input around a smart-watch. Users wore a magnetic ring which affected the magnetic field around the device, letting the watch determine finger position and detect gestures. This let users interact with a very small display in a much larger area (an example of what Harrison called “interacting with small devices in a big way” when he gave a presentation to our research group last year) – something today’s smart-watch designers should consider. uTrack (Chen et al. 2013) built on this approach with additional wearable sensors. MagiTact (Ketabdar et al. 2010) used a similar approach to Abracadabra for detecting gestures around mobile phones.
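The underlying idea of magnetic sensing can be illustrated with a small sketch: subtract the ambient magnetic field from the magnetometer reading and take the angle of the remaining vector as the ring’s bearing around the watch. This is a simplified illustration, not the actual Abracadabra or MagiTact tracking method, and the field values are hypothetical.

```python
import math

# Minimal sketch: estimate the bearing of a magnetic ring around a watch
# from 2-axis magnetometer readings (hypothetical values, in microtesla).

def ring_bearing(reading, baseline):
    """reading, baseline: (x, y) magnetometer samples.
    Returns the ring's bearing around the watch in degrees."""
    dx = reading[0] - baseline[0]   # field contributed by the ring only
    dy = reading[1] - baseline[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

baseline = (22.0, -5.0)    # ambient field, sampled with no ring nearby
sample = (30.0, 3.0)       # field with the ring held beside the watch
print(f"ring at {ring_bearing(sample, baseline):.1f} degrees")
```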

So far we’ve looked at two approaches for detecting around-device input: infrared proximity sensors and magnetic sensors. Researchers have also developed camera-based approaches. Most mobile phone cameras can be used to detect around-device gestures within the camera field of view, which can be extended using approaches such as Surround-see (Yang et al. 2013). Surround-see placed an omni-directional lens over the camera, giving the phone a complete view of its surrounding environment. Users could then gesture from even further away (e.g. across the room) because of the complete field of view.
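For camera-based sensing, a very simple way to detect coarse swipe gestures is to look at the dominant direction of optical flow between consecutive frames. The sketch below uses OpenCV’s Farnebäck optical flow; it is a simplified illustration rather than the method used by Surround-see or any of the systems above, and the flow threshold is an assumed value.

```python
import cv2
import numpy as np

# Minimal sketch: classify left/right hand swipes in front of the camera from
# the dominant direction of optical flow between consecutive frames.

SWIPE_FLOW_THRESHOLD = 2.0   # assumed mean horizontal flow (pixels/frame)

def main():
    cap = cv2.VideoCapture(0)            # front-facing camera
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mean_dx = float(np.mean(flow[..., 0]))   # average horizontal motion
        if mean_dx > SWIPE_FLOW_THRESHOLD:
            print("swipe right")
        elif mean_dx < -SWIPE_FLOW_THRESHOLD:
            print("swipe left")
        prev = gray
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

if __name__ == "__main__":
    main()
```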

Others have proposed using depth cameras for more accurate camera-based hand tracking. I was excited when Google revealed Project Tango earlier this year because a mobile phone with a depth sensor and processing resources dedicated to computer vision is a step closer to realising this type of interaction. While mobile phones can already detect basic gestures using their magnetic sensors and cameras, depth cameras, in my opinion, would allow more expressive gestures without having to wear anything (e.g. magnetic accessories).

We’re also now seeing low-powered alternative sensing approaches, such as AllSee (Kellogg et al. 2014) which can detect gestures using ambient wireless signals. These approaches could be ideal for wearables which are constrained by small battery sizes. Low-power sensing could also allow always-on gesture sensing; this is currently too demanding with some around-device sensing approaches.

Commercial Examples

I have so far discussed a variety of sensing approaches found in research. This is by no means a comprehensive survey of around-device gesture recognition, but it shows the wide variety of possible approaches and identifies some seminal work in this area. Now I will look at some commercial examples of around-device interfaces, to show that there is interest in moving interaction away from the touch-screen and into the around-device space.

Perhaps the best known around-device interface is the Samsung Galaxy S4. Samsung included features called Air View and Air Gesture which let users gesture above the display without having to touch it. Users could hover over images in a gallery to see a larger preview and could navigate through a photo album by swiping over the display. A limitation of the Samsung implementation was that users had to be quite close to the display for gestures to be detected – so close that they may as well have used touch input!

Nokia also included an around-device gesture in an update for some of their Lumia phones last year. Users could peek at their notifications by holding their hand over the proximity sensor briefly. While just a single gesture, this let users check their phones easily without unlocking them. With young smartphone users reportedly checking their phones more than thirty times per day (BBC Newsbeat, 4th April 2014), this is a gesture that could get a lot of use!

There are also a number of software libraries which use the front-facing camera to detect gesture input, allowing around-device interaction on typical mobile phones.

Conclusion

In this post we took a quick look at around-device interaction. This is still an active research area and one where we are seeing many interesting developments – especially as researchers are now focusing on issues other than sensing approaches. With smartphone developers showing an interest in this modality, identifying and overcoming interaction challenges is the next big step in around-device interaction research.