Interactive Light Demo at Interact ’15

This week I’ve been in Bamberg, Germany, presenting a poster and an interactive demo at Interact 2015. If you’ve stumbled across this website via my poster, or if you tried my demo at the conference, then it was nice meeting you and I hope you had some fun with it! If you’re looking for more information about the research then I’ve written a little about it here: http://euanfreeman.co.uk/interactive-light-feedback/

For some earlier research, where we looked at using tactile feedback for in-air gestures, see: http://euanfreeman.co.uk/projects/above-device-tactile-feedback/

Gestures

Hand gestures. Photo by Charles Haynes: CC BY-SA.

I want to make gesture interaction – interacting with computers through hand movements in mid-air – easier and more enjoyable to use. My PhD research focused on helping users address gesture systems, especially when those systems only have limited capabilities for providing feedback. I’ve written a lot about gestures and problems with gesture interaction, so this page brings that information together to give an overview of why gestures are difficult and how we might make them better.

Addressing Gesture Systems

When users want to interact with an in-air gesture system, they must first address it. This involves finding where to perform gestures, so that they can be sensed, and finding out how to direct input towards the system they want to interact with. During my PhD, I developed and evaluated interaction techniques for addressing in-air gesture systems. You can read more about this here. A related challenge is finding where to put your hands for mid-air interfaces, especially when mid-air haptics are used. I developed a system called HaptiGlow that helped users find a good hand position for mid-air input.

Above- and Around-Device Interaction

My research often looks at gestures in close proximity to small devices, either above (for example, gesturing over a phone on a table) or around (for example, gesturing behind a device you are holding with your other hand) those devices. I give an introduction to around-device interaction here, present some research and guidelines for above-device interaction with phones here, and discuss our work on above-device tactile feedback here. I also explain why we would want to use these types of gestures here.

Gestures Are Not “Natural”

In this post I outline three gesture interaction problems (the Midas Touch problem, the address problem and the sensing problem) and what implications these have for gesture interaction. In short, we should not think of gestures as being “natural” because there are many practical issues we must overcome to make them usable.

Novel Gesture Feedback

My PhD research looks at how we can move feedback about gestures off the screen and into the space around devices instead. I’ve written about tactile feedback for gestures here. I’ve also written about interactive light feedback, a novel type of display, for gestures here.

Gestures In Multimodal Interaction

Here I talk about two papers from 2014 where gestures are considered as part of multimodal interactions. While this idea was notably demonstrated in the 1980s, it still hasn’t reached mainstream computing. Perhaps this is about to change with new technologies.

Gestures With Touch

I’ve always seen gestures as an alternative interaction technique, available when others like speech or touch are unavailable or less convenient. For example, gestures could be used to browse recipes without touching your tablet and getting it messy, or could be used for short ‘micro-interactions’ where gestures from a distance are better than approaching and touching something.

Recently, two papers at UIST ’14 looked at using gestures alongside touch, rather than instead of it. I really like this idea and I’m going to give a short overview of those papers here. Combining hand gestures with other interaction techniques isn’t new, though; an early and notable example from 1980 was Put That There, where users interacted using voice and gesture together.

In Air+Touch, Chen and others looked at how fingers may move and gesture over touchscreens while also providing touch input. They grouped interactions into three types: gestures happening before touch, gestures happening between touches, and gestures happening after touch. They also identified various finger movements which can be used over touchscreens but which are distinct from incidental movements, including circular paths, sharp turns and jumps into a higher-than-normal space over the screen. In Air+Touch, users gestured and touched with one finger. This lets users provide more expressive input than touch alone allows.

In contrast to this unimanual input (one hand, rather than one input modality), Song and others looked at bimanual input. They focused on gestures in the wider space around the device, using the non-touching hand for gestures. As users interacted with mobile devices using touch with one hand, the other hand could gesture nearby to access other functionality. For example, they describe how users may browse maps with touch while using gestures to zoom in or out of the map.

While each of these papers takes a different approach to combining touch and gesture, they share some similarities. Touch can be used to help segment input: rather than detecting gestures at all times, interfaces can look only for gestures which occur around touch events, so touch implicitly acts as a clutch mechanism. Clutching helps avoid accidental input and saves power, as gesture sensing doesn’t need to happen all the time.
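To make the clutching idea concrete, here is a minimal sketch (my own illustration, not code from either paper) of gating a gesture recogniser on recent touch activity. The half-second window, the frame format and the recogniser callable are all assumptions for the example:

```python
import time

CLUTCH_WINDOW_S = 0.5   # only look for gestures within 0.5 s of a touch event

class TouchClutchedRecognizer:
    """Gate an expensive gesture recogniser on recent touch activity."""

    def __init__(self, recognizer):
        self.recognizer = recognizer          # any callable taking a hand frame
        self.last_touch_event = float('-inf')

    def on_touch_event(self):
        # Called on touch-down and touch-up; opens the clutch window.
        self.last_touch_event = time.monotonic()

    def on_hand_frame(self, frame):
        # Hand-tracking frames outside the clutch window are ignored, which
        # avoids accidental input and lets gesture sensing idle between touches.
        if time.monotonic() - self.last_touch_event <= CLUTCH_WINDOW_S:
            return self.recognizer(frame)
        return None
```

The same gate also means the hand tracker itself could be duty-cycled between touches to save power.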

Both also demonstrate using gestures for easier context switching and secondary tasks. Users may gesture with their other hand to switch map mode while browsing, or may lift their finger between swipes to change mode. Gestures are mostly used for discrete secondary input rather than as continuous primary input, although continuous input is certainly possible. There are similarities between these concepts and AD-Binning from Hasan and others, who used around-device gestures for accessing content while interacting with that content using touch with the other hand.


Mobile HCI ’14: Why would I use around-device gestures?

Toronto is a fantastic city, which has made this conference so enjoyable.

At the Mobile HCI poster session I had some fantastic discussions with some great people. There’s been a lot of around-device interaction research presented at the conference this week, and many of the people I spoke to while presenting my poster asked: why would I want to do this?

That’s a very important question and the reason it gets asked can maybe give some insight into when around-device gestures may and may not be useful. A lot of people said that if they were already holding their phone, they would just use the touchscreen to provide input. Others said they would raise the device to their mouth for speech input or would even use the device itself for performing a gesture (e.g. shaking it).

In our poster and its accompanying paper, we focused on above-device gestures: a particular area of the around-device space, directly over the device, where we think users are most likely to benefit from using gestures. People typically keep their phones on flat surfaces – Pohl et al. found this in their around-device device paper [link], Wiese et al. [link] found it in their CHI ’13 study, and Dey et al. [link] found it three years ago. As such, gestures are very likely to be performed over a phone lying on a surface.

Enjoying some local pilsner to wrap up the conference!

So, why would we want to gesture over our phones? My favourite example, and one which really seems to resonate with people, is using gestures to read recipes while cooking in the kitchen. Wet and messy hands, the risks of food contamination, the need for multitasking – these are all inherent parts of preparing food which can motivate using gestures to interact with mobile devices. Gestures would let me move through recipes on my phone while cooking, without having to first wash my hands. Gestures would let me answer calls while I multitask in the kitchen, without having to stop what I’m doing. Gestures would let me dismiss interruptions while I wash the dishes afterwards, without having to dry my hands.

This is just one scenario where we envisage above-device gestures being useful. Gestures are attractive for a variety of reasons in this context: touch input is inconvenient (I need to wash my hands first); touch input requires more engagement (I need to stop what I’m doing to focus); and touch input is unavailable (I need to dry my hands). I think the answer to why we would want to use these gestures is that they let us interact when other input is inconvenient. Our phones are nearby on surfaces so let’s interact with them while they’re there.

In summary, our work focuses on gestures above the device as this is where we see them being most commonly used. There are many reasons people would want to use around-device gestures but we think the most compelling ones motivate using above-device gestures.

Mobile HCI ’14: “Are you comfortable doing that?”

OCAD University, one of the Mobile HCI ’14 hosts, has some fantastic architecture on campus.

One of my favourite talks from the third day of Mobile HCI ’14 was Ahlstrom et al.’s paper on the social acceptability of around-device gestures [link]. In short: they asked users if they were comfortable doing around-device gestures. I think this is a timely topic because we’re now seeing around-device interfaces added to commercial smartphones. Samsung’s Galaxy S4 had hover gestures over the display and Google’s Project Tango added depth sensors to the smartphone form factor. I feel that now we’ve established ways of detecting around-device gestures, it’s time to look at what around-device gestures should be and whether users are willing to use them.

In Ahlstrom’s paper, which was presented excellently by Pourang Irani, they ran three studies looking at different aspects of the social acceptability of around-device gestures. They looked mainly at gesture mechanics: gesture size, gesture duration, position relative to the device and distance from the device. When asked if they were comfortable doing gestures, users were most happy to gesture near the device (biased towards the side of their dominant hand) and found shorter interactions more acceptable.

They also looked at how spectators perceived these gestures, by opportunistically asking onlookers what they thought of someone who was using gestures nearby. What surprised me was that spectators found around-device gestures acceptable in a wider variety of social situations than the users in the first studies did. Does seeing other people perform gestures make those types of gesture input seem more acceptable?

Tonight I presented my poster [paper link] on our design studies for above-device gesture design. There were some similarities between our work and Ahlstrom’s; purely by coincidence, we both asked users if they were comfortable and willing to use certain gestures. However, we focused on what the gestures were, whereas they focused on other aspects of gesturing (e.g. gesture duration).

In our poster and paper we present design recommendations for creating around-device interactions which users find more usable and more acceptable. I think the next big step for around-device research is looking at how to map potential gestures to actions and identifying ways of making around-device input better. My PhD research focuses on the output side of things, looking at how we can design feedback to help users as they gesture in the space near devices. If you saw my poster tonight or had a chat with me, there’s more about the research here; tonight was fun, so thanks for stopping by!

Mobile HCI ’14: Using Ordinary Surfaces for Interaction

Mobile HCI ’14 day one: a wee bit of Toronto and Henning Pohl’s idea of around-device devices.

Today was the first day of the papers program at Mobile HCI ’14 and amongst the great talks was one I particularly liked on the idea of “around-device devices” by Pohl et al. [link]. I’ve written before about around-device interaction, above-device interaction, and how the space around mobile devices can be used for gesturing. What’s novel about around-device devices, however, is that interaction in the around-device space is no longer limited to free-hand gestures relative to the device. Instead, nearby objects can become inputs to the user interface. One of the motivations for using nearby objects for interaction is that mobile devices are very commonly kept on surfaces – tables, desks, kitchen worktops – which are also used for storing objects. In this post’s title I call these ordinary surfaces to distinguish the idea from interactive surfaces.

The example Henning Pohl gives in the paper title is “my coffee mug is a volume dial”. I think this example captures the idea of around-device devices well: mugs, being cylindrical objects, afford certain interactions; in this case, being turned. There’s implicit physical feedback from interacting with a tangible object, which could make interaction easier. Using nearby objects also provides many of the benefits which around-device gestures give: a larger interaction space, unoccluded content on the device screen, the potential for more expressive input, and so on.

Exploring Toronto during today’s lunch break. Looking out across downtown from the Blue Jays’ stadium.

Another interesting paper from today was about Toffee, by Xiao et al. [link]. Sticking with the around-device interaction theme, they looked at whether piezo sensors could be used to localise taps and knocks on surrounding table surfaces. Like around-device devices, this is another way of making use of nearby ordinary surfaces for input. They found that taps could be localised most reliably when made with harder objects, like styluses or knuckles; softer contact points, like fingertips, were more difficult to localise. Toffee would be ideal for radial input around devices, due to the characteristics of the tap localisation approach.

I like both of these papers because they push the around-device interaction space a little beyond mid-air free-hand gestures, in both cases using ordinary surfaces as part of the interaction. I know this has been done before with interfaces like SideSight and Qian Qin’s Dynamic Ambient Lighting for Mobile Devices, but I think it’s important that others are exploring this space further.

Mobile HCI ’14 Poster

This time next week I’ll be boarding a plane to fly to Toronto for Mobile HCI! I’ll be presenting a poster there on above-device gesture design and I’m also participating in the doctoral consortium. I’ve set up a page to accompany my poster and demonstrate our above-device gestures: see here. My poster is also finished, printed and ready to go!

Mobile HCI ’14 Poster

Above-Device Gestures

Contents

What is Above-Device Interaction?
Our User-Designed Gestures
Design Recommendations
Mobile HCI ’14 Poster

What is Above-Device Interaction?

Gesture interfaces let users interact with technology using hand movements and poses. Unlike touch input, gestures can be performed away from devices, in the larger space around them. This allows users to provide input without reaching out to touch a device or picking it up. We call this type of input above-device interaction, as users gesture over devices placed on a flat surface, like a desk or table. Above-device gestures may be useful when users are unable to touch a device (when their hands are messy, for example) or when touching a device would be less convenient (when wanting to interact quickly from a distance, for example).

Our research focuses on above-device interaction with mobile devices, such as phones. Most research in this area has focused on sensing gesture interactions. Little is known about how to design above-device gestures which are usable and acceptable to users, which is where our research comes in. We ran two studies to look at above-device gesture design further: we gathered gesture ideas from users in a guessability study and then ran an online survey to evaluate some of these gestures further. You can view this survey here.

The outcomes of these studies are a set of evaluated above-device gestures and recommendations for designing good above-device interactions. This work was presented at Mobile HCI ’14 as a poster [1].

 

Our User-Designed Gestures

We selected two gestures for each mobile phone task from our first study. Gestures were selected based on popularity (often called agreement in gesture elicitation studies) and consistency. Rather than selecting based on agreement alone, we wanted gestures which could be combined with other gestures in a coherent way. Agreement alone is not a good way of selecting gestures: our online evaluation found that some of the most popular gestures were not as socially acceptable as their alternatives.
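For anyone unfamiliar with agreement scores: gesture elicitation studies typically use the metric popularised by Wobbrock and colleagues, which measures how strongly participants’ proposals cluster for each task. The sketch below illustrates that general metric with made-up proposals; it is not our selection code and the exact calculation we used may differ.

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one task (referent) in a gesture elicitation study.

    `proposals` is a list of gesture labels, one per participant. The score
    is the sum over identical-proposal groups of (group size / total)^2, so
    it ranges from 1/len(proposals) (no agreement) up to 1.0 (everyone
    proposed the same gesture). This is the standard elicitation metric,
    not necessarily the exact calculation used in our study.
    """
    total = len(proposals)
    return sum((count / total) ** 2 for count in Counter(proposals).values())

# Example: eight hypothetical participants propose gestures for "ignore a call".
print(agreement(['brush away', 'brush away', 'wave', 'brush away',
                 'wave', 'cover phone', 'brush away', 'wave']))  # ~0.41
```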

We now describe our gestures; click on a gesture name to see a video demonstration. See our paper [1] for evaluation results.

Check Messages

Swipe: User swipes quickly over the device. Can be from left-to-right or from right-to-left.
Draw Rectangle: User extends their finger and traces a rectangle over the device. Imitates the envelope icon used for messages.

Select Item

Finger Count: User selects from numbered targets by extending their fingers.
Point and Tap: User points over the item to be selected then makes a selection by “tapping” with their finger.

Note: We also used these gestures in [2] (see here for more information).

Move Left and Right

Swipe: User swipes over the device to the left or right.
Flick: User holds their hand over the device and flicks their whole hand to the left or right.

Note: We did not look at any specific mapping of gesture direction to navigation behaviour. This seems to be a controversial subject. If a user flicks their hand to the left, should the content move left (i.e. navigate right) or should the viewport move left (i.e. navigate left)?

Delete Item

Scrunch: User holds their hand over the device then makes a fist, as though scrunching up a piece of paper.
Draw X: User extends their finger and draws a cross symbol, as though scoring something out.

Place Phone Call

Phone Symbol: User makes a telephone symbol with their hand (like the “hang loose” gesture).
Dial: User extends their finger and draws a circle, as though dialling an old rotary telephone.

Dismiss / Close Item

Brush Away: User gestures over the device as though they were brushing something away.
Wave Hand: User waves back and forth over their device, as though waving goodbye.

Answer Incoming Call

Swipe: As above.
Pick Up: User holds their hand over the device then raises it, as though picking up a telephone.

Ignore Incoming Call

Brush Away: As above.
Wave Hand: As above.

Place Call on Hold

One Moment: User extends their index finger and holds that pose, as though signalling “one moment” to someone.
Lower Hand: User lowers their hand with their fingers fully extended, as though holding something down.

End Current Call

Wave Hand: As above.
Place Down: Opposite of “Pick Up”, described above.

Check Calendar / Query

Thumb Out: User extends their thumb and alternates between thumbs up and thumbs down.
Draw ? Symbol: User extends their finger and traces a question mark symbol over the device.

Accept and Reject

Thumb Up and Down: User makes the “thumb up” or “thumb down” gesture.
Draw Tick and Cross: User extends their finger and draws a tick or a cross symbol over the device.

 

Design Recommendations

Give non-visual feedback during interaction

Feedback during gestures is important because it shows users that the interface is responding to their gestures and it helps them gesture effectively. However, above-device gestures take place over a phone, so the gesturing hand may occlude the screen and visual feedback will not always be visible. Instead, other modalities (like audio or tactile feedback [2]) should be used.

Make non-visual feedback distinct from notifications

Some participants suggested that they may be confused if feedback during gesture interaction felt like the feedback used for other mobile phone notifications. Gesture feedback should therefore be distinct from other notification types. Continuous feedback which responds to input would let users know that the feedback relates to their own actions rather than to a notification.

Emphasise that gestures are directed towards a device

Some participants in our studies were concerned about people thinking they were gesturing at them rather than at a device. Above-device interactions should emphasise the gesture target by using the device as a referent for gestures and letting users gesture in close proximity to it.

Support flexible gesture mechanics

During our guessability study, some participants gestured with whole hand movements whereas others performed the same gestures with one or two fingers. Gestures also varied in size; for example, some participants swiped over a large area and others swiped with subtle movements over the display only. Above-device interfaces should be flexible, letting users gesture in their preferred way using either hand. Social situation may influence gesture mechanics. For example, users in public places may use more subtle versions of gestures than they would at home.

Enable complex gestures with a simple gating gesture

Our participants proposed a variety of gestures, from basic movements with simple sensing requirements to complex hand poses requiring more sophisticated sensors. Always-on sensing with complex sensors will affect battery life. Sensors with low power consumption (like the proximity sensor, for example) could be used to detect a simple gesture which then enables more sophisticated sensors. Holding a hand over the phone or clicking your fingers, for example, could start a depth camera which could then track the hand in greater detail.
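As a rough sketch of how such gating could work (purely illustrative; the sensor polling, thresholds and the depth_camera object are hypothetical), a low-power proximity sensor watches for a sustained hand-over-phone signal before powering up a more detailed hand tracker, which shuts down again after a period of inactivity:

```python
import time

GATE_HOLD_S = 0.5      # hand must cover the proximity sensor this long
IDLE_TIMEOUT_S = 5.0   # power the depth camera down after inactivity

def gated_gesture_loop(proximity_covered, depth_camera):
    """Run detailed gesture sensing only after a simple gating gesture.

    `proximity_covered()` returns True while something is near the low-power
    proximity sensor; `depth_camera` is a hypothetical object with start(),
    stop() and read_hand() methods. Both are assumptions for illustration.
    """
    covered_since = None
    while True:
        if proximity_covered():
            covered_since = covered_since or time.monotonic()
            if time.monotonic() - covered_since >= GATE_HOLD_S:
                # Gating gesture detected: switch on the expensive sensor.
                depth_camera.start()
                last_hand = time.monotonic()
                while time.monotonic() - last_hand < IDLE_TIMEOUT_S:
                    hand = depth_camera.read_hand()
                    if hand is not None:
                        last_hand = time.monotonic()
                        yield hand            # detailed hand data for gestures
                depth_camera.stop()           # idle again: save battery
                covered_since = None
        else:
            covered_since = None
        time.sleep(0.05)                      # low-rate polling while gated
```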

Use simple gestures for casual interactions

Casual interactions (such as checking for notifications) are low-effort and imprecise, so they should be easy to perform and sense. Easily sensed gestures lower the power requirements for input sensing and allow for variance in performance when gesturing imprecisely. Users may also use these gestures more often when around others, so allowing variance lets them gesture discreetly, in an acceptable way.

 

Mobile HCI ’14 Poster

Mobile HCI ’14 Poster

References

[1] Towards Usable and Acceptable Above-Device Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Mobile HCI ’14 Posters, 459-464. 2014.

[2] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.

Acknowledgements

This research was part-funded by Nokia Research Centre, Finland. We would also like to thank everyone who participated in our studies.

Above-Device Tactile Feedback

Introduction

My PhD research looks at improving gesture interaction with small devices, like mobile phones, using multimodal feedback. One of the first things I looked at in my PhD was tactile feedback for above-device interfaces. Above-device interaction is gesture interaction over a device; for example, users can gesture at a phone on a table in front of them to dismiss unwanted interruptions or could gesture over a tablet on the kitchen counter to navigate a recipe. I look at above-device gesture interaction in more detail in my Mobile HCI ’14 poster paper [1], which gives a quick overview of some prior work on above-device interaction.

Tactile Feedback for Above-Device Interaction

In two studies, described in my ICMI ’14 paper [2], we looked at how above-device interfaces could give tactile feedback. Giving tactile feedback during gestures is a challenge because users don’t touch the device they are gesturing at; tactile feedback would go unnoticed unless users were holding the device while they gestured. We looked at ultrasound haptics and distal tactile feedback from wearables. In our studies, users interacted with a mobile phone interface (pictured above) which used a Leap Motion to track two selection gestures.

Gestures

An illustration of the count gesture. The user has extended four fingers on their right hand, thus selecting the fourth target.

Our studies looked at two selection gestures: Count (above) and Point (below). These gestures came from our user-designed gesture study [1]. With Count, users select from numbered targets by extending the appropriate number of fingers. When there are more than five targets, we partition the targets into groups and users select a group by moving their hand. In the image above, the palm position is closest to the bottom half of the screen, so we activate the lower group of targets; if the user moved their hand towards the upper half of the screen, we would activate the upper group of four targets. Users had to hold a Count gesture for 1000 ms to make a selection.
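For readers who want something concrete, here is a minimal sketch of the Count selection logic. It is not the study code (which used the Leap Motion SDK); hypothetical hand-tracking frames supply an extended-finger count and a normalised palm height, the palm height activates a group of targets, and a selection fires once the same target has been held for 1000 ms.

```python
import math
import time

DWELL_SECONDS = 1.0  # hold a Count pose for 1000 ms to make a selection

def count_selection(frames, num_targets):
    """Yield target indices selected with the Count gesture.

    `frames` is an iterable of (extended_fingers, palm_y) tuples where
    palm_y is normalised from 0.0 (bottom of screen) to 1.0 (top); these
    are stand-ins for real hand-tracker output (e.g. a Leap Motion frame).
    """
    # Partition targets into evenly sized groups of at most five,
    # since users can only extend one to five fingers on one hand.
    num_groups = math.ceil(num_targets / 5)
    group_size = math.ceil(num_targets / num_groups)

    held_target, held_since = None, None
    for extended_fingers, palm_y in frames:
        # Palm height activates a group (0 = lower group in this sketch);
        # the finger count picks a target within that group.
        group = min(int(palm_y * num_groups), num_groups - 1)
        target = group * group_size + (extended_fingers - 1)

        if not (1 <= extended_fingers <= group_size) or target >= num_targets:
            held_target, held_since = None, None   # no valid Count pose
            continue

        now = time.monotonic()
        if target != held_target:
            held_target, held_since = target, now  # start the dwell timer
        elif now - held_since >= DWELL_SECONDS:
            yield target                           # pose held for 1000 ms
            held_target, held_since = None, None
```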

Illustration of the point gesture. A hand with an extended index finger is selecting one of the on-screen targets, using a circular cursor.

With Point, users controlled a cursor which was mapped to their finger position relative to the device. We used the space beside the device to avoid occluding the screen while gesturing. Users made selections by dwelling the cursor over a target for 1000 ms.
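The cursor mapping for Point can be sketched just as simply: a clamped linear mapping from a tracking region beside the phone onto screen coordinates. The region size, offset and screen resolution below are illustrative assumptions rather than values from the study.

```python
def point_to_cursor(finger_x_mm, finger_z_mm,
                    region_origin=(60.0, 0.0), region_size=(60.0, 110.0),
                    screen_size=(480, 800)):
    """Map a fingertip position beside the device onto screen coordinates.

    Positions are in millimetres in the tracker's coordinate frame, with the
    phone at the origin; the tracking region sits to the side of the phone
    so the gesturing hand never occludes the screen. All of these numbers
    are illustrative assumptions, not values from the paper.
    """
    ox, oz = region_origin
    width, depth = region_size
    # Normalise the fingertip position within the region and clamp to its edges.
    nx = min(max((finger_x_mm - ox) / width, 0.0), 1.0)
    nz = min(max((finger_z_mm - oz) / depth, 0.0), 1.0)
    return int(nx * (screen_size[0] - 1)), int(nz * (screen_size[1] - 1))
```

Dwell selection then works as in the Count sketch above: a target is selected once the cursor has remained over it for 1000 ms.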

For a video demo of these gestures, see:

Tactile Feedback

In our first study we looked at different ways of giving tactile feedback. We compared feedback directly from the device when held, ultrasound haptics (using an array of ultrasound transducers, below) and distal feedback from wearable accessories. We used two wearable tactile feedback prototypes: a “watch” and a “ring” (vibrotactile actuators affixed to a watch strap and an adjustable velcro ring). We found that all were effective for giving feedback, although participants had divided preferences.

A photograph of an ultrasound haptics device.

Some preferred feedback directly from the phone because it was familiar, although holding the phone is an unlikely case in above-device interaction: an advantage of this interaction modality is that users don’t need to first lift the phone or reach out to touch it. Some participants liked feedback from our ring prototype because it was close to the point of interaction (when using Point), and others preferred feedback from the watch (pictured below) because it was a more acceptable accessory than a vibrotactile ring. An advantage of ultrasound haptics is that users do not need to wear any accessories, which participants appreciated, although the feedback was less noticeable than vibrotactile feedback. This was partly because of the small ultrasound array used (similar in size to a mobile phone) and partly because of the nature of ultrasound haptics.

Tactile Watch Prototype

In a second study we focused on feedback given on the wrist using our watch prototype. We were interested in how tactile feedback affected interaction with our Point and Count gestures. We looked at three tactile feedback designs in addition to visual feedback alone. Tactile feedback had no impact on performance (possibly because selection was too easy), although it had a significant positive effect on workload: workload (measured using NASA-TLX) was significantly lower when dynamic tactile feedback was given. Users also preferred receiving tactile feedback to receiving none.

A more detailed qualitative analysis and the results of both studies appear in our ICMI 2014 paper [2]. A position paper [3] from the CHI 2016 workshop on mid-air haptics and displays describes this work in the broader context of research towards more usable mid-air widgets.

Tactile Feedback Source Code

A Pure Data patch for generating our tactile feedback designs is available here.
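If Pure Data isn’t your thing, the general approach translates readily to code: vibrotactile actuators of this kind are typically driven through an audio amplifier, so a feedback design is essentially a short sine burst near the actuator’s resonant frequency (often around 200–250 Hz for voice-coil actuators) shaped by an amplitude envelope. The sketch below writes such a burst to a WAV file; the frequency, duration and envelope are illustrative defaults, not the parameters of our actual designs.

```python
import math
import struct
import wave

def tactile_burst(filename, freq_hz=250.0, duration_s=0.1,
                  sample_rate=44100, amplitude=0.8):
    """Write a short enveloped sine burst to a WAV file.

    The burst can be played through an audio amplifier driving a
    vibrotactile actuator. Parameters here are illustrative defaults,
    not the exact values used in our feedback designs.
    """
    n_samples = int(duration_s * sample_rate)
    frames = bytearray()
    for i in range(n_samples):
        t = i / sample_rate
        # A raised-cosine (Hann) envelope avoids clicks at the start
        # and end of the burst.
        envelope = 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n_samples - 1)))
        sample = amplitude * envelope * math.sin(2.0 * math.pi * freq_hz * t)
        frames += struct.pack('<h', int(sample * 32767))

    with wave.open(filename, 'wb') as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(bytes(frames))

tactile_burst('tactile_burst.wav')
```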

References

[1] Towards Usable and Acceptable Above-Device Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Mobile HCI ’14 Posters, 459-464. 2014.

[2] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.

[3] Towards Mid-Air Haptic Widgets
E. Freeman, D. Vo, G. Wilson, G. Shakeri, and S. Brewster.
In CHI 2016 Workshop on Mid-Air Haptics and Displays: Systems for Un-instrumented Mid-Air Interactions. 2016.

ICMI ’14 Paper Accepted

My full paper, “Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions”, was accepted to ICMI 2014. It was also accepted for oral presentation rather than poster presentation, so I’m looking forward to that!

Tactile Feedback for Above-Device Interaction.

In this paper we looked at tactile feedback for above-device interaction with a mobile phone. We compared direct tactile feedback to distal tactile feedback from wearables (rings, smartwatches) and to ultrasound haptic feedback. We also looked at different feedback designs and investigated the impact of tactile feedback on performance, workload and preference.

Array of Ultrasound Transducers for Ultrasound Haptic Feedback.

We found that tactile feedback had no impact on input performance but did significantly reduce workload, making interaction easier. Users also significantly preferred tactile feedback to no tactile feedback. More details are in the paper [1], along with design recommendations for above- and around-device interface designers. I’ve written a bit more about this project here.

Video

The following video (including an awful typo in the last scene!) shows the two gestures we used in these studies.

References

[1] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.