Addressing In-Air Gesture Systems

Introduction

Users must address a system in order to interact with it: this means discovering where and how to direct input towards the system. Sometimes this is a simple problem. For example: when using a touchscreen for input, users know to reach out and touch (how) the screen (where); when using a keyboard for input, they know to press (how) the keys (where). However, sometimes addressing a system can be complicated. When using mid-air gestures, it is not always clear where users should perform gestures, nor how they should direct them towards the system. My PhD thesis [1] looked at the problem of addressing in-air gesture systems and investigated interaction techniques that help users do this.

Finding where to perform gestures

Users need to know where to perform gestures, so that their actions can be sensed by the input device. If users’ movements cannot be sensed, then their actions will have no effect on the system. Some gesture systems help users find where to gesture by showing them what the input device can see. For example, the images below show feedback that reveals what the input device can “see”. If users start gesturing outside of the sensor range, they know to move so that the input device can see them properly.


Not all systems are able to give this type of feedback. For example, if a system has no screen or only has a small display, then it may not be able to give such detailed feedback. My thesis investigated an interaction technique (sensor strength) that uses light, sound and vibration to help users find where to gesture. This means that systems do not need to show feedback on a screen. Sensor strength tells users how close their hand is to a “sweet spot” in the sensor space, which is a location where they can be seen easily by the sensors. If users are close to the sweet spot, they are more likely to be sensed correctly. If they are not close to it, their gestures may not be detected by the input sensors. My thesis [1] and my CHI 2016 paper [2] describe an evaluation of this technique. The results show that users could find the sweet spot with 51-80 mm accuracy.
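The core idea can be sketched in a few lines of Python. This is a minimal illustration, not the thesis implementation: the function name, the sensing range and the linear distance-to-intensity mapping are all assumptions made for the example.

```python
import math

def sensor_strength(hand_pos, sweet_spot, max_range=300.0):
    """Map the hand's distance from the sensor 'sweet spot' (in mm)
    to a feedback intensity in [0, 1]: 1.0 at the sweet spot,
    falling to 0.0 at the edge of the assumed sensing range."""
    dx, dy, dz = (h - s for h, s in zip(hand_pos, sweet_spot))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return max(0.0, 1.0 - distance / max_range)

# The intensity could then drive light brightness, sound volume or
# vibration strength, letting users home in on the sweet spot
# without needing a screen.
print(sensor_strength((0, 0, 0), (0, 0, 0)))    # at the sweet spot -> 1.0
print(sensor_strength((0, 0, 150), (0, 0, 0)))  # halfway out -> 0.5
```

In practice the mapping need not be linear; the point is only that a single scalar "strength" value is simple enough to present through non-visual channels.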

Discovering how to direct input

Users have to discover how to direct their gestures towards the system they want to interact with. This might not be necessary if there is only one gesture-sensing device in the room. However, in future there may be many devices which could all be detecting gestures at the same time; the image below illustrates this type of situation. Already, our mobile phones, televisions, game systems and laptops have the ability to sense gestures, so this problem may be closer than expected.


If gestures are being sensed by many devices at the same time, users may affect one (or more) systems unintentionally. This is called the Midas Touch problem. Other researchers have developed techniques designed to overcome the Midas Touch problem, although these are not practical if there is more than one gesture system at a time. My thesis investigated an alternative technique, called rhythmic gestures, which allows users to direct their input towards the one system they wish to interact with. This can be used by many systems at once with little interference.

Rhythmic gestures

Rhythmic gestures are simple hand movements that are repeated in time with an animation, shown to the user on a visual display. These gestures consist of a movement and an interval: for example, a user may move their hand repeatedly from side to side, every 500ms, or they may move their hand up and down, every 750ms. The image below shows how an interactive light display could be used to show users a side-to-side gesture movement. The animations could also be shown on the screen, if necessary. Users can perform a rhythmic gesture by following the animation, in time, with their hand.
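As a rough sketch of how a system might check for a rhythmic gesture, the snippet below compares the timing of repeated hand-movement reversals against a target interval. This is illustrative only; the detector used in the thesis is more robust, and the function name and tolerance value are assumptions.

```python
def matches_rhythm(reversal_times_ms, target_interval_ms, tolerance_ms=100):
    """Check whether a sequence of direction-reversal timestamps (ms) --
    e.g. the turning points of a side-to-side hand movement --
    repeats at the target interval, within a tolerance."""
    if len(reversal_times_ms) < 3:
        return False  # need at least two full intervals to establish a rhythm
    intervals = [b - a for a, b in zip(reversal_times_ms, reversal_times_ms[1:])]
    return all(abs(i - target_interval_ms) <= tolerance_ms for i in intervals)

# A hand moving side to side roughly every 500 ms:
print(matches_rhythm([0, 510, 990, 1505], 500))  # True
print(matches_rhythm([0, 300, 600, 900], 500))   # False: too fast
```

Because each system only needs to recognise its own interval, this kind of check also suggests how several systems could listen at once without interfering with each other.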

Rhythmic gesture display

This interaction technique can be used to direct input. If every gesture-sensing system looks for a different rhythmic gesture, then users can use that gesture to show which system they want to interact with. This overcomes the Midas Touch problem, as it informs the other systems that input is not intended for them. My thesis [1] and CHI 2016 paper [2] describe evaluations of this interaction technique. I found that users could successfully perform rhythmic gestures, even without feedback about their hand movements.

Rhythmic micro-gestures

I extended the rhythmic gesture concept to use micro-gesture movements, very small movements of the hand. For example, tapping the thumb off the side of the hand or opening and closing all fingers at once. These could be especially useful for interacting discreetly with mobile devices while in public or on-the-go. My ICMI 2017 paper [3] describes a user study into their performance, finding that users could use them successfully with just audio feedback to convey the rhythm.

Videos

A video demonstration of the interaction techniques described here:

The 30-second preview video for the CHI 2016 paper:

Summary

To address an in-air gesture system, users need to know where to perform gestures and they need to know how to direct their input towards the system. My research investigated two techniques (sensor strength and rhythmic gestures) that help users solve these problems. I evaluated the techniques individually and found them successful. I also combined them, creating a single technique which shows users how to direct their input while also helping them find where to gesture. Evaluation of the combined technique found that performance was also good, with users rating the interaction as easy to use. Together, these techniques could be used to help users address in-air gesture systems.

References

[1] Interaction techniques with novel multimodal feedback for addressing gesture-sensing systems
E. Freeman.
PhD Thesis, University of Glasgow. 2016.


@phdthesis{Thesis,
  author = {Freeman, Euan},
  title = {{Interaction techniques with novel multimodal feedback for addressing gesture-sensing systems}},
  school = {University of Glasgow},
  year = {2016},
  month = {3},
  url = {http://theses.gla.ac.uk/7140/},
  pdf = {http://theses.gla.ac.uk/7140/},
}

[2] Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems – CHI ’16, pp. 2319-2331. 2016.


@inproceedings{CHI2016,
  author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
  title = {{Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems}},
  booktitle = {{Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems - CHI '16}},
  year = {2016},
  publisher = {ACM Press},
  pages = {2319--2331},
  doi = {10.1145/2858036.2858308},
  pdf = {http://research.euanfreeman.co.uk/papers/CHI_2016.pdf},
  url = {http://euanfreeman.co.uk/gestures/},
  video = {https://www.youtube.com/watch?v=6_hGbI_SdQ4},
}

[3] Rhythmic Micro-Gestures: Discreet Interaction On-the-Go
E. Freeman, G. Griffiths, and S. Brewster.
In Proceedings of the 19th ACM International Conference on Multimodal Interaction – ICMI ’17, to appear. 2017.


@inproceedings{ICMI2017,
  author = {Freeman, Euan and Griffiths, Gareth and Brewster, Stephen},
  title = {{Rhythmic Micro-Gestures: Discreet Interaction On-the-Go}},
  booktitle = {{Proceedings of the 19th ACM International Conference on Multimodal Interaction - ICMI '17}},
  year = {2017},
  publisher = {ACM Press},
  pages = {to appear},
  doi = {10.1145/3136755.3136815},
  pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2017.pdf},
}

CHI 2016, ABBI, and other things

My CHI 2016 submission, “Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems”, has been conditionally accepted! The paper covers the final three studies in my PhD, where I developed and evaluated a technique for addressing in-air gesture systems.

To address a gesture system is to direct input towards it; this involves finding where to perform gestures and how to specify the system you intend to interact with (so that other systems do not act upon your gestures). Do That, There (a play on one of HCI’s most famous gesture papers, Put That There) enables both of these things: it shows you where to perform gestures using multimodal feedback (there), and it shows you how to identify the system you want to gesture at (do that).

Three months ago I started working on the ABBI (Audio Bracelet for Blind Interaction) project as a post-doctoral researcher. The ABBI project is developing wearable technology for blind and visually impaired children. Our role at Glasgow is to investigate sound design and novel interactions which use the technology, focusing on helping visually impaired kids. Recently, we’ve presented our research and ideas to the RNIB TechShare conference and to members of SAVIE, an association focusing on the education of visually impaired children.

Finally, I submitted my PhD thesis in September although I’m still waiting for my final examination. Unfortunately it’s not going to be happening in 2015 but I’m looking forward to getting that wrapped up soon.

Interactive Light Demo at Interact ’15

This week I’ve been in Bamberg, Germany, presenting a poster and an interactive demo at Interact 2015. If you’ve stumbled across this website via my poster, or if you tried my demo at the conference, then it was nice meeting you and I hope you had some fun with it! If you’re looking for more information about the research then I’ve written a little about it here: http://euanfreeman.co.uk/interactive-light-feedback/

If you’re interested in learning more about in-air gestures or want to know what my demo was about, I’ve written about gestures and their usability problems here: http://euanfreeman.co.uk/gestures/

For some earlier research, where we looked at using tactile feedback for in-air gestures, see: http://euanfreeman.co.uk/projects/above-device-tactile-feedback/

If you have any other questions, my email address is on the left.

Gestures

Hand gestures. Photo by Charles Haynes: CC BY-SA.

I want to make gesture interaction – interacting with computers through hand movements in mid-air – easier and more enjoyable to use. My PhD research focuses on improving gesture interaction with small devices like phones and wearable computers, although the problems I deal with are not unique to these types of device. I’ve written a lot about gestures and problems with gesture interaction, so this page attempts to bring that information together to give an overview of why gestures are difficult and how we might make them better.

Addressing Gesture Systems

When users want to interact with an in-air gesture system, they must first address it. This involves finding where to perform gestures, so that they can be sensed, and finding out how to direct input towards only the system they want to interact with, so that other systems do not act upon their movements as well. During my PhD, I developed and evaluated interaction techniques for addressing in-air gesture systems. You can read more about this here.

Above- and Around-Device Interaction

My research often looks at gestures in close proximity to devices, which is often above (for example, gesturing over a phone on a table) or around (for example, gesturing behind a device you are holding with your other hand) those devices. I give an introduction to around-device interaction here, present research and guidelines for above-device interaction with phones here, and discuss our work on above-device tactile feedback here. I also explain why we would want to use these types of gestures here.

Gestures Are Not “Natural”

In this post I outline three gesture interaction problems (the Midas Touch problem, the address problem and the sensing problem) and what implications these have for gesture interaction. In short, we should not think of gestures as being “natural” because there are many practical issues we must overcome to make them usable.

Novel Gesture Feedback

My PhD research looks at how we can move feedback about gestures off the screen and into the space around devices instead. I’ve written about tactile feedback for gestures here. I’ve also written about interactive light feedback, a novel type of display, for gestures here.

Gestures In Multimodal Interaction

Here I talk about two papers from 2014 where gestures are considered as part of multimodal interactions. While this idea was notably demonstrated in the 1980s, it still hasn’t reached mainstream computing. Perhaps this is about to change with new technologies.

Gestures With Touch

I’ve always seen gestures as an alternative interaction technique, available when others like speech or touch are unavailable or less convenient. For example, gestures could be used to browse recipes without touching your tablet and getting it messy, or could be used for short ‘micro-interactions’ where gestures from a distance are better than approaching and touching something.

Lately, two papers at UIST ’14 looked at using gestures alongside touch, rather than instead of it. I really like this idea and I’m going to give a short overview of those papers here. Combining hand gestures with other interaction techniques isn’t new, though: an early and notable example from 1980 was Put That There, where users interacted using voice and gesture together.

In Air+Touch, Chen and others look at how fingers may move and gesture over touchscreens while also providing touch input. They grouped interactions into three types: gestures happening before touch, gestures happening between touches and gestures happening after touch. They also identified various finger movements which can be used over touchscreens but which are distinct from incidental movements. These include circular paths, sharp turns and jumps into a higher than normal space over the screen. In Air+Touch, users gestured and touched with one finger. This lets users provide more expressive input than touch alone provides.

In contrast to this one-handed input is bimanual input (here meaning two hands, rather than two input modalities), which Song and others looked at. They focused on gestures in the wider space around the device, using the non-touching hand for gestures. As users interacted with mobile devices using touch with one hand, the other hand could gesture nearby to access other functionality. For example, they describe how users may browse maps with touch, while using gestures to zoom in or out of the map.

While each of these papers takes a different approach to combining touch and gesture, both have some similarities. Touch can be used to help segment input. Rather than detecting gestures at all times, interfaces can just look for gestures which occur around touch events; touch is implicitly used as a clutch mechanism. Clutching helps avoid accidental input and saves power, as gesture sensing doesn’t need to happen all the time.
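The clutching idea can be sketched as follows. This is a hypothetical illustration, not the implementation from either paper; the function name, frame representation and window size are assumptions.

```python
def segment_gestures(gesture_frames, touch_times_ms, window_ms=1000):
    """Keep only the gesture frames that occur within a time window
    around a touch event; everything else is treated as incidental
    movement. Each frame is a (timestamp_ms, data) pair."""
    return [
        (t, data) for t, data in gesture_frames
        if any(abs(t - touch) <= window_ms for touch in touch_times_ms)
    ]

frames = [(100, 'a'), (1500, 'b'), (5000, 'c')]
# A touch at t=1000 ms keeps the nearby frames and drops the stray one:
print(segment_gestures(frames, touch_times_ms=[1000]))
```

A real system would do this incrementally rather than in batch, but the principle is the same: touch events gate which movements are even considered as gestures.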

Both also demonstrate using gestures for easier context switching and secondary tasks. Users may gesture with their other hand to switch map mode while browsing, or may lift their finger between swipes to change map mode. Gestures are mostly used for discrete secondary input rather than as continuous primary input, although continuous input is certainly possible. There are similarities between these concepts and AD-Binning from Hasan and others. They used around-device gestures for accessing content, while interacting with that content using touch with their other hand.


Gestures Are Not “Natural”

I’m sitting in Helsinki Airport on my way home from NordiCHI. It’s been a great conference and I’ve had a lot of fun exploring Helsinki too. Despite a very grey start to the week, the sun eventually came out and illuminated the bright colours of Helsinki’s beautiful architecture.

In this post I’m going to explain why I think gesture interaction is not “natural” or “intuitive”. A few talks this week justified using gestures because they were “natural” and I think that’s not really true. There are many practical realities that mean this isn’t the case, and so long as people keep thinking of gestures as being “natural”, we won’t overcome those issues. None of what I’m saying is new; we’ve known it for years. Heck, Don Norman said the same thing (about “natural user interfaces”) and made many of the same points. This post is inspired by a discussion over coffee at NordiCHI!

Why Gesture Interaction Isn’t “Natural”

The Midas Touch Problem

In gesture interaction, the Midas Touch problem is that any sensed movements may be treated as input. Since gestures are “natural” movements which we perform in everyday life, that means that everyday movements may be treated as gestures! Obviously this is undesirable. If I’m being sensed by one or more interfaces in the surrounding environment then I don’t want hand movements I use in conversation, for example, to be treated as input to some interface.

Many solutions exist for addressing the Midas Touch problem, including clutch actions. A clutch action is one which begins or ends part of an interaction. A familiar example from speech input is saying “OK Google” to activate voice input for Google Now or Google Glass. In gesture interaction, a clutch may be a particular gesture (often called an activation gesture) or body pose (like the Teapot gesture in StrikeAPose). Other alternatives include activation zones or using some other input modality as a clutch, like pressing a button or using a voice command.

Regardless of how you address the Midas Touch problem, you’re moving further away from something people do “naturally”: users must perform some action specific to interacting with the system.

The Address Problem

In their Making Sense of Sensing Systems paper, Bellotti et al. (2002) described the problem of how to address a sensing system. Users need to be able to direct their input towards the system they intend to interact with; that is, they must be able to address the desired interface. This is more of a problem in environments with more than one sensing interface. Given that the HCI community is using gestures in more and more contexts, it’s reasonable to assume that we’ll eventually have many gesture interfaces in our environments. We need to be able to address interfaces to avoid our movements accidentally affecting others (another variant of the Midas Touch problem).

In conversation and other human interactions, we typically address each other using cues such as body language or by making eye contact. This isn’t necessarily possible in interaction, as detecting intention to interact implicitly is challenging. Instead we’ll need more explicit interaction techniques to help users address interfaces. As with the Midas Touch problem, this is not “natural”.

The Sensing Problem

Unlike the gestures we make (unintentionally or intentionally) in everyday life, gestures for gesture input need to meet certain conditions. Depending on the sensing method, users have to perform a gesture that will be understood by the system. For example, if users must move their hand across their body from right to left then this movement must be done in a way that can be recognised. This may involve directly facing a gesture sensor, slowing the movement down, moving in a perfectly horizontal line or exaggerating aspects of the gesture. In trying to make gestures understood by sensors, users perform more rigid and forced movements. Again, this is not “natural”.

Implications for HCI

Although there are more reasons that gesture interaction should not be considered a “natural” or “intuitive” input modality, I think these are the three most important ones. All result in users performing hand or body movements which are very specific to interaction; much like how we speak to computers differently than we speak to other people. I think speech input is another modality which is considered “natural” but which suffers similar problems.

I’m not sure if we’ll ever solve these problems from the computer side of gesture interaction. It would be nice, but it’s asking for a lot. Instead, we should embrace the fact that gestures are not “natural” and do what we’re good at in HCI: finding solutions for overcoming problems with technology. We need to design interaction techniques which acknowledge the unnatural aspects of gesture input in order for gestures to be usable outside of our lab studies and in an intelligent world filled with sensing interfaces and devices.

Mobile HCI ’14: Why would I use around-device gestures?

Toronto is a fantastic city, which has made this conference so enjoyable.

At the Mobile HCI poster session I had some fantastic discussions with some great people. There’s been a lot of around-device interaction research presented at the conference this week and a lot of people who I spoke to when presenting my poster asked: why would I want to do this?

That’s a very important question and the reason it gets asked can maybe give some insight into when around-device gestures may and may not be useful. A lot of people said that if they were already holding their phone, they would just use the touchscreen to provide input. Others said they would raise the device to their mouth for speech input or would even use the device itself for performing a gesture (e.g. shaking it).

In our poster and its accompanying paper, we focused on above-device gestures. We focused on a particular area of the around-device space – directly over the device – as we think this is where users are most likely to benefit from using gestures. People typically keep their phones on flat surfaces – Pohl et al. found this in their around-device device paper [link], Wiese et al. [link] found it in their CHI ’13 study, and Dey et al. [link] found the same three years earlier. As such, gestures are very likely to be used over a phone.

Enjoying some local pilsner to wrap up the conference!

So, why would we want to gesture over our phones? My favourite example, and one which really seems to resonate with people, is using gestures to read recipes while cooking in the kitchen. Wet and messy hands, the risks of food contamination, the need for multitasking – these are all inherent parts of preparing food which can motivate using gestures to interact with mobile devices. Gestures would let me move through recipes on my phone while cooking, without having to first wash my hands. Gestures would let me answer calls while I multitask in the kitchen, without having to stop what I’m doing. Gestures would let me dismiss interruptions while I wash the dishes afterwards, without having to dry my hands.

This is just one scenario where we envisage above-device gestures being useful. Gestures are attractive for a variety of reasons in this context: touch input is inconvenient (I need to wash my hands first); touch input requires more engagement (I need to stop what I’m doing to focus); and touch input is unavailable (I need to dry my hands). I think the answer to why we would want to use these gestures is that they let us interact when other input is inconvenient. Our phones are nearby on surfaces so let’s interact with them while they’re there.

In summary, our work focuses on gestures above the device as this is where we see them being most commonly used. There are many reasons people would want to use around-device gestures but we think the most compelling ones motivate using above-device gestures.

Mobile HCI ’14: “Are you comfortable doing that?”

OCAD University, who are one of the Mobile HCI '14 hosts, have some fantastic architecture on campus.

One of my favourite talks from the third day of Mobile HCI ’14 was Ahlstrom et al.’s paper on the social acceptability of around-device gestures [link]. In short: they asked users if they were comfortable doing around-device gestures. I think this is a timely topic because we’re now seeing around-device interfaces added to commercial smartphones. Samsung’s Galaxy S4 had hover gestures over the display and Google’s Project Tango added depth sensors to the smartphone form factor. I feel that now we’ve established ways of detecting around-device gestures, it’s time to look at what around-device gestures should be and whether users are willing to use them.

In Ahlstrom’s paper, which was presented excellently by Pourang Irani, they did three studies looking at different aspects of the social acceptability of around-device gestures. They looked mainly at aspects of gesture mechanics: gesture size, gesture duration, position relative to device, distance from the device. When asking users if they were comfortable doing gestures, they found that users were most happy to gesture near the device (biased towards the side of their dominant hand) and found shorter interactions more acceptable.

They also looked at how spectators perceived these gestures, by opportunistically asking onlookers what they thought of someone who was using gestures nearby. What surprised me was that spectators found around-device gestures more acceptable in a wider variety of social situations than the users from the first studies. Does seeing other people perform gestures make those types of gesture input seem more acceptable?

Tonight I presented my poster [paper link] on our design studies for above-device gesture design. There were some similarities between our work and Ahlstrom’s; purely by coincidence, we both asked users if they were comfortable and willing to use certain gestures. However, we focused on what the gestures were, whereas they focused on other aspects of gesturing (e.g. gesture duration).

In our poster and paper we present design recommendations for creating around-device interactions which users think are more usable and more acceptable. I think the next big step for around-device research is looking at how to map potential gestures to actions and identifying ways of making around-device input better. My PhD research is focusing on the output side of things, looking at how we can design feedback to help users as they gesture using the space near devices. If you saw my poster tonight or had a chat with me, there’s more about the research in our poster here; tonight was fun so thanks for stopping by!

Above-Device Gestures

Contents

What is Above-Device Interaction?
Our User-Designed Gestures
Design Recommendations
Mobile HCI ’14 Poster

What is Above-Device Interaction?

Gesture interfaces let users interact with technology using hand movements and poses. Unlike touch input, gestures can be performed away from devices in the larger space around them. This allows users to provide input without reaching out to touch a device or without picking it up. We call this type of input above-device interaction, as users gesture over devices which are placed on a flat surface, like a desk or table. Above-device gestures may be useful when users are unable to touch a device (when their hands are messy, for example) or when touching a device would be less convenient (when wanting to interact quickly from a distance, for example).

Our research focuses on above-device interaction with mobile devices, such as phones. Most research in this area has focused on sensing gesture interactions. Little is known about how to design above-device gestures which are usable and acceptable to users, which is where our research comes in. We ran two studies to look at above-device gesture design further: we gathered gesture ideas from users in a guessability study and then ran an online survey to evaluate some of these gestures further. You can view this survey here.

The outcomes of these studies are a set of evaluated above-device gestures and design recommendations for designing good above-device interactions. This work was presented at Mobile HCI ’14 as a poster [1].


Our User-Designed Gestures

We selected two gestures for each mobile phone task from our first study. Gestures were selected based on popularity (termed “agreement” by other researchers) and consistency. Rather than selecting based on agreement alone, we wanted gestures which could be combined with other gestures in a coherent way. Agreement alone is not a good way of selecting gestures: our online evaluation actually found that some of the most popular gestures were not as socially acceptable as their alternatives.

We now describe our gestures and link to videos describing them. See our paper [1] for evaluation results. Click on the gesture names to see a video demonstration.

Check Messages

Swipe: User swipes quickly over the device. Can be from left-to-right or from right-to-left.
Draw Rectangle: User extends their finger and traces a rectangle over the device. Imitates the envelope icon used for messages.

Select Item

Finger Count: User selects from numbered targets by extending their fingers.
Point and Tap: User points over the item to be selected then makes a selection by “tapping” with their finger.

Note: We also used these gestures in [2] (see here for more information).

Move Left and Right

Swipe: User swipes over the device to the left or right.
Flick: User holds their hand over the device and flicks their whole hand to the left or right.

Note: We did not look at any specific mapping of gesture direction to navigation behaviour. This seems to be a controversial subject. If a user flicks their hand to the left, should the content move left (i.e. navigate right) or should the viewport move left (i.e. navigate left)?

Delete Item

Scrunch: User holds their hand over the device then makes a fist, as though scrunching up a piece of paper.
Draw X: User extends their finger and draws a cross symbol, as though scoring something out.

Place Phone Call

Phone Symbol: User makes a telephone symbol with their hand (like the “hang loose” gesture).
Dial: User extends their finger and draws a circle, as though dialling an old rotary telephone.

Dismiss / Close Item

Brush Away: User gestures over the device as though they were brushing something away.
Wave Hand: User waves back and forth over their device, as though waving goodbye.

Answer Incoming Call

Swipe: As above.
Pick Up: User holds their hand over the device then raises it, as though picking up a telephone.

Ignore Incoming Call

Brush Away: As above.
Wave Hand: As above.

Place Call on Hold

One Moment: User extends their index finger and holds that pose, as though signalling “one moment” to someone.
Lower Hand: User lowers their hand with their fingers fully extended, as though holding something down.

End Current Call

Wave Hand: As above.
Place Down: Opposite of “Pick Up”, described above.

Check Calendar / Query

Thumb Out: User extends their thumb and alternates between thumbs up and thumbs down.
Draw ? Symbol: User extends their finger and traces a question mark symbol over the device.

Accept and Reject

Thumb Up and Down: User makes the “thumb up” or “thumb down” gesture.
Draw Tick and Cross: User extends their finger and draws a tick or a cross symbol over the device.


Design Recommendations

Give non-visual feedback during interaction

Feedback during gestures is important because it shows users that the interface is responding to their gestures and it helps them gesture effectively. However, above-device gestures take place over a phone so visual feedback will not always be visible. Instead, other modalities (like audio or tactile [2]) should be used.

Make non-visual feedback distinct from notifications

Some participants suggested that they might confuse gesture feedback with feedback used for other mobile phone notifications. Gesture feedback should therefore be distinct from other notification types. Continuous feedback that responds to input would make it clear that the feedback relates to the user's own actions.

Emphasise that gestures are directed towards a device

Some participants in our studies were concerned that bystanders would think they were gesturing at them, rather than at a device. Above-device interactions should emphasise the target of a gesture by using the device as a referent and by letting users gesture in close proximity to it.

Support flexible gesture mechanics

During our guessability study, some participants gestured with whole hand movements whereas others performed the same gestures with one or two fingers. Gestures also varied in size; for example, some participants swiped over a large area and others swiped with subtle movements over the display only. Above-device interfaces should be flexible, letting users gesture in their preferred way using either hand. Social situation may influence gesture mechanics. For example, users in public places may use more subtle versions of gestures than they would at home.

Enable complex gestures with a simple gating gesture

Our participants proposed a variety of gestures, from basic movements with simple sensing requirements to complex hand poses requiring more sophisticated sensors. Always-on sensing with complex sensors would drain the battery. Sensors with low power consumption (like the proximity sensor, for example) could instead be used to detect a simple gating gesture, which then enables the more sophisticated sensors. Holding a hand over the phone or clicking the fingers, for example, could start a depth camera which then tracks the hand in greater detail.
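As a sketch of this gating idea (the class, names, and the 500 ms threshold are illustrative assumptions, not from our studies), a small state machine could keep rich sensing off until a low-power proximity sensor reports a sustained hover:

```python
GATE_HOLD_MS = 500  # assumed hover duration needed to "open the gate"

class GestureGate:
    """Keep power-hungry sensing off until a simple gating gesture --
    a sustained hover detected by a low-power proximity sensor -- occurs."""

    def __init__(self, hold_ms=GATE_HOLD_MS):
        self.hold_ms = hold_ms
        self.hover_start = None    # when the hand first appeared
        self.rich_sensing = False  # whether the depth camera (etc.) is enabled

    def update(self, hand_present, now_ms):
        """Feed proximity readings each frame; returns True while rich sensing is on."""
        if not hand_present:
            # Hand gone: drop back to low-power sensing only.
            self.hover_start = None
            self.rich_sensing = False
        elif self.hover_start is None:
            self.hover_start = now_ms
        elif now_ms - self.hover_start >= self.hold_ms:
            self.rich_sensing = True
        return self.rich_sensing
```

A real implementation would start the depth camera on the False-to-True transition and stop it again on the True-to-False transition.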

Use simple gestures for casual interactions

Casual interactions (such as checking for notifications) are low-effort and imprecise, so they should be easy to perform and easy to sense. Easily sensed gestures lower the power requirements for input sensing and allow for variance in performance when gesturing imprecisely. Users may also use these gestures more often when around others, so allowing variance lets users gesture discreetly, in an acceptable way.

 

Mobile HCI ’14 Poster


References

[1] Towards Usable and Acceptable Above-Device Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Mobile HCI ’14 Posters, pp. 459-464. 2014.

[2] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, pp. 419-426. 2014.

Acknowledgements

This research was part funded by Nokia Research Centre, Finland. We would also like to thank everyone who participated in our studies.

Above-Device Tactile Feedback

Introduction

My PhD research looks at improving gesture interaction with small devices, like mobile phones, using multimodal feedback. One of the first things I looked at in my PhD was tactile feedback for above-device interfaces. Above-device interaction is gesture interaction over a device; for example, users can gesture at a phone on a table in front of them to dismiss unwanted interruptions or could gesture over a tablet on the kitchen counter to navigate a recipe. I look at above-device gesture interaction in more detail in my Mobile HCI ’14 poster paper [1], which gives a quick overview of some prior work on above-device interaction.

Tactile Feedback for Above-Device Interaction

In two studies, described in my ICMI ’14 paper [2], we looked at how above-device interfaces could give tactile feedback. Giving tactile feedback during gestures is a challenge because users don’t touch the device they are gesturing at; tactile feedback would go unnoticed unless users were holding the device while they gestured. We looked at ultrasound haptics and distal tactile feedback from wearables. In our studies, users interacted with a mobile phone interface (pictured above) which used a Leap Motion to track two selection gestures.

Gestures

Count gesture

Our studies looked at two selection gestures: Count (above) and Point (below). These gestures were from our user-designed gesture study [1]. With Count, users select from numbered targets by extending the appropriate number of fingers. When there are more than five targets, we partition the targets into groups. Users can select from a group by moving their hand. In the image above, the palm position is closest to the bottom half of the screen, so we activate the lower group of targets. If users moved their hands towards the upper half of the screen, we would activate the upper group of four targets. Users had to hold a Count gesture for 1000 ms to make a selection.
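The Count mapping above can be sketched as a small function (a sketch only: the function and parameter names are my own, and I assume the upper group holds the first half of the targets, as in the eight-target example):

```python
def count_target(fingers, palm_in_upper_half, n_targets):
    """Map a Count pose to a 0-based target index.

    With five or fewer targets, the finger count selects directly.
    With more, targets are split into an upper and a lower group,
    and the palm position (nearest screen half) picks the active group."""
    if not 1 <= fingers <= 5:
        return None  # not a valid Count pose
    if n_targets <= 5:
        index = fingers - 1
    else:
        upper_size = (n_targets + 1) // 2  # upper group gets the first half
        offset = 0 if palm_in_upper_half else upper_size
        index = offset + fingers - 1
    return index if index < n_targets else None
```

In the study, this mapping would be combined with the 1000 ms hold: a selection is only committed once the same (pose, group) pair has been held for the full dwell time.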

Point gesture

With Point, users controlled a cursor which was mapped to their finger position relative to the device. We used the space beside the device to avoid occluding the screen while gesturing. Users made selections by dwelling the cursor over a target for 1000 ms.
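The dwell selection used with Point can be sketched as a timer that restarts whenever the cursor moves to a different target (a sketch under assumed names; only the 1000 ms dwell time comes from the study):

```python
DWELL_MS = 1000  # dwell time used in our studies

class DwellSelector:
    """Select a target by keeping the cursor over it for DWELL_MS.
    Moving to another target, or off all targets, restarts the timer."""

    def __init__(self, dwell_ms=DWELL_MS):
        self.dwell_ms = dwell_ms
        self.target = None      # target currently under the cursor
        self.entered_at = None  # when the cursor entered that target

    def update(self, target, now_ms):
        """Feed the target under the cursor each frame (None if no target).
        Returns the selected target once dwell completes, else None."""
        if target != self.target:
            self.target = target
            self.entered_at = now_ms if target is not None else None
            return None
        if target is not None and now_ms - self.entered_at >= self.dwell_ms:
            self.entered_at = now_ms  # restart so we don't re-select every frame
            return target
        return None
```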

A video demonstration of these gestures is available online.

Tactile Feedback

In our first study we looked at different ways of giving tactile feedback. We compared feedback directly from the device when held, ultrasound haptics (using an array of ultrasound transducers, below) and distal feedback from wearable accessories. We used two wearable tactile feedback prototypes: a “watch” and a “ring” (vibrotactile actuators affixed to a watch strap and an adjustable velcro ring). We found that all were effective for giving feedback, although participants had divided preferences.

ultrasound array

 

Some participants preferred feedback directly from the phone because it was familiar, although this is an unlikely case for above-device interaction: an advantage of this modality is that users do not need to lift the phone or reach out to touch it first. Some liked feedback from our ring prototype because it was close to the point of interaction (when using Point), while others preferred feedback from the watch (pictured below) because it was a more acceptable accessory than a vibrotactile ring. An advantage of ultrasound haptics is that users do not need to wear any accessories, which participants appreciated, although the feedback was less noticeable than vibrotactile feedback. This was partly because of the small ultrasound array we used (similar in size to a mobile phone) and partly due to the nature of ultrasound haptics.

Tactile Watch Prototype

In a second study we focused on feedback given at the wrist using our watch prototype. We wanted to see how tactile feedback affected interaction with our Point and Count gestures, so we compared three tactile feedback designs against visual feedback alone. Tactile feedback had no impact on performance (possibly because selection was too easy), but it had a significant positive effect on workload: workload (measured using NASA-TLX) was significantly lower when dynamic tactile feedback was given. Users also preferred receiving tactile feedback to receiving none.

A more detailed qualitative analysis and the results of both studies appear in our ICMI 2014 paper [2]. A position paper [3] from the CHI 2016 workshop on mid-air haptics and displays describes this work in the broader context of research towards more usable mid-air widgets.

Tactile Feedback Source Code

A Pure Data patch for generating our tactile feedback designs is available here.

References

[1] Towards Usable and Acceptable Above-Device Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Mobile HCI ’14 Posters, pp. 459-464. 2014.

PDF · DOI · Bibtex

@inproceedings{MobileHCI2014Poster,
  author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
  booktitle = {Mobile HCI '14 Posters},
  pages = {459--464},
  title = {Towards Usable and Acceptable Above-Device Interactions},
  pdf = {http://research.euanfreeman.co.uk/papers/MobileHCI_2014_Poster.pdf},
  doi = {10.1145/2628363.2634215},
  year = {2014},
}

[2] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, pp. 419-426. 2014.

PDF · DOI · Website · Video · Bibtex

@inproceedings{ICMI2014,
    author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
    booktitle = {Proceedings of the International Conference on Multimodal Interaction - ICMI '14},
    pages = {419--426},
    publisher = {ACM Press},
    title = {{Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions}},
    pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2014.pdf},
    doi = {10.1145/2663204.2663280},
    year = {2014},
    url = {http://euanfreeman.co.uk/projects/above-device-tactile-feedback/},
    video = {{https://www.youtube.com/watch?v=K1TdnNBUFoc}},
}

[3] Towards Mid-Air Haptic Widgets
E. Freeman, D. Vo, G. Wilson, G. Shakeri, and S. Brewster.
In CHI 2016 Workshop on Mid-Air Haptics and Displays: Systems for Un-instrumented Mid-Air Interactions. 2016.

PDF · Bibtex

@inproceedings{MidAirHapticsWorkshop,
    author = {Freeman, Euan and Vo, Dong-Bach and Wilson, Graham and Shakeri, Gozel and Brewster, Stephen},
    booktitle = {CHI 2016 Workshop on Mid-Air Haptics and Displays: Systems for Un-instrumented Mid-Air Interactions},
    title = {{Towards Mid-Air Haptic Widgets}},
    year = {2016},
    pdf = {http://research.euanfreeman.co.uk/papers/MidAirHaptics.pdf},
}