This week I’ve been in Bamberg, in Germany, presenting a poster and an interactive demo at Interact 2015. If you’ve stumbled across this website via my poster, or if you tried my demo at the conference, then it was nice meeting you and I hope you had some fun with it! If you’re looking for more information about the research then I’ve written a little about it here: http://euanfreeman.co.uk/interactive-light-feedback/
I want to make gesture interaction – interacting with computers through hand movements in mid-air – easier and more enjoyable to use. My PhD research focused on helping users address gesture systems, especially when those systems only have limited capabilities for providing feedback. I’ve written a lot about gestures and problems with gesture interaction, so this page brings that information together to give an overview of why gestures are difficult and how we might make them better.
Addressing Gesture Systems
When users want to interact with an in-air gesture system, they must first address it. This involves finding where to perform gestures, so that they can be sensed, and finding out how to direct input towards the system they want to interact with. During my PhD, I developed and evaluated interaction techniques for addressing in-air gesture systems. You can read more about this here. A related challenge is finding where to put your hands for mid-air interfaces, especially when mid-air haptics are used. I developed a system called HaptiGlow that helped users find a good hand position for mid-air input.
Above- and Around-Device Interaction
My research often looks at gestures in close proximity to small devices, either above (for example, gesturing over a phone on a table) or around (for example, gesturing behind a device you are holding with your other hand) those devices. I give an introduction to around-device interaction here, present some research and guidelines for above-device interaction with phones here, and discuss our work on above-device tactile feedback here. I also explain why we would want to use these types of gestures here.
Gestures Are Not “Natural”
In this post I outline three gesture interaction problems (the Midas Touch problem, the address problem and the sensing problem) and what implications these have for gesture interaction. In short, we should not think of gestures as being “natural” because there are many practical issues we must overcome to make them usable.
Novel Gesture Feedback
My PhD research looks at how we can move feedback about gestures off the screen and into the space around devices instead. I’ve written about tactile feedback for gestures here. I’ve also written about interactive light feedback, a novel type of display, for gestures here.
Gestures In Multimodal Interaction
Here I talk about two papers from 2014 where gestures are considered as part of multimodal interactions. While this idea was notably demonstrated in the 1980s, it still hasn’t reached mainstream computing. Perhaps this is about to change with new technologies.
I’ve always seen gestures as an alternative interaction technique, available when others like speech or touch are unavailable or less convenient. For example, gestures could be used to browse recipes without touching your tablet and getting it messy, or could be used for short ‘micro-interactions’ where gestures from a distance are better than approaching and touching something.
Lately, two papers at UIST ’14 looked at using gestures alongside touch, rather than instead of it. I really like this idea and I’m going to give a short overview of those papers here. Combining hand gestures with other interaction techniques isn’t new, though; an early and notable example from 1980 was Put That There, where users interacted using voice and gesture together.
In Air+Touch, Chen and others looked at how fingers may move and gesture over touchscreens while also providing touch input. They grouped interactions into three types: gestures happening before touch, gestures happening between touches and gestures happening after touch. They also identified various finger movements which can be used over touchscreens but which are distinct from incidental movements. These include circular paths, sharp turns and jumps into a higher-than-normal space over the screen. In Air+Touch, users gestured and touched with one finger. This lets users provide more expressive input than touch alone.
In contrast to this single-handed input (one hand, rather than one input modality), Song and others looked at bimanual input. They focused on gestures in the wider space around the device, using the non-touching hand for gestures. As users interacted with mobile devices using touch with one hand, the other hand could gesture nearby to access other functionality. For example, they describe how users may browse maps with touch, while using gestures to zoom in or out of the map.
While these papers take different approaches to combining touch and gesture, they share some similarities. Touch can be used to help segment input: rather than detecting gestures at all times, interfaces can just look for gestures which occur around touch events, so touch is implicitly used as a clutch mechanism. Clutching helps avoid accidental input and saves power, as gesture sensing doesn’t need to happen all the time.
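To make the clutching idea concrete, here is a minimal sketch of touch-gated gesture detection (Python, with hypothetical handler names and an assumed time window, not the implementation from either paper): in-air samples are only considered for gesture recognition shortly after a touch event.

import time

CLUTCH_WINDOW_S = 0.75  # assumed: how long after a touch we keep listening for gestures

class ClutchedGestureRecogniser:
    def __init__(self):
        self.last_touch_time = None

    def on_touch_event(self):
        # Called by the touchscreen handler; this "engages the clutch".
        self.last_touch_time = time.monotonic()

    def on_hand_sample(self, hand_position):
        # Called for every frame from the in-air tracker.
        if self.last_touch_time is None:
            return None  # no touch yet, so ignore in-air movement
        if time.monotonic() - self.last_touch_time > CLUTCH_WINDOW_S:
            return None  # outside the window around touch: treat movement as incidental
        return self.recognise(hand_position)

    def recognise(self, hand_position):
        # Placeholder for an actual gesture classifier.
        return None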
Both also demonstrate using gestures for easier context switching and secondary tasks. Users may gesture with their other hand to switch map mode while browsing, or may lift their finger between swipes to change map mode. Gestures are mostly used for discrete secondary input rather than as continuous primary input, although that is certainly possible. There are similarities between these concepts and AD-Binning from Hasan and others, where around-device gestures were used for accessing content while users interacted with that content using touch with the other hand.
At the Mobile HCI poster session I had some fantastic discussions with some great people. There’s been a lot of around-device interaction research presented at the conference this week and a lot of people who I spoke to when presenting my poster asked: why would I want to do this?
That’s a very important question and the reason it gets asked can maybe give some insight into when around-device gestures may and may not be useful. A lot of people said that if they were already holding their phone, they would just use the touchscreen to provide input. Others said they would raise the device to their mouth for speech input or would even use the device itself for performing a gesture (e.g. shaking it).
In our poster and its accompanying paper, we focused on above-device gestures. We focused on a particular area of the around-device space – directly over the device – as we think this is where users are most likely to benefit from using gestures. People typically keep their phones on flat surfaces – Pohl et al. found this in their around-device device paper [link], Wiese et al. [link] found that in their CHI ’13 study, and Dey et al. [link] found that three years ago. As such, gestures are very likely to be used over a phone.
So, why would we want to gesture over our phones? My favourite example, and one which really seems to resonate with people, is using gestures to read recipes while cooking in the kitchen. Wet and messy hands, the risks of food contamination, the need for multitasking – these are all inherent parts of preparing food which can motivate using gestures to interact with mobile devices. Gestures would let me move through recipes on my phone while cooking, without having to first wash my hands. Gestures would let me answer calls while I multitask in the kitchen, without having to stop what I’m doing. Gestures would let me dismiss interruptions while I wash the dishes afterwards, without having to dry my hands.
This is just one scenario where we envisage above-device gestures being useful. Gestures are attractive for a variety of reasons in this context: touch input is inconvenient (I need to wash my hands first); touch input requires more engagement (I need to stop what I’m doing to focus); and touch input is unavailable (I need to dry my hands). I think the answer to why we would want to use these gestures is that they let us interact when other input is inconvenient. Our phones are nearby on surfaces so let’s interact with them while they’re there.
In summary, our work focuses on gestures above the device as this is where we see them being most commonly used. There are many reasons people would want to use around-device gestures but we think the most compelling ones motivate using above-device gestures.
One of my favourite talks from the third day of Mobile HCI ’14 was Ahlstrom et al.’s paper on the social acceptability of around-device gestures [link]. In short: they asked users if they were comfortable doing around-device gestures. I think this is a timely topic because we’re now seeing around-device interfaces added to commercial smartphones. Samsung’s Galaxy S4 had hover gestures over the display and Google’s Project Tango added depth sensors to the smartphone form factor. Now that we’ve established ways of detecting around-device gestures, I feel it’s time to look at what around-device gestures should be and whether users are willing to use them.
In Ahlstrom’s paper, which was presented excellently by Pourang Irani, the authors ran three studies looking at different aspects of the social acceptability of around-device gestures. They looked mainly at aspects of gesture mechanics: gesture size, gesture duration, position relative to the device, and distance from the device. When asking users if they were comfortable doing gestures, they found that users were most happy to gesture near the device (biased towards the side of their dominant hand) and found shorter interactions more acceptable.
They also looked at how spectators perceived these gestures, by opportunistically asking onlookers what they thought of someone who was using gestures nearby. What surprised me was that spectators found around-device gestures more acceptable in a wider variety of social situations than the users from the first studies. Does seeing other people perform gestures make those types of gesture input seem more acceptable?
Tonight I presented my poster [paper link] on our design studies for above-device gestures. There were some similarities between our work and Ahlstrom’s; purely by coincidence, we both asked users if they were comfortable and willing to use certain gestures. However, we focused on what the gestures were, whereas they focused on other aspects of gesturing (e.g. gesture duration).
In our poster and paper we present design recommendations for creating around-device interactions which users think are more usable and more acceptable. I think the next big step for around-device research is looking at how to map potential gestures to actions and identifying ways of making around-device input better. My PhD research is focusing on the output side of things, looking at how we can design feedback to help users as they gesture using the space near devices. If you saw my poster tonight or had a chat with me, there’s more about the research in our poster here; tonight was fun so thanks for stopping by!
Today was the first day of the papers program at Mobile HCI ’14 and amongst the great talks was one I particularly liked on the idea of “around-device devices” by Pohl et al. [link]. I’ve written before about around-device interaction, above-device interaction, and how the space around mobile devices can be used for gesturing. What’s novel about around-device devices, however, is that interaction in the around-device space is no longer limited to free-hand gestures relative to the device. Instead, nearby objects can become potential inputs in the user interface. One of the motivations for using nearby objects for interaction is that mobile devices are very commonly kept on surfaces – tables, desks, kitchen worktops – which are also used for storing objects. In this post I call these ordinary surfaces, to distinguish this idea from interactive surfaces.
The example Henning Pohl gives in the paper title is “my coffee mug is a volume dial”. I think this example captures the idea of around-device devices well: mugs, being cylindrical objects, afford certain interactions – in this case, being turned. There’s implicit physical feedback from interacting with a tangible object, which could make interaction easier. Also, using nearby objects provides many of the benefits which around-device gestures give: a larger interaction space, unoccluded content on the device screen, potential for more expressive input, etc.
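As a toy illustration of the mug-as-dial mapping (assuming some tracker already reports the mug’s rotation angle – the paper’s actual sensing is not reproduced here), turning the mug by some angle could be mapped to a volume change like this:

DEGREES_PER_VOLUME_STEP = 15.0  # assumed mapping: one volume unit per 15 degrees of rotation

def update_volume(previous_angle, current_angle, volume):
    """Map a change in mug rotation (degrees) to a change in volume (0-100)."""
    delta = current_angle - previous_angle
    # Wrap differences into [-180, 180) so crossing the 0/360 boundary behaves sensibly.
    delta = (delta + 180.0) % 360.0 - 180.0
    volume += delta / DEGREES_PER_VOLUME_STEP
    return max(0.0, min(100.0, volume))

# Example: turning the mug by 30 degrees raises the volume by two units.
print(update_volume(previous_angle=10.0, current_angle=40.0, volume=50.0))  # 52.0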
Another interesting paper from today was about Toffee, by Xiao et al. [link]. Sticking with the around-device interaction theme, they looked at whether it would be possible to use piezo actuators to localise taps and knocks on surrounding table surfaces. Like around-device devices, this was another way of making use of nearby ordinary surfaces for input. They found that taps could be localised most reliably when given using more solid objects, like touch styluses or knuckles. Softer points, like fingertips, were more difficult to localise. Toffee would be ideal for radial input around devices, due to the characteristics of the tap localisation approach.
I like both of these papers because they push the around-device interaction space a little beyond mid-air free-hand gestures, in both cases using ordinary surfaces as part of the interaction. I know this has been done before with interfaces like SideSight and Qian Qin’s Dynamic Ambient Lighting for Mobile Devices, but I think it’s important that others are exploring this space further.
My PhD research looks at improving gesture interaction with small devices, like mobile phones, using multimodal feedback. One of the first things I looked at in my PhD was tactile feedback for above-device interfaces. Above-device interaction is gesture interaction over a device; for example, users can gesture at a phone on a table in front of them to dismiss unwanted interruptions or could gesture over a tablet on the kitchen counter to navigate a recipe. I look at above-device gesture interaction in more detail in my Mobile HCI ’14 poster paper [1], which gives a quick overview of some prior work on above-device interaction.
In two studies, described in my ICMI ’14 paper [2], we looked at how above-device interfaces could give tactile feedback. Giving tactile feedback during gestures is a challenge because users don’t touch the device they are gesturing at; tactile feedback would go unnoticed unless users were holding the device while they gestured. We looked at ultrasound haptics and distal tactile feedback from wearables. In our studies, users interacted with a mobile phone interface (pictured above) which used a Leap Motion to track two selection gestures.
Gestures
Our studies looked at two selection gestures: Count (above) and Point (below). These gestures were from our user-designed gesture study [1]. With Count, users select from numbered targets by extending the appropriate number of fingers. When there are more than five targets, we partition them into groups. Users can select from a group by moving their hand. In the image above, the palm position is closest to the bottom half of the screen so we activate the lower group of targets. If users moved their hands towards the upper half of the screen, we would activate the upper group of four targets. Users had to hold a Count gesture for 1000 ms to make a selection.
With Point, users controlled a cursor which was mapped to their finger position relative to the device. We used the space beside the device to avoid occluding the screen while gesturing. Users made selections by dwelling the cursor over a target for 1000 ms.
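The selection logic behind these gestures is simple to sketch. The following is a rough Python approximation of Count with grouped targets and the 1000 ms hold (hypothetical input values standing in for the Leap Motion tracking data; not the actual study software):

import time

DWELL_S = 1.0  # selections required a 1000 ms hold

class CountSelector:
    def __init__(self, num_targets):
        self.num_targets = num_targets
        self.candidate = None
        self.candidate_since = None

    def update(self, finger_count, palm_y_normalised):
        """finger_count: extended fingers; palm_y_normalised: 0.0 = bottom of screen, 1.0 = top."""
        if not 1 <= finger_count <= 5:
            self.candidate, self.candidate_since = None, None
            return None
        # With more than five targets, the palm position activates a group of
        # targets and the finger count selects within that group (assumed groups of five).
        group = 1 if (self.num_targets > 5 and palm_y_normalised > 0.5) else 0
        target = group * 5 + finger_count
        if target > self.num_targets:
            self.candidate, self.candidate_since = None, None
            return None
        if target != self.candidate:
            self.candidate, self.candidate_since = target, time.monotonic()
            return None
        if time.monotonic() - self.candidate_since >= DWELL_S:
            self.candidate, self.candidate_since = None, None
            return target  # selection confirmed after the hold
        return None

Point’s dwell selection works the same way, except the candidate target comes from the cursor position rather than the finger count.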
For a video demo of these gestures, see:
Tactile Feedback
In our first study we looked at different ways of giving tactile feedback. We compared feedback directly from the device when held, ultrasound haptics (using an array of ultrasound transducers, below) and distal feedback from wearable accessories. We used two wearable tactile feedback prototypes: a “watch” and a “ring” (vibrotactile actuators affixed to a watch strap and an adjustable velcro ring). We found that all were effective for giving feedback, although participants had divided preferences.
Some preferred feedback directly from the phone because it was familiar, although this is an unlikely case in above-device interaction because an advantage of this interaction modality is that users don’t need to first lift the phone or reach out to touch it. Some participants liked feedback from our ring prototype because it was close to the point of interaction (when using Point) and others preferred feedback from the watch (pictured below) because it was a more acceptable accessory than a vibrotactile ring. An advantage of ultrasound haptics is that users do not need to wear any accessories and participants appreciated this, although the feedback was less noticeable than vibrotactile feedback. This was partly because of the small ultrasound array used (similar size to a mobile phone) and partly because of the nature of ultrasound haptics.
In a second study we focused on feedback given on the wrist using our watch prototype. We were interested to see how tactile feedback affected interaction using our Point and Count gestures. We looked at three tactile feedback designs in addition to just visual feedback. Tactile feedback had no impact on performance (possibly because selection was too easy) although it had a significant positive effect on workload. Workload (measured using NASA-TLX) was significantly lower when dynamic tactile feedback was given. Users also preferred tactile feedback to no tactile feedback.
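To give a feel for what “dynamic” feedback means here, the following is a hypothetical example (not one of the actual designs from the paper, which were authored as a Pure Data patch): vibration amplitude ramps up with dwell progress, so users can feel a selection approaching before it is confirmed.

def dwell_feedback_amplitude(dwell_elapsed_s, dwell_required_s=1.0):
    """Return a vibration amplitude in [0, 1] for the current dwell progress."""
    progress = min(max(dwell_elapsed_s / dwell_required_s, 0.0), 1.0)
    return 0.25 + 0.75 * progress  # assumed: a gentle baseline that ramps to full strength

# Example: halfway through a 1000 ms dwell.
print(dwell_feedback_amplitude(0.5))  # 0.625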
A more detailed qualitative analysis and the results of both studies appear in our ICMI 2014 paper [2]. A position paper [3] from the CHI 2016 workshop on mid-air haptics and displays describes this work in the broader context of research towards more usable mid-air widgets.
Tactile Feedback Source Code
A Pure Data patch for generating our tactile feedback designs is available here.
References
[1]Towards Usable and Acceptable Above-Device Interactions E. Freeman, S. Brewster, and V. Lantz. In Mobile HCI ’14 Posters, 459-464. 2014.
@inproceedings{MobileHCI2014Poster,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {Mobile HCI '14 Posters},
pages = {459--464},
publisher = {ACM},
title = {Towards Usable and Acceptable Above-Device Interactions},
pdf = {http://research.euanfreeman.co.uk/papers/MobileHCI_2014_Poster.pdf},
doi = {10.1145/2628363.2634215},
year = {2014},
}
[2]Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions E. Freeman, S. Brewster, and V. Lantz. In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.
@inproceedings{ICMI2014,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {Proceedings of the International Conference on Multimodal Interaction - ICMI '14},
pages = {419--426},
publisher = {ACM},
title = {{Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions}},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2014.pdf},
doi = {10.1145/2663204.2663280},
year = {2014},
url = {http://euanfreeman.co.uk/projects/above-device-tactile-feedback/},
video = {{https://www.youtube.com/watch?v=K1TdnNBUFoc}},
}
[3]Towards Mid-Air Haptic Widgets E. Freeman, D. Vo, G. Wilson, G. Shakeri, and S. Brewster. In CHI 2016 Workshop on Mid-Air Haptics and Displays: Systems for Un-instrumented Mid-Air Interactions. 2016.
@inproceedings{MidAirHapticsWorkshop,
author = {Freeman, Euan and Vo, Dong-Bach and Wilson, Graham and Shakeri, Gozel and Brewster, Stephen},
booktitle = {CHI 2016 Workshop on Mid-Air Haptics and Displays: Systems for Un-instrumented Mid-Air Interactions},
title = {{Towards Mid-Air Haptic Widgets}},
year = {2016},
pdf = {http://research.euanfreeman.co.uk/papers/MidAirHaptics.pdf},
}
My full paper, “Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions”, was accepted to ICMI 2014. It was also accepted for oral presentation rather than poster presentation, so I’m looking forward to that!
In this paper we looked at tactile feedback for above-device interaction with a mobile phone. We compared direct tactile feedback to distal tactile feedback from wearables (rings, smart-watches) and ultrasound haptic feedback. We also looked at different feedback designs and investigated the impact of tactile feedback on performance, workload and preference.
We found that tactile feedback had no impact on input performance but did significantly reduce workload (making interaction feel easier). Users also significantly preferred tactile feedback to no tactile feedback. More details are in the paper [1], along with design recommendations for above- and around-device interface designers. I’ve written a bit more about this project here.
Video
The following video (including an awful typo in the last scene!) shows the two gestures we used in these studies.
References
[1]Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions E. Freeman, S. Brewster, and V. Lantz. In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.
@inproceedings{ICMI2014,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {Proceedings of the International Conference on Multimodal Interaction - ICMI '14},
pages = {419--426},
publisher = {ACM},
title = {{Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions}},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2014.pdf},
doi = {10.1145/2663204.2663280},
year = {2014},
url = {http://euanfreeman.co.uk/projects/above-device-tactile-feedback/},
video = {{https://www.youtube.com/watch?v=K1TdnNBUFoc}},
}
I have worked in many areas of Human-Computer Interaction, including: accessible technology, wearable devices, gesture interaction, ultrasound haptics, and novel displays. This page gives a broad overview of my research.
I worked on the Levitate project, a four-year EU FET-Open project which investigated novel interfaces composed of objects levitating in mid-air. My research on this project was mostly focused on developing new interaction techniques for levitating object displays [1]; for example, Point-and-Shake, a selection and feedback technique presented at CHI 2018 [2].
Ultrasound Haptic Feedback
I have been working with ultrasound haptics since 2012, when I first experienced the technology at a workshop hosted by Tom Carter and Sriram Subramanian. I contributed to a review of ultrasound haptics in 2020 [3] which I recommend for an overview of this awesome technology.
I chaired sessions at the CHI 2016 and CHI 2018 workshops on mid-air haptics and displays, and have chaired paper sessions at CHI 2018 and CHI 2019 about touch and haptic interfaces. I’m also a co-editor of an upcoming book on ultrasound haptics, due to be published in 2022 – watch this space!
Gesture Interaction
My PhD research focused on gesture interaction and around-device interaction. Some of my early PhD work looked at above-device interaction with mobile phones [4, 9], which I discuss more here. Towards the end of my PhD I also studied gesture interaction with simple household devices, like thermostats and lights, which present interesting design problems due to their lack of screens or output capabilities [10].
I am particularly interested in how gesture interaction techniques can be improved with better feedback design. Whereas most gesture interfaces rely on visual feedback, I am more interested in non-visual modalities and how these can be used to help users interact more easily and effectively. I have looked at tactile feedback for gesture interfaces [4, 5]; this is a promising modality but requires novel hardware solutions to overcome the challenges of giving tactile feedback without physical contact with a device. I have also looked at other types of output, including sound and interactive light, for giving feedback during gesture interaction. My PhD research in this area was funded by a studentship from Nokia Research in Finland.
Despite significant advances in gesture-sensing technology, there are some fundamental usability problems which we still need good solutions for. My PhD thesis focused on one of these problems in particular: the problem of addressing gesture systems. My CHI 2016 paper [10] describes interaction techniques for addressing gesture systems. I’ve also looked at clutching interaction techniques for touchless gesture systems; this work is summarised in our CHI 2022 paper on touchless gestures for medical imaging systems [11].
Above-Device Gestures
Early in my PhD I looked at above-device gesture design. We asked users to create above-device gestures for some common mobile phone tasks. From the many gesture designs gathered in that study, we then created and evaluated two sets of gestures. We created design recommendations for good above-device interfaces based on the outcomes of these studies [9].
Tactile Feedback for Gestures
Small devices, like mobile phones and wearables, have limited display capabilities. Gesture interaction, being very uncertain for users, requires feedback to help users gesture effectively, but giving feedback visually on small devices takes screen space away from other content. Instead, other modalities – like sound and touch – could be used to give feedback. However, an obvious limitation of touch feedback is that users don’t always touch the devices they gesture towards. We looked at how we could give tactile feedback during gesture interaction, using ultrasound haptics and distal feedback from wearables [4].
Interactive Light Feedback for Gestures
Another way of giving visual feedback on small devices without taking away limited screen space is to give visual cues in the space surrounding the device instead. We embedded LEDs in the edge of some devices so that they could illuminate surrounding table or wall surfaces, giving low-fidelity – but effective – visual feedback about gestures. We call this interactive light feedback [12]. As well as keeping the screen free for interactive content, these interactive light cues were also noticeable from a short distance away. For more on this, see Interactive Light Feedback.
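As a rough illustration of the idea (hypothetical hardware layout, not the actual prototype), imagine an LED strip running around the device edge: we light the LEDs nearest the user’s hand and use brightness to show progress towards a selection.

NUM_LEDS = 24  # assumed number of LEDs around the device edge

def light_pattern(hand_angle_deg, progress, spread=3):
    """Return per-LED brightness values in [0, 1].

    hand_angle_deg: direction of the hand around the device (0-360).
    progress: e.g. dwell progress towards a selection (0-1).
    """
    centre = int(round(hand_angle_deg / 360.0 * NUM_LEDS)) % NUM_LEDS
    brightness = [0.0] * NUM_LEDS
    for offset in range(-spread, spread + 1):
        falloff = 1.0 - abs(offset) / (spread + 1)
        brightness[(centre + offset) % NUM_LEDS] = progress * falloff
    return brightness

print(light_pattern(hand_angle_deg=90.0, progress=0.5))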
Wearables for Visually Impaired Children
I worked on the ABBI (Audio Bracelet for Blind Interaction) project for a year. The ABBI project developed a bracelet for young visually impaired children; when the bracelet moved, it synthesised sound in response to that movement. The primary purpose of the bracelet was for sensory rehabilitation activities to improve spatial cognition; by hearing how they and other people moved, the children could improve their understanding of movement and their spatial awareness.
My research looked at how the capabilities of the ABBI bracelet could be used for other things. The bracelet had motion sensors, Bluetooth communication, on-board audio synthesis and limited processing power, so my research investigated how these might facilitate other interactions. Some of my work looked at how Bluetooth beacons could be used with a wearable device to present relevant audio cues about surroundings, to help visually impaired children understand what is happening nearby [13]. I also considered how the bracelet might be used to detect location and activity within the home, so that the lighting could be adapted to make it easier to see, or to draw attention to specific areas of the home [14].
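A much-simplified sketch of the beacon idea (hypothetical beacon identifiers, cues and threshold – the actual system is described in the paper [13]): given the beacons currently in range and their signal strengths, play the audio cue for the nearest known landmark.

AUDIO_CUES = {  # assumed beacon-to-cue mapping, for illustration only
    "beacon-door": "door.wav",
    "beacon-stairs": "stairs.wav",
    "beacon-play-area": "play_area.wav",
}
RSSI_THRESHOLD_DBM = -70  # assumed: ignore beacons that are too far away

def choose_audio_cue(scan_results):
    """scan_results: dict of beacon id -> RSSI in dBm (stronger = nearer)."""
    nearby = {bid: rssi for bid, rssi in scan_results.items()
              if bid in AUDIO_CUES and rssi >= RSSI_THRESHOLD_DBM}
    if not nearby:
        return None
    nearest = max(nearby, key=nearby.get)
    return AUDIO_CUES[nearest]

print(choose_audio_cue({"beacon-door": -62, "beacon-stairs": -80}))  # door.wav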
Reminders: Tabletops and Digital Pens
Before starting my PhD I worked on two projects looking at home-care reminder systems for elderly people. Reminders can help people live independently by prompting them to do things, such as taking medication or making sure the heating is on, and helping them manage their lives, for example reminding them of upcoming appointments or tasks such as shopping.
Tabletops in the Home
My final undergraduate project looked at how interactive tabletops could be used to deliver reminders. People often have coffee tables in a prominent location within the living room, making the tabletop an ideal display for ambient information and reminders. We wanted to see what challenges had to be overcome in order for tabletops to be an effective reminder display. One of the interesting challenges this project addressed was how to use the tabletop as a display and as a normal table. Clutter meant large parts of the display were often occluded so a solution was needed to allow reminders to be placed in a noticeable location. Part of this project was presented as an extended abstract at CHI 2013 [15].
Digital Pen and Paper Reminders
After graduating with my undergraduate degree, I worked on the MultiMemoHome project as a research assistant. My role in the project was to design and develop a paper-based diary system for digital pens which let users schedule reminders using pen and paper. Reminders were then delivered using a tablet placed in the living room. We were interested in using a paper-based approach because this was an approach already favoured by elderly people. We used a co-design approach to create a reminder system, Rememo, which we then deployed in people’s homes for two weeks at a time. This project was presented as an extended abstract at CHI 2013 [16] and as a workshop paper at Mobile HCI 2014 [17].
Predicting Visual Complexity
As an undergraduate I received two scholarships to fund research over my summer holidays. One of these scholarships funded research with Helen Purchase into visual complexity. We wanted to find out if we could predict how complex visual content was using image processing techniques to examine images. We gathered both rankings and ratings of visual complexity using an online survey and used this information to construct a model using linear regression with a collection of image metrics as predictors. This project was presented at Diagrammatic Representation and Inference 2012 [18] and Predicting Perceptions 2012 [19].
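The modelling step is straightforward to illustrate (placeholder metric names and values, not the actual predictors or ratings from the study): fit a linear regression from per-image metrics to mean complexity ratings.

import numpy as np
from sklearn.linear_model import LinearRegression

# Rows are images; columns are hypothetical metrics such as edge density,
# number of distinct colours and compression ratio.
metrics = np.array([
    [0.12, 8, 0.45],
    [0.30, 21, 0.62],
    [0.05, 3, 0.30],
    [0.41, 34, 0.71],
])
mean_ratings = np.array([2.1, 3.8, 1.5, 4.6])  # e.g. mean 1-5 complexity ratings

model = LinearRegression().fit(metrics, mean_ratings)
print(model.predict(np.array([[0.25, 15, 0.55]])))  # predicted complexity for a new image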
Aesthetic Properties of Graphs
An earlier research scholarship also funded research with Helen Purchase, this time looking at aesthetic properties of hand-drawn graphs using SketchNode, a tool which lets users draw graphs using a stylus. We devised a series of aesthetic properties describing graph appearance and created algorithms to measure these properties. Aesthetic properties included features such as node orthogonality (were nodes placed in a grid-like manner?), edge length consistency (were edges of similar length?) and edge orthogonality (were edges largely perpendicular and arranged in a grid-like manner?). I produced a tool to analyse a large corpus of user-drawn graphs from earlier research studies.
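As an example of how such a property can be computed (my own simple formulation for illustration, not necessarily the exact measure used in the project), edge length consistency can be expressed as one minus the coefficient of variation of edge lengths, so that graphs with uniform edge lengths score close to 1:

import math

def edge_length_consistency(positions, edges):
    """positions: node id -> (x, y); edges: list of (node_a, node_b) pairs."""
    lengths = [math.dist(positions[a], positions[b]) for a, b in edges]
    mean = sum(lengths) / len(lengths)
    variance = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return max(0.0, 1.0 - math.sqrt(variance) / mean)

positions = {"a": (0, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1)}
print(edge_length_consistency(positions, [("a", "b"), ("b", "c"), ("c", "d")]))  # 1.0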
References
[1]Levitating Particle Displays with Interactive Voxels E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster. In Proceedings of the 7th ACM International Symposium on Pervasive Displays – PerDis ’18, Article 15. 2018.
@inproceedings{PerDis2018,
author = {Freeman, Euan and Williamson, Julie and Kourtelos, Praxitelis and Brewster, Stephen},
booktitle = {{Proceedings of the 7th ACM International Symposium on Pervasive Displays - PerDis '18}},
title = {{Levitating Particle Displays with Interactive Voxels}},
year = {2018},
publisher = {ACM},
pages = {Article 15},
doi = {10.1145/3205873.3205878},
url = {http://euanfreeman.co.uk/levitate/levitating-particle-displays/},
pdf = {http://research.euanfreeman.co.uk/papers/PerDis_2018.pdf},
}
[2]Point-and-Shake: Selecting from Levitating Object Displays E. Freeman, J. Williamson, S. Subramanian, and S. Brewster. In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.
@inproceedings{CHI2018,
author = {Freeman, Euan and Williamson, Julie and Subramanian, Sriram and Brewster, Stephen},
booktitle = {{Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems - CHI '18}},
title = {{Point-and-Shake: Selecting from Levitating Object Displays}},
year = {2018},
publisher = {ACM},
pages = {Paper 18},
doi = {10.1145/3173574.3173592},
url = {http://euanfreeman.co.uk/levitate/},
video = {{https://www.youtube.com/watch?v=j8foZ5gahvQ}},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2018.pdf},
data = {https://zenodo.org/record/2541555},
}
[3]A Survey of Mid-Air Ultrasound Haptics and Its Applications I. Rakkolainen, E. Freeman, A. Sand, R. Raisamo, and S. Brewster. IEEE Transactions on Haptics, vol. 14, pp. 2-19, 2020.
@article{ToHSurvey,
author = {Rakkolainen, Ismo and Freeman, Euan and Sand, Antti and Raisamo, Roope and Brewster, Stephen},
title = {{A Survey of Mid-Air Ultrasound Haptics and Its Applications}},
year = {2020},
publisher = {IEEE},
journal = {IEEE Transactions on Haptics},
volume = {14},
issue = {1},
pages = {2--19},
pdf = {http://research.euanfreeman.co.uk/papers/IEEE_ToH_2020.pdf},
doi = {10.1109/TOH.2020.3018754},
url = {https://ieeexplore.ieee.org/document/9174896},
issn = {2329-4051},
}
[4]Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions E. Freeman, S. Brewster, and V. Lantz. In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.
@inproceedings{ICMI2014,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {Proceedings of the International Conference on Multimodal Interaction - ICMI '14},
pages = {419--426},
publisher = {ACM},
title = {{Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions}},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2014.pdf},
doi = {10.1145/2663204.2663280},
year = {2014},
url = {http://euanfreeman.co.uk/projects/above-device-tactile-feedback/},
video = {{https://www.youtube.com/watch?v=K1TdnNBUFoc}},
}
[5]HaptiGlow: Helping Users Position their Hands for Better Mid-Air Gestures and Ultrasound Haptic Feedback E. Freeman, D. Vo, and S. Brewster. In Proceedings of IEEE World Haptics Conference 2019, the 8th Joint Eurohaptics Conference and the IEEE Haptics Symposium, TP2A.09. 2019.
@inproceedings{WHC2019,
author = {Freeman, Euan and Vo, Dong-Bach and Brewster, Stephen},
booktitle = {{Proceedings of IEEE World Haptics Conference 2019, the 8th Joint Eurohaptics Conference and the IEEE Haptics Symposium}},
title = {{HaptiGlow: Helping Users Position their Hands for Better Mid-Air Gestures and Ultrasound Haptic Feedback}},
year = {2019},
publisher = {IEEE},
pages = {TP2A.09},
doi = {10.1109/WHC.2019.8816092},
url = {http://euanfreeman.co.uk/haptiglow/},
pdf = {http://research.euanfreeman.co.uk/papers/WHC_2019.pdf},
data = {https://zenodo.org/record/2631398},
video = {https://www.youtube.com/watch?v=hXCH-xBgnig},
}
[6]Perception of Ultrasound Haptic Focal Point Motion E. Freeman and G. Wilson. In Proceedings of 23rd ACM International Conference on Multimodal Interaction – ICMI ’21, 697-701. 2021.
@inproceedings{ICMI2021Motion,
author = {Freeman, Euan and Wilson, Graham},
booktitle = {{Proceedings of 23rd ACM International Conference on Multimodal Interaction - ICMI '21}},
title = {{Perception of Ultrasound Haptic Focal Point Motion}},
year = {2021},
publisher = {ACM},
pages = {697--701},
doi = {10.1145/3462244.3479950},
url = {http://euanfreeman.co.uk/perception-of-ultrasound-haptic-focal-point-motion/},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2021_Motion.pdf},
data = {https://zenodo.org/record/5142587},
}
[7]Enhancing Ultrasound Haptics with Parametric Audio Effects E. Freeman. In Proceedings of 23rd ACM International Conference on Multimodal Interaction – ICMI ’21, 692-696. 2021.
@inproceedings{ICMI2021AudioHaptic,
author = {Freeman, Euan},
booktitle = {{Proceedings of 23rd ACM International Conference on Multimodal Interaction - ICMI '21}},
title = {{Enhancing Ultrasound Haptics with Parametric Audio Effects}},
year = {2021},
publisher = {ACM},
pages = {692--696},
doi = {10.1145/3462244.3479951},
url = {http://euanfreeman.co.uk/enhancing-ultrasound-haptics-with-parametric-audio-effects/},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2021_AudioHaptic.pdf},
data = {https://zenodo.org/record/5144878},
}
[8]UltraPower: Powering Tangible & Wearable Devices with Focused Ultrasound R. Morales Gonzalez, A. Marzo, E. Freeman, W. Frier, and O. Georgiou. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction – TEI ’21, Article 1. 2021.
@inproceedings{TEI2021,
author = {Morales Gonzalez, Rafael and Marzo, Asier and Freeman, Euan and Frier, William and Georgiou, Orestis},
booktitle = {{Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction - TEI '21}},
title = {{UltraPower: Powering Tangible & Wearable Devices with Focused Ultrasound}},
year = {2021},
publisher = {ACM},
pages = {Article 1},
doi = {10.1145/3430524.3440620},
pdf = {http://research.euanfreeman.co.uk/papers/TEI_2021.pdf},
url = {http://euanfreeman.co.uk/ultrapower-powering-tangible-wearable-devices-with-focused-ultrasound/},
}
[9]Towards Usable and Acceptable Above-Device Interactions E. Freeman, S. Brewster, and V. Lantz. In Mobile HCI ’14 Posters, 459-464. 2014.
@inproceedings{MobileHCI2014Poster,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {Mobile HCI '14 Posters},
pages = {459--464},
publisher = {ACM},
title = {Towards Usable and Acceptable Above-Device Interactions},
pdf = {http://research.euanfreeman.co.uk/papers/MobileHCI_2014_Poster.pdf},
doi = {10.1145/2628363.2634215},
year = {2014},
}
[10]Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems E. Freeman, S. Brewster, and V. Lantz. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems – CHI ’16, 2319-2331. 2016.
@inproceedings{CHI2016,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {{Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems - CHI '16}},
title = {{Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems}},
year = {2016},
publisher = {ACM},
pages = {2319--2331},
doi = {10.1145/2858036.2858308},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2016.pdf},
url = {http://euanfreeman.co.uk/gestures/},
video = {{https://www.youtube.com/watch?v=6_hGbI_SdQ4}},
}
[11]Investigating Clutching Interactions for Touchless Medical Imaging Systems S. Cronin, E. Freeman, and G. Doherty. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 2022.
@inproceedings{CHI2022Clutching,
title = {Investigating Clutching Interactions for Touchless Medical Imaging Systems},
author = {Cronin, Sean and Freeman, Euan and Doherty, Gavin},
doi = {10.1145/3491102.3517512},
publisher = {Association for Computing Machinery},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems},
numpages = {14},
series = {CHI '22},
year = {2022},
video = {https://www.youtube.com/watch?v=MFznaMUG_DU},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2022_Clutching.pdf},
}
[12]Illuminating Gesture Interfaces with Interactive Light Feedback E. Freeman, S. Brewster, and V. Lantz. In Proceedings of NordiCHI ’14 Beyond the Switch Workshop. 2014.
@inproceedings{NordiCHI2014Workshop,
author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
booktitle = {Proceedings of NordiCHI '14 Beyond the Switch Workshop},
title = {{Illuminating Gesture Interfaces with Interactive Light Feedback}},
year = {2014},
pdf = {http://lightingworkshop.files.wordpress.com/2014/09/3-illuminating-gesture-interfaces-with-interactive-light-feedback.pdf},
url = {http://euanfreeman.co.uk/interactive-light-feedback/},
}
[13]Audible Beacons and Wearables in Schools: Helping Young Visually Impaired Children Play and Move Independently E. Freeman, G. Wilson, S. Brewster, G. Baud-Bovy, C. Magnusson, and H. Caltenco. In Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems – CHI ’17, 4146-4157. 2017.
@inproceedings{CHI2017,
author = {Freeman, Euan and Wilson, Graham and Brewster, Stephen and Baud-Bovy, Gabriel and Magnusson, Charlotte and Caltenco, Hector},
booktitle = {{Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems - CHI '17}},
title = {{Audible Beacons and Wearables in Schools: Helping Young Visually Impaired Children Play and Move Independently}},
year = {2017},
publisher = {ACM},
pages = {4146--4157},
doi = {10.1145/3025453.3025518},
url = {http://euanfreeman.co.uk/research/#abbi},
video = {{https://www.youtube.com/watch?v=SGQmt1NeAGQ}},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2017.pdf},
}
[14]Towards a Multimodal Adaptive Lighting System for Visually Impaired Children E. Freeman, G. Wilson, and S. Brewster. In Proceedings of the 18th ACM International Conference on Multimodal Interaction – ICMI ’16, 398-399. 2016.
@inproceedings{ICMI2016Demo1,
author = {Freeman, Euan and Wilson, Graham and Brewster, Stephen},
booktitle = {{Proceedings of the 18th ACM International Conference on Multimodal Interaction - ICMI '16}},
title = {{Towards a Multimodal Adaptive Lighting System for Visually Impaired Children}},
year = {2016},
publisher = {ACM},
pages = {398--399},
doi = {10.1145/2993148.2998521},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2016.pdf},
}
[15]Messy Tabletops: Clearing Up the Occlusion Problem E. Freeman and S. Brewster. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems, 1515-1520. 2013.
@inproceedings{CHI2013LBW1,
author = {Freeman, Euan and Brewster, Stephen},
booktitle = {CHI '13 Extended Abstracts on Human Factors in Computing Systems},
pages = {1515--1520},
publisher = {ACM},
title = {Messy Tabletops: Clearing Up the Occlusion Problem},
pdf = {http://dl.acm.org/authorize?6811198},
doi = {10.1145/2468356.2468627},
year = {2013},
url = {http://euanfreeman.co.uk/projects/occlusion-management/},
video = {https://www.youtube.com/watch?v=V42GYxnHkEk},
}
[16]Designing a Smartpen Reminder System for Older Adults J. Williamson, M. McGee-Lennon, E. Freeman, and S. Brewster. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems, 73-78. 2013.
@inproceedings{CHI2013LBW2,
author = {Williamson, Julie and McGee-Lennon, Marilyn and Freeman, Euan and Brewster, Stephen},
booktitle = {CHI '13 Extended Abstracts on Human Factors in Computing Systems},
pages = {73--78},
publisher = {ACM},
title = {Designing a Smartpen Reminder System for Older Adults},
pdf = {http://dl.acm.org/authorize?6811896},
doi = {10.1145/2468356.2468371},
year = {2013},
}
[17]Rememo: Designing a Multimodal Mobile Reminder App with and for Older Adults M. Lennon, G. Hamilton, E. Freeman, and J. Williamson. In Mobile HCI ’14 Workshop on Re-imagining Commonly Used Mobile Interfaces for Older Adults. 2014.
@inproceedings{MobileHCI2014Workshop,
author = {Lennon, Marilyn and Hamilton, Greig and Freeman, Euan and Williamson, Julie},
booktitle = {Mobile HCI '14 Workshop on Re-imagining Commonly Used Mobile Interfaces for Older Adults},
publisher = {ACM},
title = {{Rememo: Designing a Multimodal Mobile Reminder App with and for Older Adults}},
year = {2014},
url = {http://olderadultsmobileinterfaces.wordpress.com/},
}
[18]An Exploration of Visual Complexity H. C. Purchase, E. Freeman, and J. Hamer. In Diagrammatic Representation and Inference, 200-213. 2012.
@inproceedings{Diagrams2012,
author = {Purchase, Helen C. and Freeman, Euan and Hamer, John},
booktitle = {Diagrammatic Representation and Inference},
doi = {10.1007/978-3-642-31223-6_22},
isbn = {978-3-642-31222-9},
pages = {200--213},
title = {An Exploration of Visual Complexity},
pdf = {http://www.springerlink.com/index/G647230842J38T43.pdf},
year = {2012},
}
[19]Predicting Visual Complexity H. C. Purchase, E. Freeman, and J. Hamer. In Predicting Perceptions: The 3rd International Conference on Appearance., 62-65. 2012.
@inproceedings{Perceptions2012,
author = {Purchase, Helen C. and Freeman, Euan and Hamer, John},
booktitle = {Predicting Perceptions: The 3rd International Conference on Appearance.},
isbn = {9781471668692},
pages = {62--65},
title = {Predicting Visual Complexity},
pdf = {http://opendepot.org/1060/},
year = {2012},
}
One of my biggest research interests is gesture interaction with mobile devices, also known as around-device interaction because users interact in the space around the device rather than on the device itself. In this post I’m going to give a brief overview of what around-device interaction is, how gestures can be sensed from mobile devices and how these interactions are being realised in commercial devices.
Why Use Around-Device Interaction?
Why would we want to gesture with mobile devices (such as phones or smart watches) anyway? These devices typically have small screens which we interact with in a very limited fashion; using the larger surrounding space lets us interact in more expressive ways and lets the display be utilised fully, rather than our hand occluding content as we reach out to touch the screen. Gestures also let us interact without having to first lift our device, meaning we can interact casually from a short distance. Finally, gesture input is non-contact so we can interact when we would not want to touch the screen, e.g. when preparing food and wanting to navigate a recipe but our hands are messy.
Sensing Around-Device Input
Motivated by the benefits of expressive non-contact input, HCI researchers have developed a variety of approaches for detecting around-device input. Early approaches used infrared proximity sensors, similar to the sensors used in phones to lock the display when we hold our phone to our ear. SideSight (Butler et al. 2008) placed proximity sensors around the edges of a mobile phone, letting users interact in the space beside the phone. HoverFlow (Kratz and Rohs 2009) took a similar approach, although their sensors faced upwards rather than outwards. This let users gesture above the display. Although this meant gesturing occluded the screen, users could interact in 3D space; a limitation of SideSight was that users were more or less restricted to a flat plane around the phone.
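A toy sketch of how gestures might be detected from such a sensor array (my own simplification, not the actual SideSight or HoverFlow pipelines): track which sensor sees the strongest reflection in each frame and report a swipe when that peak moves across the array.

ACTIVATION = 0.3  # assumed: minimum reading that counts as a hand being present

def detect_swipe(frames):
    """frames: list of per-frame sensor readings (each a list of values in 0-1)."""
    peaks = []
    for readings in frames:
        strongest = max(range(len(readings)), key=lambda i: readings[i])
        if readings[strongest] >= ACTIVATION:
            peaks.append(strongest)
    if len(peaks) < 2:
        return None
    if peaks[-1] > peaks[0]:
        return "swipe right"
    if peaks[-1] < peaks[0]:
        return "swipe left"
    return None

frames = [[0.6, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.2, 0.8]]
print(detect_swipe(frames))  # swipe right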
Abracadabra (Harrison and Hudson 2009) used magnetic sensing to detect input around a smart-watch. Users wore a magnetic ring which affected the magnetic field around the device, letting the watch determine finger position and detect gestures. This let users interact with a very small display in a much larger area (an example of what Harrison called “interacting with small devices in a big way” when he gave a presentation to our research group last year) – something today’s smart-watch designers should consider. uTrack (Chen et al. 2013) built on this approach with additional wearable sensors. MagiTact (Ketabdar et al. 2010) used a similar approach to Abracadabra for detecting gestures around mobile phones.
So far we’ve looked at two approaches for detecting around-device input: infrared proximity sensors and magnetic sensors. Researchers have also developed camera-based approaches for detecting around-device input. Most mobile phone cameras can be used to detect around-device gestures within the camera field of view, which can be extended using approaches such as Surround-see (Yang et al. 2013). Surround-see placed an omni-directional lens over the camera, giving the phone a complete view of its surrounding environment. Users could then gesture from even further away (e.g. across the room) because of the complete field of view.
Others have proposed using depth cameras for more accurate camera-based hand tracking. I was excited when Google revealed Project Tango earlier this year because a mobile phone with a depth sensor and processing resources dedicated to computer vision is a step closer to realising this type of interaction. While mobile phones can already detect basic gestures using their magnetic sensors and cameras, depth cameras, in my opinion, would allow more expressive gestures without having to wear anything (e.g. magnetic accessories).
We’re also now seeing low-powered alternative sensing approaches, such as AllSee (Kellogg et al. 2014) which can detect gestures using ambient wireless signals. These approaches could be ideal for wearables which are constrained by small battery sizes. Low-power sensing could also allow always-on gesture sensing; this is currently too demanding with some around-device sensing approaches.
Commercial Examples
I have so far discussed a variety of sensing approaches found in research; this is by no means a comprehensive survey of around-device gesture recognition, although it shows the wide variety of approaches possible and identifies some seminal work in this area. Now I will look at some commercial examples of around-device interfaces to show that there is an interest in moving interaction away from the touchscreen and into the around-device space.
Perhaps the best-known around-device interface is the Samsung Galaxy S4. Samsung included features called Air View and Air Gesture which let users gesture above the display without having to touch it. Users could hover over images in a gallery to see a larger preview and could navigate through a photo album by swiping over the display. A limitation of the Samsung implementation was that users had to be quite close to the display for gestures to be detected – so close that they may as well have used touch input!
Nokia also included an around-device gesture in an update for some of their Lumia phones last year. Users could peek at their notifications by holding their hand over the proximity sensor briefly. While just a single gesture, this let users check their phones easily without unlocking them. With young smartphone users reportedly checking their phones more than thirty times per day (BBC Newsbeat, 4th April 2014), this is a gesture that could get a lot of use!
There are also a number of software libraries which use the front-facing camera to detect gesture input, allowing around-device interaction on typical mobile phones.
Conclusion
In this post we took a quick look at around-device interaction. This is still an active research area and one where we are seeing many interesting developments – especially as researchers are now focusing on issues other than sensing approaches. With smartphone developers showing an interest in this modality, identifying and overcoming interaction challenges is the next big step in around-device interaction research.