I’m at ACM Pervasive Displays this week to present a full paper from the Levitate project. My paper is about using levitating particles as actuated display components for static physical objects. You can read more about this here.
In the paper, I look at how levitating particles can be used as secondary display elements that augment physical objects. Particles can act as dynamic cursors to annotate physical objects (for example, accompanying an audio narration to indicate features of a museum exhibit), as user representations that add interactivity to static objects, or as animated elements that bring otherwise lifeless objects to life.
Earlier this week I was at SICSA DemoFest in Edinburgh, talking about acoustic levitation and some of the work we’ve been doing on the Levitate project.
For more information about Levitate and the awesome work we’ve been doing, follow us on Twitter (@LevitateProj), check out the project website, and see more photos and videos here.
Want to make your own acoustic levitator? You can build a simpler version of our device by following Asier Marzo’s Instructables guide.
Object levitation enables new types of display where the content is created from physical particles in air instead of pixels on a surface. Several technologies have been developed to enable levitation, including magnetic levitation and acoustic levitation. The Levitate project focuses on acoustic levitation, but my Pervasive Displays 2018 paper [1] considers all types of levitation and the novel display capabilities they allow. This synopsis outlines some of those capabilities.
Interacting with Levitating Particle Displays
Voxels in an invisible volume
Levitating objects are held in mid-air, e.g., using sound waves or magnetic forces. This means that, unlike most shape-changing displays, the display elements exist within an invisible volume, which allows new interactions. Multiple users around a levitating particle display can view the content while still seeing each other, potentially improving collaborative interactions. Users can also see surfaces behind and beneath the levitating objects, allowing levitation to augment existing interactive displays.
Reaching into the display
Because of the invisible display volume, it is often possible for users to reach inside the display. With acoustic levitation, this may disrupt the sound waves, but users can still reach in to a certain extent. Other objects may also be placed within a levitating particle display, so long as they are transparent to the levitation forces. With acoustic levitation, this means the objects must allow sound waves to pass through them. Being able to reach into the display and manipulate the display elements, and being able to place objects into the display, could enable new interactions and applications that would not be possible with traditional screens.
Expressive voxels
Whilst levitating objects are typically quite simple (e.g., small polystyrene beads in acoustic levitation systems), they can be moved in ways that allow greater expressivity. For example, in Point-and-Shake [2], we used expressive object movement as a simple but effective means of feedback about selection gestures. The Pervasive Displays paper [1] discusses the expressive potential of levitating particles in more detail.
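To illustrate the idea, here is a minimal sketch of how simple particles could be given expressive motion. The patterns, parameter values, and function names are hypothetical examples, not the implementation from our papers.

import numpy as np

def motion_offset(pattern, t, amplitude=0.004, frequency=3.0):
    """Offset (metres) from a particle's home position at time t for a
    few simple but expressive motion patterns (all values assumed)."""
    phase = 2 * np.pi * frequency * t
    if pattern == "shake":   # side to side, e.g. acknowledging a gesture
        return np.array([amplitude * np.sin(phase), 0.0, 0.0])
    if pattern == "nod":     # bobbing up and down, e.g. confirmation
        return np.array([0.0, 0.0, amplitude * abs(np.sin(phase))])
    if pattern == "orbit":   # circling, e.g. drawing attention
        return amplitude * np.array([np.cos(phase), np.sin(phase), 0.0])
    return np.zeros(3)

# Example: a particle resting at (0, 0, 0.08) m, shaken for feedback.
home = np.array([0.0, 0.0, 0.08])
position = home + motion_offset("shake", t=0.1)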
Colocated non-visual feedback
Acoustic levitation is based on similar techniques to ultrasound haptics and parametric audio. These techniques could potentially be combined to allow colocated non-visual feedback. Levitating particle displays have the potential to be the first display type where audio and haptic feedback can be presented directly within the display volume.
Acknowledgements
This research has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement #737087.
References
[1] Levitating Particle Displays with Interactive Voxels. E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster. In Proceedings of the 7th ACM International Symposium on Pervasive Displays – PerDis ’18, Article 15. 2018.
@inproceedings{PerDis2018,
author = {Freeman, Euan and Williamson, Julie and Kourtelos, Praxitelis and Brewster, Stephen},
booktitle = {{Proceedings of the 7th ACM International Symposium on Pervasive Displays - PerDis '18}},
title = {{Levitating Particle Displays with Interactive Voxels}},
year = {2018},
publisher = {ACM},
pages = {Article 15},
doi = {10.1145/3205873.3205878},
url = {http://euanfreeman.co.uk/levitate/levitating-particle-displays/},
pdf = {http://research.euanfreeman.co.uk/papers/PerDis_2018.pdf},
}
[2] Point-and-Shake: Selecting from Levitating Object Displays. E. Freeman, J. Williamson, S. Subramanian, and S. Brewster. In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.
@inproceedings{CHI2018,
author = {Freeman, Euan and Williamson, Julie and Subramanian, Sriram and Brewster, Stephen},
booktitle = {{Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems - CHI '18}},
title = {{Point-and-Shake: Selecting from Levitating Object Displays}},
year = {2018},
publisher = {ACM},
pages = {Paper 18},
doi = {10.1145/3173574.3173592},
url = {http://euanfreeman.co.uk/levitate/},
video = {https://www.youtube.com/watch?v=j8foZ5gahvQ},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2018.pdf},
data = {https://zenodo.org/record/2541555},
}
Levitating object displays are a novel type of display where content is composed of levitating objects. To find out more about these displays, read my introduction to the Levitate project. This page provides more information and some video demonstrations of the Point-and-Shake interaction technique, described in my CHI 2018 paper [1].
Point-and-Shake
Point-and-Shake is a technique for selecting levitating objects. Selection is an important interaction because it must happen before users can manipulate or otherwise interact with content in a levitating particle display. Users cannot always touch objects directly (e.g., because this disrupts the acoustic levitation forces), so we developed a mid-air gesture technique. All feedback is given through the appearance and behaviour of the levitating objects.
We used ray-cast pointing, which lets users select “that one there” by pointing an extended finger towards the target object. Feedback is important to help users understand how the system is interpreting their actions. The only visual elements in a levitating object display are the objects themselves, so we manipulate the appearance of the objects to give feedback: when the user targets an object, we shake it from side to side. Point-and-Shake is thus the combination of pointing gestures with object shaking as feedback. The following video demonstrates this.
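As a rough illustration of how this could work, the following sketch finds the object closest to the pointing ray and shakes it when the pointing angle falls within a threshold. The threshold, shake parameters, and function names are assumptions for illustration; this is not the code from the paper.

import numpy as np

SELECT_ANGLE = np.radians(5.0)   # hypothetical targeting threshold
SHAKE_AMPLITUDE = 0.004          # metres, side to side (assumed)
SHAKE_FREQUENCY = 4.0            # Hz (assumed)

def pointing_angle(origin, direction, target):
    """Angle between the pointing ray and the finger-to-target line."""
    to_target = (target - origin) / np.linalg.norm(target - origin)
    direction = direction / np.linalg.norm(direction)
    return np.arccos(np.clip(np.dot(direction, to_target), -1.0, 1.0))

def update_positions(origin, direction, homes, t):
    """Return new particle positions, shaking the targeted one."""
    angles = [pointing_angle(origin, direction, h) for h in homes]
    best = int(np.argmin(angles))
    positions = [h.copy() for h in homes]
    if angles[best] < SELECT_ANGLE:
        # Shake the targeted particle side to side as selection feedback.
        offset = SHAKE_AMPLITUDE * np.sin(2 * np.pi * SHAKE_FREQUENCY * t)
        positions[best][0] += offset
    return positions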
Selecting Occluded Objects
A limitation of ray-cast pointing is that users might have trouble selecting occluded objects, i.e., objects hidden behind others. This is because the user cannot point directly at such an object without first pointing at others in the way. We implemented two versions of the Lock Ray technique (Grossman et al.) to allow selection of occluded objects. This breaks selection into two stages: 1) aiming towards the intended target; then 2) disambiguating the selection. The following video demonstrates this.
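The sketch below shows one plausible way to structure the two stages: first collect every object near the pointing ray, then, once the ray is locked, pick among the candidates, here by mapping hand depth onto the ordered list. The radius, depth mapping, and names are assumptions, not either of the versions we built.

import numpy as np

def ray_candidates(origin, direction, objects, radius=0.01):
    """Stage 1: indices of objects within `radius` metres of the
    pointing ray, ordered nearest-first along the ray."""
    direction = direction / np.linalg.norm(direction)
    hits = []
    for i, obj in enumerate(objects):
        along = np.dot(obj - origin, direction)   # depth along the ray
        if along <= 0:
            continue   # behind the finger
        off_ray = np.linalg.norm(obj - (origin + along * direction))
        if off_ray <= radius:
            hits.append((along, i))
    return [i for _, i in sorted(hits)]

def disambiguate(candidates, hand_depth, min_depth, max_depth):
    """Stage 2: once the ray is locked, map hand depth onto the
    candidate list to pick one (one of several possible mappings)."""
    if not candidates:
        return None
    frac = np.clip((hand_depth - min_depth) / (max_depth - min_depth), 0.0, 1.0)
    return candidates[min(int(frac * len(candidates)), len(candidates) - 1)]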
Evaluation
We evaluated Point-and-Shake through two user studies, in which we asked users to select one of two objects using our technique. Object shaking was a successful way of giving feedback about selection: users completed 94-96% of tasks successfully within the given time limits, with mean selection times of 3-4 seconds. Detailed results are described in my CHI 2018 paper [1].
Acknowledgements
This research has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement #737087.
References
[1] Point-and-Shake: Selecting from Levitating Object Displays. E. Freeman, J. Williamson, S. Subramanian, and S. Brewster. In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.
@inproceedings{CHI2018,
author = {Freeman, Euan and Williamson, Julie and Subramanian, Sriram and Brewster, Stephen},
booktitle = {{Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems - CHI '18}},
title = {{Point-and-Shake: Selecting from Levitating Object Displays}},
year = {2018},
publisher = {ACM},
pages = {Paper 18},
doi = {10.1145/3173574.3173592},
url = {http://euanfreeman.co.uk/levitate/},
video = {https://www.youtube.com/watch?v=j8foZ5gahvQ},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2018.pdf},
data = {https://zenodo.org/record/2541555},
}
I’m going to be at CHI in Montreal next week to present my full paper, “Point-and-Shake: Selecting from Levitating Object Displays”. Mine is the last talk in the Input: Targets and Selection session (Thursday 26th April, 9am, Room 517C). Come along to hear about interaction with levitating objects! To find out more, read about the Levitate project and Point-and-Shake.
I’m also participating in the Mid-Air Haptics for Control Interfaces workshop, run by Ultrahaptics. In the workshop, I’m co-chairing a session with Seokhee Jeon from Kyung Hee University, focusing on the perception of mid-air haptics.
Finally, I’m also going to be chairing the Typing & Touch 2 papers session (Thursday 26th April, 2pm, Room 514B), which has four interesting papers on touchscreen interaction and haptic feedback.
I’ve had a paper accepted by ACM ICMI 2017 titled “Rhythmic Micro-Gestures: Discreet Interaction On-the-Go” [1]. The paper introduces rhythmic micro-gestures, a new technique for discreet interaction with mobile devices. It combines rhythmic gestures, an input technique from my CHI 2016 paper, with micro-gestures: small hand movements that can be performed discreetly. I’ll be giving a talk about the paper at the conference in Glasgow in November.
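To give a flavour of the idea, here is a small sketch of how a recogniser might check that successive micro-gesture events follow a rhythmic pattern. The interval pattern, tolerance, and names are illustrative assumptions, not the recogniser from the paper.

def matches_rhythm(event_times, expected_intervals, tolerance=0.2):
    """Check whether micro-gesture events (timestamps in seconds)
    follow a rhythm given as expected inter-event intervals.
    tolerance is the allowed fractional error per interval (assumed)."""
    if len(event_times) != len(expected_intervals) + 1:
        return False
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    return all(abs(actual - expected) <= tolerance * expected
               for actual, expected in zip(intervals, expected_intervals))

# Example: three taps roughly 0.5 s apart match a steady two-beat rhythm.
print(matches_rhythm([0.00, 0.52, 1.01], [0.5, 0.5]))  # True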
We’ve also had a demo accepted by ACM ISS 2017 from the Levitate project [2]. That demo gives attendees the chance to try interacting with mid-air objects, suspended in air by acoustic levitation.
[1] Rhythmic Micro-Gestures: Discreet Interaction On-the-Go. E. Freeman, G. Griffiths, and S. Brewster. In Proceedings of the 19th ACM International Conference on Multimodal Interaction – ICMI ’17, 115-119. 2017.
@inproceedings{ICMI2017,
author = {Freeman, Euan and Griffiths, Gareth and Brewster, Stephen},
booktitle = {{Proceedings of 19th ACM International Conference on Multimodal Interaction - ICMI '17}},
title = {{Rhythmic Micro-Gestures: Discreet Interaction On-the-Go}},
year = {2017},
publisher = {ACM},
pages = {115--119},
doi = {10.1145/3136755.3136815},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2017.pdf},
}
[2] Floating Widgets: Interaction with Acoustically-Levitated Widgets. E. Freeman, R. Anderson, C. Andersson, J. Williamson, and S. Brewster. In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces – ISS ’17 Demos, 417-420. 2017.
@inproceedings{ISS2017Demo,
author = {Freeman, Euan and Anderson, Ross and Andersson, Carl and Williamson, Julie and Brewster, Stephen},
booktitle = {{Proceedings of ACM International Conference on Interactive Surfaces and Spaces - ISS '17 Demos}},
title = {{Floating Widgets: Interaction with Acoustically-Levitated Widgets}},
year = {2017},
publisher = {ACM},
pages = {417--420},
doi = {10.1145/3132272.3132294},
pdf = {http://research.euanfreeman.co.uk/papers/ISS_2017_Demo.pdf},
}
The Levitate project is investigating a novel type of display where interactive content is composed of levitating objects. We use ultrasound to levitate and move “particles” in mid-air. These acoustically-levitated particles are the basic display elements in a levitating object display.
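To give a sense of how this works: each transducer in a phased array is driven with a phase delay chosen so that the sound waves combine into an acoustic trap at a desired point. The sketch below uses the standard focusing relationship plus the twin-trap phase signature described by Asier Marzo and colleagues; the array geometry and names are illustrative, and this is not the project's actual solver.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz; 40 kHz transducers are common in levitators
WAVENUMBER = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND

def focus_phases(transducers, point):
    """Per-transducer phase so all waves arrive in phase at `point`."""
    distances = np.linalg.norm(transducers - point, axis=1)
    return (-WAVENUMBER * distances) % (2 * np.pi)

def twin_trap_phases(transducers, point):
    """Add a pi phase step across half the array, turning the focus
    into a twin trap whose quiet node can hold a light particle."""
    phases = focus_phases(transducers, point)
    phases[transducers[:, 0] < point[0]] += np.pi
    return phases % (2 * np.pi)

# Example: an 8x8 array on a 1 cm pitch, trapping 5 cm above its centre.
grid = (np.arange(8) - 3.5) * 0.01
xs, ys = np.meshgrid(grid, grid)
array_positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(64)])
phases = twin_trap_phases(array_positions, np.array([0.0, 0.0, 0.05]))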
Levitating object displays have some interesting properties:
Content is composed of physical objects, actuated by sound waves: an invisible medium with no mechanical constraints;
Users can see through the display volume, so they can view and interact with content from many angles;
They can interact inside the display volume, using the space around and between the particles to manipulate content;
Non-interactive objects and materials can be combined with levitation, enabling new interactive experiences;
Mid-air haptics and directional audio can be presented using similar acoustic principles and hardware, for multimodal content in a single display.
Levitating object displays enable new interaction techniques and applications, allowing users to interact with content in new ways. One of the aims of the Levitate project is to explore new applications and develop novel techniques for interacting with levitating objects. Whilst we focus on acoustic levitation, these techniques are also relevant to other types of mid-air display (e.g., using magnetic levitation, drones, AR headsets).
What does levitation look like?
Levitating object displays
These are a novel type of display, where content is composed of levitating objects. With the current state-of-the-art, up to a dozen objects can be levitated and moved independently. Advances in ultrasound field synthesis and improved hardware will enable these displays to support many more display elements. Our project is also advancing the state-of-the-art in acoustics to allow this.
Levitating objects could represent data through their positions in mid-air, creating physical 3D representations of data (i.e., physical visualisations). Acoustic levitation is dynamic, so object movement can be used in meaningful ways to create dynamic data displays: for example, using the levitating particles to show simulations of astronomical and physical data.
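As a simple sketch of that mapping, the following normalises 3D data points into a levitator's working volume, one particle per point. The volume bounds and names are hypothetical.

import numpy as np

# Hypothetical working volume of a levitator, in metres.
VOLUME_MIN = np.array([-0.05, -0.05, 0.02])
VOLUME_MAX = np.array([0.05, 0.05, 0.12])

def data_to_positions(points):
    """Linearly map an n x 3 array of data points into the volume."""
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against zero range
    unit = (points - lo) / span
    return VOLUME_MIN + unit * (VOLUME_MAX - VOLUME_MIN)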
As the Levitate project progresses, we’ll be creating new interactive demonstrators to showcase some novel applications enabled by this technology. See Levitating Particle Displays for a summary of our Pervasive Displays 2018 paper outlining the display capabilities enabled by levitation.
Levitation Around Physical Objects
My work has been exploring alternative ways of producing content using levitating particles. Instead of creating composite objects from several particles (like the cube shown earlier), we can also levitate particles in the space around other physical objects.
This display concept offers new possibilities for content creation. Interactivity can be added to otherwise non-interactive objects, because the levitating particles can be actuated without having to modify or instrument the original object. There are no mechanical constraints, so the levitating particles can be used to create dynamic visual content in the space surrounding the objects. Despite their simple appearance, the particles can be used in expressive ways, because their context adds meaning to their position and behaviour.
One of my aims on this project is to develop new interaction techniques and applications based on levitation. We started by focusing on object selection, since this is a fundamental interaction. Selection is necessary for other actions to take place (e.g., moving an object or querying its value).
We developed a technique called Point-and-Shake, which combines ray-cast pointing input with object shaking as a feedback mechanism. Basically, users point at the object they want to select and it shakes to give feedback. You can read more about this technique here. A paper describing our selection technique was published at CHI 2018. The following video includes a demonstration of Point-and-Shake:
Sound and haptics
Another aim of the project is to develop multimodal interactions using sound and haptics, to enhance the experience of interacting with the levitating objects. This is a multidisciplinary project involving experts in acoustics, HCI, and ultrasound haptics. The hardware and acoustic techniques used for acoustic levitation are similar to those used for ultrasound haptics and directional audio, so the project aims to combine these modalities to create new multimodal experiences.
This research has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement #737087.
Publications
Enhancing Physical Objects with Actuated Levitating Particles. E. Freeman, A. Marzo, P. B. Kourtelos, J. R. Williamson, and S. Brewster. In Proceedings of the 8th ACM International Symposium on Pervasive Displays – PerDis ’19, Article 2. 2019.
@inproceedings{PerDis2019,
author = {Freeman, Euan and Marzo, Asier and Kourtelos, Praxitelis B. and Williamson, Julie R. and Brewster, Stephen},
booktitle = {{Proceedings of the 8th ACM International Symposium on Pervasive Displays - PerDis '19}},
title = {{Enhancing Physical Objects with Actuated Levitating Particles}},
year = {2019},
publisher = {ACM},
pages = {Article 2},
url = {http://euanfreeman.co.uk/levitate/enhancing-physical-objects-with-actuated-levitating-particles/},
pdf = {http://research.euanfreeman.co.uk/papers/PerDis_2019.pdf},
doi = {10.1145/3321335.3324939},
video = {https://www.youtube.com/watch?v=5vZwTvfWZgo},
}
Three-in-one: Levitation, Parametric Audio, and Mid-Air Haptic Feedback. G. Shakeri, E. Freeman, W. Frier, M. Iodice, B. Long, O. Georgiou, and C. Andersson. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems – CHI EA ’19, Paper INT006. 2019.
@inproceedings{CHI2019Demo1,
author = {Shakeri, G\"{o}zel and Freeman, Euan and Frier, William and Iodice, Michele and Long, Benjamin and Georgiou, Orestis and Andersson, Carl},
booktitle = {{Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems - CHI EA '19}},
title = {{Three-in-one: Levitation, Parametric Audio, and Mid-Air Haptic Feedback}},
year = {2019},
publisher = {ACM},
pages = {Paper INT006},
doi = {10.1145/3290607.3313264},
pdf = {https://dl.acm.org/authorize?N673841},
}
Tangible Interactions with Acoustic Levitation. A. Marzo, S. Kockaya, E. Freeman, and J. Williamson. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems – CHI EA ’19, Paper INT005. 2019.
@inproceedings{CHI2019Demo2,
author = {Marzo, Asier and Kockaya, Steven and Freeman, Euan and Williamson, Julie},
booktitle = {{Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems - CHI EA '19}},
title = {{Tangible Interactions with Acoustic Levitation}},
year = {2019},
publisher = {ACM},
pages = {Paper INT005},
doi = {10.1145/3290607.3313265},
pdf = {https://dl.acm.org/authorize?N673840},
}
Levitating Particle Displays with Interactive Voxels. E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster. In Proceedings of the 7th ACM International Symposium on Pervasive Displays – PerDis ’18, Article 15. 2018.
@inproceedings{PerDis2018,
author = {Freeman, Euan and Williamson, Julie and Kourtelos, Praxitelis and Brewster, Stephen},
booktitle = {{Proceedings of the 7th ACM International Symposium on Pervasive Displays - PerDis '18}},
title = {{Levitating Particle Displays with Interactive Voxels}},
year = {2018},
publisher = {ACM},
pages = {Article 15},
doi = {10.1145/3205873.3205878},
url = {http://euanfreeman.co.uk/levitate/levitating-particle-displays/},
pdf = {http://research.euanfreeman.co.uk/papers/PerDis_2018.pdf},
}
Point-and-Shake: Selecting from Levitating Object Displays. E. Freeman, J. Williamson, S. Subramanian, and S. Brewster. In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.
@inproceedings{CHI2018,
author = {Freeman, Euan and Williamson, Julie and Subramanian, Sriram and Brewster, Stephen},
booktitle = {{Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems - CHI '18}},
title = {{Point-and-Shake: Selecting from Levitating Object Displays}},
year = {2018},
publisher = {ACM},
pages = {Paper 18},
doi = {10.1145/3173574.3173592},
url = {http://euanfreeman.co.uk/levitate/},
video = {https://www.youtube.com/watch?v=j8foZ5gahvQ},
pdf = {http://research.euanfreeman.co.uk/papers/CHI_2018.pdf},
data = {https://zenodo.org/record/2541555},
}
Textured Surfaces for Ultrasound Haptic Displays. E. Freeman, R. Anderson, J. Williamson, G. Wilson, and S. Brewster. In Proceedings of the 19th ACM International Conference on Multimodal Interaction – ICMI ’17 Demos, 491-492. 2017.
@inproceedings{ICMI2017Demo,
author = {Freeman, Euan and Anderson, Ross and Williamson, Julie and Wilson, Graham and Brewster, Stephen},
booktitle = {{Proceedings of 19th ACM International Conference on Multimodal Interaction - ICMI '17 Demos}},
title = {{Textured Surfaces for Ultrasound Haptic Displays}},
year = {2017},
publisher = {ACM},
pages = {491--492},
doi = {10.1145/3136755.3143020},
pdf = {http://research.euanfreeman.co.uk/papers/ICMI_2017_Demo.pdf},
}
Floating Widgets: Interaction with Acoustically-Levitated Widgets. E. Freeman, R. Anderson, C. Andersson, J. Williamson, and S. Brewster. In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces – ISS ’17 Demos, 417-420. 2017.
@inproceedings{ISS2017Demo,
author = {Freeman, Euan and Anderson, Ross and Andersson, Carl and Williamson, Julie and Brewster, Stephen},
booktitle = {{Proceedings of ACM International Conference on Interactive Surfaces and Spaces - ISS '17 Demos}},
title = {{Floating Widgets: Interaction with Acoustically-Levitated Widgets}},
year = {2017},
publisher = {ACM},
pages = {417--420},
doi = {10.1145/3132272.3132294},
pdf = {http://research.euanfreeman.co.uk/papers/ISS_2017_Demo.pdf},
}
Levitate: Interaction with Floating Particle Displays. J. R. Williamson, E. Freeman, and S. Brewster. In Proceedings of the 6th ACM International Symposium on Pervasive Displays – PerDis ’17 Demos, Article 24. 2017.
@inproceedings{PerDis2017Demo,
author = {Williamson, Julie R. and Freeman, Euan and Brewster, Stephen},
booktitle = {{Proceedings of the 6th ACM International Symposium on Pervasive Displays - PerDis '17 Demos}},
title = {{Levitate: Interaction with Floating Particle Displays}},
year = {2017},
publisher = {ACM},
pages = {Article 24},
doi = {10.1145/3078810.3084347},
url = {http://dl.acm.org/citation.cfm?id=3084347},
pdf = {http://research.euanfreeman.co.uk/papers/PerDis_2017.pdf},
}