SICSA DemoFest 2018

Earlier this week I was at SICSA DemoFest in Edinburgh, talking about acoustic levitation and some of the work we’ve been doing on the Levitate project.

For more information about Levitate and the awesome work we’ve been doing, follow us on Twitter (@LevitateProj), check out the project website, and see more photos and videos here.

Want to make your own acoustic levitator? You can build a simpler version of our device by following Asier Marzo’s Instructables guide.

Image used in my demo presentation. It illustrates a standing wave and briefly explains how acoustic levitation works: small objects can be trapped at low-pressure points between the high-amplitude areas of a standing sound wave, and can be moved in mid-air by moving the sound field.
A close-up photo of our demo booth at the event. We demonstrated a levitation device with a polystyrene bead being moved in a variety of patterns in mid-air.

Levitating Particle Displays

Introduction

Object levitation enables new types of display where the content is created from physical particles in air instead of pixels on a surface. Several technologies have been developed to enable levitation, including magnetic levitation and acoustic levitation. The Levitate project focuses on acoustic levitation, but my Pervasive Displays 2018 paper [1] considers all types of levitation and the novel display capabilities they allow. This synopsis outlines some of those capabilities.

Interacting with Levitating Particle Displays

Voxels in an invisible volume

Levitating objects are held in mid-air, e.g., using sound waves or magnetic forces. This means that unlike most shape-changing displays, the display elements exist within an invisible volume. An invisible volume can allow new interactions. Multiple users around a levitating particle display can view the content but also see each other, potentially improving collaborative interactions. Users can also see surfaces behind and beneath the levitating objects, allowing levitation to augment existing interactive displays.

Reaching into the display

Because of the invisible display volume, it is often possible for users to reach inside the display. With acoustic levitation, this may disrupt the sound waves, but users can still reach in to a certain extent. Other objects may also be placed within a levitating particle display, so long as they are transparent to the levitation forces. With acoustic levitation, this means the objects must allow sound waves to pass through them. Being able to reach into the display and manipulate the display elements, and being able to place objects into the display, could enable new interactions and applications that would not be possible with traditional screens.

Expressive voxels

Whilst levitating objects are typically quite simple (e.g., small polystyrene beads in acoustic levitation systems), they can be manipulated in order to allow greater expressivity. For example, in Point-and-Shake [2], we used expressive object movement as a simple but effective means of feedback about selection gestures. In the Pervasive Displays paper [1] we discuss more about the expressive potential of levitating particles.

Colocated non-visual feedback

Acoustic levitation is based on similar techniques to ultrasound haptics and parametric audio. These techniques could potentially be combined to allow colocated non-visual feedback. Levitating particle displays have the potential to be the first display type where audio and haptic feedback can be presented directly within the display volume.

Acknowledgements

This research has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement #737087.

References

[1] Levitating Particle Displays with Interactive Voxels
E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster.
In Proceedings of the 7th ACM International Symposium on Pervasive Displays – PerDis ’18, Article 15. 2018.

[2] Point-and-Shake: Selecting from Levitating Object Displays
E. Freeman, J. Williamson, S. Subramanian, and S. Brewster.
In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.

Point-and-Shake: Selecting Levitating Objects

Introduction

Levitating object displays are a novel type of display where content is composed of levitating objects. To find out more about these displays, read my introduction to the Levitate project. This page provides more information and some video demonstrations of the Point-and-Shake interaction technique, described in my CHI 2018 paper [1].

Point-and-Shake

Point-and-Shake is a technique for selecting levitating objects. Selection is a fundamental interaction, because it must happen before users can manipulate or otherwise interact with content in a levitating particle display. Users cannot always touch objects directly (e.g., because doing so disrupts the acoustic levitation forces), so we developed a mid-air gesture technique instead. All feedback is given through the appearance and behaviour of the levitating objects.

We used ray-cast pointing, enabling users to select “that one there” by pointing an extended finger towards the target object. Feedback is important for helping users understand how the system is interpreting their actions. The only visual elements in a levitating object display are the objects themselves, so we manipulate their appearance to give feedback: when the user targets an object, we shake it from side to side. Point-and-Shake is thus the combination of pointing gestures with object shaking as feedback. The following video demonstrates this.
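As an illustrative sketch only (not the implementation from the paper, and all names here are hypothetical), ray-cast targeting can be modelled as finding the object whose position lies closest in angle to the pointing ray, within a small angular threshold, with a sinusoidal lateral offset driving the shaking feedback:

```python
import math

def ray_cast_select(origin, direction, objects, max_angle_deg=5.0):
    """Pick the object closest in angle to the pointing ray, or None.

    origin, direction: 3D tuples; objects: list of 3D position tuples.
    """
    dmag = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / dmag for c in direction)
    best, best_angle = None, math.radians(max_angle_deg)
    for i, pos in enumerate(objects):
        v = tuple(p - o for p, o in zip(pos, origin))
        vmag = math.sqrt(sum(c * c for c in v))
        if vmag == 0:
            continue
        # Angle between the pointing ray and the direction to the object.
        cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(d, v)) / vmag))
        angle = math.acos(cos_a)
        if angle < best_angle:
            best, best_angle = i, angle
    return best

def shake_offset(t, amplitude_m=0.002, freq_hz=5.0):
    """Lateral displacement of a targeted bead at time t (seconds)."""
    return amplitude_m * math.sin(2 * math.pi * freq_hz * t)
```

The angular threshold, shake amplitude, and shake frequency are placeholder values, not the parameters used in our system.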

Selecting Occluded Objects

A limitation of ray-cast pointing is that users may have trouble selecting occluded objects: i.e., an object hidden behind another. This is because the user cannot point directly at the object without first pointing at others in the way. We implemented two versions of the Lock Ray technique (Grossman et al.) to allow selection of occluded objects. This breaks selection into two stages: 1) aiming towards the intended target; then 2) disambiguating the selection. The following video demonstrates this.
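The two-stage flow can be sketched as a small state holder (a hypothetical sketch, not the implementation from the paper or from Grossman et al.):

```python
class LockRaySelector:
    """Sketch of a two-stage Lock Ray selection flow (names hypothetical)."""

    def __init__(self):
        self.candidates = []  # objects along the locked ray, nearest first

    def aim(self, ray_hits):
        # Stage 1: the user aims; lock the ray and record every object
        # it passes through, nearest first.
        self.candidates = list(ray_hits)

    def disambiguate(self, depth_step):
        # Stage 2: the user steps through the candidates (e.g. by moving
        # their hand along the locked ray) to reach an occluded object
        # behind the first hit.
        if not self.candidates:
            return None
        idx = max(0, min(depth_step, len(self.candidates) - 1))
        return self.candidates[idx]
```

Separating aiming from disambiguation means the first, coarse gesture never has to single out a hidden object on its own.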

Evaluation

We evaluated Point-and-Shake through two user studies. In these studies, we asked users to select one of two objects using our technique. Object shaking was a successful way of giving feedback about selection. Users completed 94-96% of tasks successfully, within the given task time limits. The mean selection times were 3-4 seconds. Detailed results are described in my CHI 2018 paper [1].

Acknowledgements

This research has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement #737087.

References

[1] Point-and-Shake: Selecting from Levitating Object Displays
E. Freeman, J. Williamson, S. Subramanian, and S. Brewster.
In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.

CHI 2018

CHI 2018 conference logo

I’m going to be at CHI in Montreal next week to present my full paper, “Point-and-Shake: Selecting from Levitating Object Displays”. I’m giving the last talk in the Input: Targets and Selection session (Thursday 26th April, 9am, Room 517C). Come along to hear about interaction with levitating objects! To find out more, read about the Levitate project and Point-and-Shake.

I’m also participating in the Mid-Air Haptics for Control Interfaces workshop, run by Ultrahaptics. In the workshop, I’m co-chairing a session with Seokhee Jeon from Kyung Hee University, focusing on the perception of mid-air haptics.

Finally, I’m also going to be chairing the Typing & Touch 2 papers session (Thursday 26th April, 2pm, Room 514B), which has four interesting papers on touchscreen interaction and haptic feedback.

ICMI ’17 Paper & ISS ’17 Demo

I’ve had a paper accepted by ACM ICMI 2017 titled “Rhythmic Micro-Gestures: Discreet Interaction On-the-Go” [1]. The paper is about rhythmic micro-gestures, a new technique for interacting with mobile devices. It combines rhythmic gestures, an input technique from my CHI 2016 paper, with the concept of micro-gestures: small hand movements that can be performed discreetly. I’ll be giving a talk about this paper at the conference in Glasgow in November.

We’ve also had a demo accepted by ACM ISS 2017 from the Levitate project [2]. That demo gives attendees the chance to try interacting with mid-air objects, suspended in air by acoustic levitation.

[1] Rhythmic Micro-Gestures: Discreet Interaction On-the-Go
E. Freeman, G. Griffiths, and S. Brewster.
In Proceedings of the 19th ACM International Conference on Multimodal Interaction – ICMI ’17, pp. 115-119. 2017.

[2] Floating Widgets: Interaction with Acoustically-Levitated Widgets
E. Freeman, R. Anderson, C. Andersson, J. Williamson, and S. Brewster.
In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces – ISS ’17 Demos, pp. 417-420. 2017.

Levitate

The Levitate project is investigating a novel type of display where interactive content is composed of levitating objects. We use ultrasound to levitate and move “particles” in mid-air. These acoustically-levitated particles are the basic display elements in a levitating object display.

Levitating object displays have some interesting properties:

  • Content is composed of physical objects, actuated without mechanical constraints by invisible sound waves;
  • Users can see through the content to view and interact with it from many angles;
  • They can interact inside the display volume, using the space around and between the particles to manipulate and interact with content;
  • Non-interactive objects and materials can be combined with levitation, enabling new interactive experiences;
  • Mid-air haptics and directional audio can be presented using similar acoustic principles and hardware, for multimodal content in a single display.

Levitating object displays enable new interaction techniques and applications, allowing users to interact with content in new ways. One of the aims of the Levitate project is to explore new applications and develop novel techniques for interacting with levitating objects. Whilst we focus on acoustic levitation, these techniques are also relevant to other types of mid-air display (e.g., using magnetic levitation, drones, AR headsets).

What does levitation look like?

Eight beads levitating in air, inside an acoustic levitation device. The beads are arranged so that one is positioned in each corner of a cube.
Eight polystyrene beads, levitating inside an acoustic levitator. Standing sound waves hold the objects in mid-air. The objects can be repositioned by manipulating the sound field.
Two small beads inside an acoustic levitation device.
Two small objects are held in air by ultrasound from arrays of small transducers within the device. The objects are trapped in low-pressure regions of standing sound waves.
A levitating bead above a smartphone screen.
A levitating bead above a smartphone screen, held in air by two small ultrasound arrays.
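As a back-of-the-envelope aside (standard acoustics, not a measurement from our device): the low-pressure traps in a standing wave sit half a wavelength apart. Assuming typical 40 kHz ultrasonic transducers and a speed of sound in air of about 343 m/s:

```latex
\lambda = \frac{c}{f} \approx \frac{343~\mathrm{m/s}}{40~\mathrm{kHz}} \approx 8.6~\mathrm{mm},
\qquad \text{trap spacing} = \frac{\lambda}{2} \approx 4.3~\mathrm{mm}
```

This is why the levitated particles are small: each bead must fit comfortably inside a trap only a few millimetres across.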

Levitating object displays

These are a novel type of display where content is composed of levitating objects. With the current state of the art, up to a dozen objects can be levitated and moved independently. Advances in ultrasound field synthesis and improved hardware will enable these displays to support many more display elements; our project is also advancing the state of the art in acoustics to allow this.

Levitating objects could represent data through their positions in mid-air and could be used to create physical 3D representations of data (i.e., physical visualisations). Acoustic levitation is dynamic, so object movement can be used in meaningful ways to create dynamic data displays: for example, levitating particles could animate simulations of astronomical and physical data.

As the Levitate project progresses, we’ll be creating new interactive demonstrators to showcase some novel applications enabled by this technology. See Levitating Particle Displays for a summary of our Pervasive Displays 2018 paper outlining the display capabilities enabled by levitation.

Levitation Around Physical Objects

My work has been exploring alternative ways of producing content using levitating particles. Instead of creating composite objects composed of several particles (like the cube shown before), we can also levitate particles in the space around other physical objects.

Two beads levitating above a physical model of a mountain.
Two small objects levitating above a single ultrasound array. The mountain is a tangible model made from an acoustically transparent material, so sound waves pass through it and levitation can be used around other physical objects.

This display concept offers new possibilities for content creation. Interactivity can be added to otherwise non-interactive objects, because the levitating particles can be actuated without having to modify or instrument the original object. There are no mechanical constraints, so the levitating particles can be used to create dynamic visual content in the space surrounding the objects. Despite their simple appearance, the particles can be used in expressive ways, because their context adds meaning to their position and behaviour.

A model of a volcano with five levitating beads above it. The beads are positioned to look like ash being ejected from the top of the volcano.
Five levitating beads above a physical model of a volcano. The volcano is a 3D model made from acoustically transparent materials. The levitating particles can be animated, representing the ash exploding from the volcano.

This display concept is discussed in a full paper accepted to Pervasive Displays 2019. See Enhancing Physical Objects with Actuated Levitating Particles for more about my work in this area.

Interacting with levitating objects

One of my aims on this project is to develop new interaction techniques and applications based on levitation. We started by focusing on object selection, since this is a fundamental interaction. Selection is necessary for other actions to take place (e.g., moving an object or querying its value).

We developed a technique called Point-and-Shake, which combines ray-cast pointing input with object shaking as a feedback mechanism. Basically, users point at the object they want to select and it shakes to give feedback. You can read more about this technique here. A paper describing our selection technique was published at CHI 2018. The following video includes a demonstration of Point-and-Shake:

Sound and haptics

Another aim of the project is to develop multimodal interactions using sound and haptics, to enhance the experience of interacting with the levitating objects. This is a multidisciplinary project involving experts in acoustics, HCI, and ultrasound haptics. The hardware and acoustic techniques used for acoustic levitation are similar to those used for ultrasound haptics and directional audio, so the project aims to combine these modalities to create new multimodal experiences.

Highlights

  • Apr 2019: Full paper accepted for IEEE World Haptics ’19.
  • Mar 2019: Full paper accepted for ACM Pervasive Displays ’19.
  • Feb 2019: Two demos accepted for CHI ’19.
  • Nov 2018: Demo at SICSA DemoFest ’18.
  • Jun 2018: Full paper presentation at Pervasive Displays ’18.
  • Apr 2018: Full paper presentation at CHI ’18.
  • Mar 2018: Full paper accepted for ACM Pervasive Displays ’18.
  • Dec 2017: Full paper accepted to CHI ’18.
  • Nov 2017: Demo at ACM ICMI ’17.
  • Oct 2017: Demos at SICSA DemoFest ’17 and ACM ISS ’17.
  • Jun 2017: Demo at Pervasive Displays ’17.
  • Jan 2017: Start of project.

Acknowledgements

This research has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement #737087.

Publications

    Enhancing Physical Objects with Actuated Levitating Particles
    E. Freeman, A. Marzo, P. B. Kourtelos, J. R. Williamson, and S. Brewster.
    In Proceedings of the 8th ACM International Symposium on Pervasive Displays – PerDis ’19, Paper 24. 2019.

    Three-in-one: Levitation, Parametric Audio, and Mid-Air Haptic Feedback
    G. Shakeri, E. Freeman, W. Frier, M. Iodice, B. Long, O. Georgiou, and C. Andersson.
In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems – CHI EA ’19, Paper INT006. 2019.

    Tangible Interactions with Acoustic Levitation
    A. Marzo, S. Kockaya, E. Freeman, and J. Williamson.
In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems – CHI EA ’19, Paper INT005. 2019.

    Levitating Particle Displays with Interactive Voxels
    E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster.
    In Proceedings of the 7th ACM International Symposium on Pervasive Displays – PerDis ’18, Article 15. 2018.

    Point-and-Shake: Selecting from Levitating Object Displays
    E. Freeman, J. Williamson, S. Subramanian, and S. Brewster.
    In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems – CHI ’18, Paper 18. 2018.

    Textured Surfaces for Ultrasound Haptic Displays
    E. Freeman, R. Anderson, J. Williamson, G. Wilson, and S. Brewster.
In Proceedings of the 19th ACM International Conference on Multimodal Interaction – ICMI ’17 Demos, pp. 491-492. 2017.

    Floating Widgets: Interaction with Acoustically-Levitated Widgets
    E. Freeman, R. Anderson, C. Andersson, J. Williamson, and S. Brewster.
In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces – ISS ’17 Demos, pp. 417-420. 2017.

    Levitate: Interaction with Floating Particle Displays
    J. R. Williamson, E. Freeman, and S. Brewster.
    In Proceedings of the 6th ACM International Symposium on Pervasive Displays – PerDis ’17 Demos, Article 24. 2017.