IEEE World Haptics 2019

Last week I was at the IEEE World Haptics Conference in Tokyo, Japan, where I presented my full paper on HaptiGlow, a new system and feedback technique for helping users find a good hand position for mid-air interaction.

Photo of the HaptiGlow system. An Ultrahaptics UHEV1 device with a strip of LEDs around the front edge and left and right sides. The LEDs are green, indicating that the user has their hand in a good position.

I brought a demo to give during the interactive presentation session, which seemed to go well. The demo was self-motivating: most people instinctively approach mid-air haptic devices in a poor way, so the demo session immediately highlighted the need for feedback that helps users improve their hand position.

Unsurprisingly, the person who needed the feedback the least was Hiroyuki Shinoda, whose lab has done some of the most important work on ultrasound haptic feedback. For most other attendees, however, I think this demo was a compelling way of showing the need for more work that helps users understand how to get the most out of these devices.

Some thoughts about the rest of the conference:

- There was a huge presence from Facebook Reality Labs, so it’ll be interesting to see how large-scale industry involvement shapes the next couple of years of haptics research.
- Wrist-based haptics seemed a popular topic, especially squeezing the wrist.
- The variety of haptic devices for VR continues to grow, including haptic shoes.
- Rich passive haptics and material properties are clearly important to industry, a complement to the dynamic digital haptics that tend to dominate the conference proceedings.
- Finally, there are lots of technology-focused contributions and lots of perception-focused contributions; why aren’t these sub-communities working together as much as they could be?

Creating Tactons in Pure Data

What are Tactons?

Tactons are “structured tactile messages” for communicating non-visual information. Structured patterns of vibration can be used to encode information, for example a quick buzz to tell me that I have a new email or a longer vibration to let me know that I have an incoming phone call.

Vibrotactile actuators are often used in HCI research to deliver Tactons, as they provide higher-fidelity feedback than the simple rotation motors used in mobile phones and videogame controllers. Sophisticated actuators allow us to change more vibrotactile parameters, providing more potential dimensions for Tacton design. Whereas my previous example used the duration of vibration to encode information (short = email, long = phone call), further information could be encoded using a different vibrotactile parameter: changing the “roughness” of the feedback could indicate how important an email or phone call is, for example.

How do we create Tactons?

Now that we know what Tactons are and what they could be used for, how do we actually create them? How can we drive a vibrotactile actuator to produce different tactile sensations?

Linear and voice-coil actuators can be driven by providing a voltage but, rather than dabble in electronics, the HCI community typically uses audio signals to drive the actuator. A sine wave, for example, produces a smooth and continuous-feeling sensation. For more information on how audio signal parameters can be used to create different vibrotactile sensations, see [1], [2] and [3].

Tactons can be created statically, using an audio synthesiser or a sound-editing program like Audacity to generate sine waves, or dynamically, using Pure Data. The rest of this post is a quick summary of the key Pure Data components I use when creating vibrotactile feedback in real time. With these components, the following vibrotactile parameters can be manipulated: frequency, spatial location, amplitude, “roughness” (via amplitude modulation) and duration.

Tactons with Pure Data components

osc~ Generates a sine wave. The first inlet (or the creation argument) sets the frequency of the sine wave, e.g. osc~ 250 creates a 250 Hz signal.
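As a minimal sketch (my own, not from any published patch), the plain-text .pd patch below connects a 250 Hz osc~ to both channels of the dac~ output described next; with DSP switched on, an actuator driven from the sound card output vibrates continuously. The layout coordinates and the 250 Hz frequency are just illustrative choices.

    #N canvas 100 100 400 250 12;
    #X obj 30 30 osc~ 250;
    #X obj 30 90 dac~;
    #X text 140 30 250 Hz sine wave;
    #X text 140 90 stereo audio output;
    #X connect 0 0 1 0;
    #X connect 0 0 1 1;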

dac~ Audio output. The first argument specifies the number of channels and each inlet sends its incoming signal to the corresponding channel, e.g. dac~ 4 creates a four-channel audio output. Driving different actuators from different audio channels allows vibration to be encoded spatially.
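For example, the sketch below (again my own illustrative patch) sends a 250 Hz sine wave only to the second inlet of dac~ 4, so only the actuator wired to output channel 2 vibrates.

    #N canvas 100 100 420 250 12;
    #X obj 30 30 osc~ 250;
    #X obj 30 100 dac~ 4;
    #X text 150 30 250 Hz sine wave;
    #X text 150 100 four-channel output - inlets map to channels 1-4;
    #X connect 0 0 1 1;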

*~ Multiply signal. Multiplies two signals to produce a single signal. Amplitude modulation (see [2] and [3] above) can be used to create different textures by multiplying two sine waves together. Multiplying osc~ 250 with osc~ 30 creates quite a “rough” feeling texture. This can also be used to change the amplitude of a signal. Multiplying by 0 silences the signal. Multiplying by 0.5 reduces amplitude by 50%. Tactons can be turned on and off by multiplying the wave by 1 and 0, respectively.
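Here is a small sketch of that amplitude-modulation example (object positions are arbitrary): a 250 Hz carrier multiplied by a 30 Hz modulator, sent to both output channels.

    #N canvas 100 100 420 300 12;
    #X obj 30 30 osc~ 250;
    #X obj 150 30 osc~ 30;
    #X obj 30 90 *~;
    #X obj 30 150 dac~;
    #X text 230 90 amplitude modulation for a rough texture;
    #X connect 0 0 2 0;
    #X connect 1 0 2 1;
    #X connect 2 0 3 0;
    #X connect 2 0 3 1;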

delay Sends a bang after a delay. This can be used to provide precise timings for Tacton design. To play a 300 ms vibration, for example, an incoming bang could send 1 to the right inlet of *~ (the amplitude multiplier), enabling the Tacton. Sending that same bang to delay 300 would produce a bang after 300 ms, which could then send 0 to that same inlet, ending the Tacton.
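A sketch of that 300 ms Tacton (my own patch, with arbitrary positions): clicking the bang message sets the multiplier of *~ 0 to 1 and starts delay 300, which resets the multiplier to 0 after 300 ms.

    #N canvas 100 100 450 350 12;
    #X msg 30 20 bang;
    #X msg 30 70 1;
    #X obj 110 70 delay 300;
    #X msg 110 120 0;
    #X obj 30 170 osc~ 250;
    #X obj 30 220 *~ 0;
    #X obj 30 280 dac~;
    #X text 180 220 right inlet holds the amplitude multiplier;
    #X connect 0 0 1 0;
    #X connect 0 0 2 0;
    #X connect 1 0 5 1;
    #X connect 2 0 3 0;
    #X connect 3 0 5 1;
    #X connect 4 0 5 0;
    #X connect 5 0 6 0;
    #X connect 5 0 6 1;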

phasor~ Creates a ramping waveform. Can be used to create sawtooth waves. This tutorial explains how this component can also be used to create square waveforms.
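Note that phasor~ ramps from 0 to 1, so for a full-range sawtooth it is usually rescaled to ±1; a quick sketch of my own:

    #N canvas 100 100 400 320 12;
    #X obj 30 30 phasor~ 250;
    #X obj 30 80 *~ 2;
    #X obj 30 130 -~ 1;
    #X obj 30 190 dac~;
    #X text 120 80 rescale the 0 to 1 ramp to a -1 to 1 sawtooth;
    #X connect 0 0 1 0;
    #X connect 1 0 2 0;
    #X connect 2 0 3 0;
    #X connect 2 0 3 1;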

“Feelable” touchscreens revisited

Tactus have gotten quite a lot of attention recently after demonstrating their new touchscreen technology. Their “Tactile Layer” technology raises bubbles on the touchscreen, essentially creating physical objects on its surface. I suppose I’ve taken quite an interest in this since it’s similar to something I wrote about 6 months ago: feelable touchscreens.

Here are two amazing and innovative technologies, each taking a different approach towards creating tactile sensations from a touchscreen. Senseg use small electric currents to stimulate the skin, creating edges and feelings of texture, while Tactus actually create something physical.

To the best of my understanding, Tactus’ technology raises bubbles (I’m reluctant to call them buttons; who knows what else interaction designers could do with this!) only in pre-determined locations, configured during manufacture. Different configurations are apparently possible, but from what I’ve read these too are fixed when the screen is made. Whilst this allows some fundamental improvements to the touchscreen experience (e.g. a configuration for a keyboard), it lacks flexibility because manufacture determines where bubbles can be used.

Senseg’s tech, however, appears to be truly dynamic and more flexible: application developers can control precisely where tactile sensations are felt, rather than this being decided during manufacture.

Having dabbled with Microsoft Surface over the past year, I’m pleased to see that both of these technologies apparently scale well to larger displays. Interactive tabletops suffer from the same loss of tactile feedback that touchscreen mobile devices do, although this is perhaps less apparent on a large-scale device where widgets aren’t crammed into such a small space.

I don’t think it’s fair to ask which of these technologies is better, because they can’t fairly be compared. Although the flexibility of Senseg vs the physical tactility of Tactus is an interesting comparison, I feel a better question is: could these concepts somehow be combined? Imagine a touchscreen which offers complete configuration flexibility, a richer tactile experience like Senseg claim to offer (e.g. feeling texture, not just the presence of something) and the benefits of feeling something physical on the touchscreen. Now that would be awesome.

Virtual keyboards and "feelable" touchscreens

Senseg made a splash recently when they revealed their touchscreen technology, which allows you to actually “feel” objects on-screen. By manipulating small electric charges, users can feel texture as they interact with a touchscreen. It’d be too easy to dismiss this as a gimmick; however, I think this type of technology has the potential to make a positive impact on mobile devices.

Touchscreens are becoming increasingly ubiquitous in mobile devices, leading to the demise of the hardware keyboard. A glance at the list of all HTC phones in their current line-up shows only two of seventeen with a hardware keyboard; Samsung likewise offer only two. While touchscreens make it possible to eliminate hardware keyboards and other unsightly buttons for the sake of sleek aesthetics, they’ve so far failed (in my opinion) to provide a suitable replacement for hardware keys.

Yes, touchscreen keyboards are flexible and can offer a variety of layouts; however, they still don’t give sufficient physical feedback to allow fast touch typing. One reason we’re better at typing on physical keyboards is that we “know” where our fingers are. The edges of keys (and the raised bumps often found on some keys) provide reference to other locations on the keyboard. Without looking at the keyboard, an experienced typist can type upwards of 100 words per minute. On a touchscreen, without proper physical feedback, you can expect just a small fraction of that speed.

One argument against that could be screen size; however, tablets suffer from the same problems. The 26 character keys on my keyboard are of comparable size to those on the virtual keyboard of my 10-inch tablet. A popular approach to providing feedback on mobile devices is to vibrate upon key press, but this provides little information other than “you’ve pressed a key”. An alternative approach to making touchscreen keyboards easier to use has been patented by IBM: a virtual keyboard that adjusts itself to how users type on-screen. Auto-correct is another feature which has arisen to aid the use of virtual keyboards, yet it addresses the symptoms rather than the cause.

Enter touchscreens you can “feel”. Actually being able to feel (something which resembles) the edges of keys on a virtual keyboard is likely to make it much easier to type on touchscreen devices. If technology becomes available which allows effective representation of edges (which Senseg claim their technology does), touchscreen devices will be able to offer what is, in my opinion, a real improvement to virtual keyboards. I think this could be of particular benefit on tabletop computers, which by nature allow a more natural typing position than handheld devices. Or perhaps this is all just wishful thinking, because I go from 110 WPM at my desktop to around 5 WPM on my phone.