NordiCHI Workshop Paper on Interactive Light Feedback

Our position paper, “Illuminating Gesture Interfaces with Interactive Light Feedback” [1], was accepted to the NordiCHI Beyond the Switch: Explicit and Implicit Interaction with Light workshop. In it, we discuss examples of light being used in gesture interfaces and describe how we use interactive light to give feedback about gesture interactions in one of our own interfaces. The paper will be available on the workshop website by the end of September.

An example of interactive light feedback from an early prototype.

[1] Illuminating Gesture Interfaces with Interactive Light Feedback.
Position paper in the Beyond the Switch: Explicit and Implicit Interaction with Light workshop at NordiCHI ’14. 2014.

ICMI ’14 Paper Accepted

My full paper, “Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions”, was accepted to ICMI 2014. It was also accepted for oral presentation rather than poster presentation, so I’m looking forward to that!

Tactile Feedback for Above-Device Interaction.

In this paper we looked at tactile feedback for above-device interaction with a mobile phone. We compared direct tactile feedback to distal tactile feedback from wearables (rings, smart-watches) and ultrasound haptic feedback. We also looked at different feedback designs and investigated the impact of tactile feedback on performance, workload and preference.

Array of Ultrasound Transducers for Ultrasound Haptic Feedback.

We found that tactile feedback had no impact on input performance but significantly reduced workload, making interaction feel easier. Users also significantly preferred tactile feedback to no tactile feedback. More details are in the paper [1], along with design recommendations for above- and around-device interface designers. I’ve written a bit more about this project here.

Video

The following video (including an awful typo in the last scene!) shows the two gestures we used in these studies.

References

[1] Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
E. Freeman, S. Brewster, and V. Lantz.
In Proceedings of the International Conference on Multimodal Interaction – ICMI ’14, 419-426. 2014.

Posters, SICSA HCI and Second Year Viva

SICSA HCI Lanyard

June has gotten off to an exciting start! I won a poster presentation competition, got a poster paper into Mobile HCI ’14 and arranged my second year viva, the annual progress review for my PhD research. Finishing and submitting my report felt a little anticlimactic compared to last year; I suppose the first year review is much more important and by this stage it’s more of a checkpoint. Lately I’ve been doing a lot of writing and research planning, so I’m looking forward to getting back to actually doing research. Designing, making, all those fun things that make HCI awesome!

Yesterday was the annual SICSA HCI meetup, which was good fun. This year it was hosted by the University of Dundee, so it was nice to travel up there. I spent a lot of time in Dundee as a kid, especially around the university campus, so it was cool to go back and see how everything has changed. Highlights of the day included keynotes from Miguel Nacenta and David Flatla: Miguel presented some really cool research and David was so damn entertaining! We also had a few posters and a demo from our group.

I won the poster presentation competition, which was a nice surprise. My poster (below) gave a general overview of my PhD research and showed off a couple of projects.

SICSA HCI ’14 Poster

Winning the poster competition must have been a good omen because I returned home to a notification from Mobile HCI saying my poster was accepted. That paper and poster are about two gesture design studies from the start of my PhD. I’ll post more about them another time.

Mobile HCI Doctoral Consortium

I’ve been accepted for the doctoral consortium at Mobile HCI ’14! I’m looking forward to it – it’ll be great to get a grilling from others and it will help with my thesis, which feels increasingly daunting now that I’m halfway through my three years of PhD research.

What Is Around-Device Interaction?

One of my biggest research interests is gesture interaction with mobile devices, also known as around-device interaction because users interact in the space around the device rather than on the device itself. In this post I’m going to give a brief overview of what around-device interaction is, how gestures can be sensed from mobile devices and how these interactions are being realised in commercial devices.

Why Use Around-Device Interaction?

Why would we want to gesture with mobile devices (such as phones or smart watches) anyway? These devices typically have small screens which we interact with in a very limited fashion; using the larger surrounding space lets us interact in more expressive ways and lets the display be utilised fully, rather than our hand occluding content as we reach out to touch the screen. Gestures also let us interact without having to first lift our device, meaning we can interact casually from a short distance. Finally, gesture input is non-contact so we can interact when we would not want to touch the screen, e.g. when preparing food and wanting to navigate a recipe but our hands are messy.

Sensing Around-Device Input

Motivated by the benefits of expressive non-contact input, HCI researchers have developed a variety of approaches for detecting around-device input. Early approaches used infrared proximity sensors, similar to the sensors used in phones to lock the display when we hold our phone to our ear. SideSight (Butler et al. 2008) placed proximity sensors around the edges of a mobile phone, letting users interact in the space beside the phone. HoverFlow (Kratz and Rohs 2009) took a similar approach, although their sensors faced upwards rather than outwards. This let users gesture above the display. Although this meant gesturing occluded the screen, users could interact in 3D space; a limitation of SideSight was that users were more or less restricted to a flat plane around the phone.
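
To make the idea concrete, here is a hypothetical sketch (not code from SideSight or HoverFlow, just an illustration of the general approach) showing how readings from a row of proximity sensors could be turned into a left/right swipe detector by tracking which sensor the hand is closest to over time. The function names and sensor ranges are made up for the example.

# Hypothetical sketch: classifying a left/right swipe from a row of
# infrared proximity sensors (e.g. along one edge of a phone).
# Each frame is a list of distance readings in cm, one per sensor.

def nearest_sensor(readings, max_range_cm=10.0):
  """Return the index of the sensor the hand is closest to, or None."""
  candidates = [(d, i) for i, d in enumerate(readings) if d < max_range_cm]
  if not candidates:
    return None
  return min(candidates)[1]

def classify_swipe(frames):
  """frames: list of sensor readings over time, oldest first."""
  positions = [nearest_sensor(f) for f in frames]
  positions = [p for p in positions if p is not None]
  if len(positions) < 2:
    return None
  if positions[-1] > positions[0]:
    return "swipe right"
  if positions[-1] < positions[0]:
    return "swipe left"
  return None

# Example: the hand moves from sensor 0 towards sensor 3.
frames = [[4.0, 9.0, 12.0, 12.0],
          [9.0, 4.0, 9.0, 12.0],
          [12.0, 9.0, 4.0, 9.0],
          [12.0, 12.0, 9.0, 4.0]]
print(classify_swipe(frames))  # "swipe right"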

Abracadabra (Harrison and Hudson 2009) used magnetic sensing to detect input around a smart-watch. Users wore a magnetic ring which affected the magnetic field around the device, letting the watch determine finger position and detect gestures. This let users interact with a very small display in a much larger area (an example of what Harrison called “interacting with small devices in a big way” when he gave a presentation to our research group last year) – something today’s smart-watch designers should consider. uTrack (Chen et al. 2013) built on this approach with additional wearable sensors. MagiTact (Ketabdar et al. 2010) used a similar approach to Abracadabra for detecting gestures around mobile phones.

So far we’ve looked at two approaches for detecting around-device input: infrared proximity sensors and magnetic sensors. Researchers have also developed camera-based approaches. Most mobile phone cameras can be used to detect around-device gestures within the camera’s field of view, which can be extended using approaches such as Surround-see (Yang et al. 2013). Surround-see placed an omni-directional lens over the camera, giving the phone a complete view of its surrounding environment. Users could then gesture from even further away (e.g. from across the room) because of the complete field of view.

Others have proposed using depth cameras for more accurate camera-based hand tracking. I was excited when Google revealed Project Tango earlier this year because a mobile phone with a depth sensor and processing resources dedicated to computer vision is a step closer to realising this type of interaction. While mobile phones can already detect basic gestures using their magnetic sensors and cameras, depth cameras, in my opinion, would allow more expressive gestures without having to wear anything (e.g. magnetic accessories).

We’re also now seeing low-powered alternative sensing approaches, such as AllSee (Kellogg et al. 2014) which can detect gestures using ambient wireless signals. These approaches could be ideal for wearables which are constrained by small battery sizes. Low-power sensing could also allow always-on gesture sensing; this is currently too demanding with some around-device sensing approaches.

Commercial Examples

I have so far discussed a variety of sensing approaches found in research; this is by no means a comprehensive survey of around-device gesture sensing, but it shows the wide variety of possible approaches and identifies some seminal work in this area. Now I will look at some commercial examples of around-device interfaces to show that there is an interest in moving interaction away from the touch-screen and into the around-device space.

Perhaps the best known around-device interface is the Samsung Galaxy S4. Samsung included features called Air View and Air Gesture which let users gesture above the display without having to touch it. Users could hover over images in a gallery to see a larger preview and could navigate through a photo album by swiping over the display. A limitation of the Samsung implementation was that users had to be quite close to the display for gestures to be detected – so close that they may as well have used touch input!

Nokia also included an around-device gesture in an update for some of their Lumia phones last year. Users could peek at their notifications by holding their hand over the proximity sensor briefly. While just a single gesture, this let users check their phones easily without unlocking them. With young smartphone users reportedly checking their phones more than thirty times per day (BBC Newsbeat, 4th April 2014), this is a gesture that could get a lot of use!

There are also a number of software libraries which use the front-facing camera to detect gesture input, allowing around-device interaction on typical mobile phones.

Conclusion

In this post we took a quick look at around-device interaction. This is still an active research area and one where we are seeing many interesting developments – especially as researchers are now focusing on issues other than sensing approaches. With smartphone developers showing an interest in this modality, identifying and overcoming interaction challenges is the next big step in around-device interaction research.

Round-faced Smart Watches

A photo of a Motorola Moto 360 smart watch.

When Motorola announced their Moto 360 watch (above) recently, many hailed it as a great moment in wearable design – finally, someone has designed a smart watch which actually looks like a watch! It’s a big step forward from the uninspiring square-faced designs that came before it. The timing of the announcement was perfect because people were still in awe of a concept design which had appeared a few days earlier, also featuring a circular display that imitates traditional watch design.

At the same time, Google also announced their Android Wear platform for wearables (used by the Moto 360 amongst others). At the moment the platform seems focused on three areas: card-based notification design, touch-gesture input and speech input. I’d love to see Google eventually follow up on their patent for on-arm input and provide platform support for novel input on the body or in the space around the watch.

TEDx Demos

TEDx badge

A couple of days ago our group showed off some of our research during TEDx Glasgow University. It was a fun experience. We don’t often get to engage with a non-academic audience so it was refreshing to chat about future technology with non-computing scientists. I came away from the demo session feeling inspired and with some fresh ideas about where to take my research.

I presented a gesture interface for mobile phones, using around-device gestures as input. People seemed to find this modality particularly attractive for use in the kitchen, when hands are often full, wet or messy. I was also showing how wearables could be used alongside mobile phones. People seemed to enjoy the novelty of this, although understandably there was some doubt about having to wear another accessory (alongside fashion items like watches or bracelets). I definitely feel that future wearables need to be designed with fashion in mind so people want to wear them as accessories first and interfaces second.

Inevitably someone mentioned interfaces found in Minority Report and Iron Man – I hope HCI finds ways to inspire people’s imaginations about interfaces of the future in the same way that Hollywood has.

PyQT: QPixmap and threads

I’ve been working with PyQT lately and got stuck on a seemingly simple problem: updating the UI from another thread. Having never used PyQT before, it wasn’t obvious what the solution was, and the Stack Overflow results I found only gave incomplete code samples. I’m hoping this post gives some pointers for anyone searching for the same things I did.

This particular example is very contrived but it’s the only solution I could find for updating an image with QPixmap objects in a multithreaded interface, overcoming the “QPixmap: It is not safe to use pixmaps outside the GUI thread” error message. I think part of my problem was that I wasn’t using QThreads in my threaded code and I wasn’t willing to refactor a large codebase just to improve PyQT integration.

First, another thread calls someFunctionCalledFromAnotherThread, which uses PyQT’s signal mechanism to pass events across threads. This function creates a LoadImageThread with the filename and desired size as arguments, connects its signal to the showImage function, then starts the thread.

def someFunctionCalledFromAnotherThread(self):
  # Create the worker thread and connect its signal to showImage,
  # which will then be invoked on the GUI thread.
  thread = LoadImageThread(file="test.png", w=512, h=512)
  self.connect(thread, QtCore.SIGNAL("showImage(QString, int, int)"), self.showImage)
  thread.start()

def showImage(self, filename, w, h):
  # This runs on the GUI thread, so it is safe to create and use QPixmap here.
  pixmap = QtGui.QPixmap(filename).scaled(w, h)
  self.image.setPixmap(pixmap)
  self.image.repaint()

LoadImageThread then does nothing other than emit a response to the showImage signal we connected above, passing the thread arguments back. This means showImage will be executed on the GUI thread, avoiding those nasty QPixmap errors. Note the __del__ function below; it waits for the thread to finish before the object is destroyed, so the thread isn’t torn down while it’s still running.

class LoadImageThread(QtCore.QThread):
  def __init__(self, file, w, h):
    QtCore.QThread.__init__(self)
    self.file = file
    self.w = w
    self.h = h

  def __del__(self):
    # Wait for the thread to finish before the object is destroyed.
    self.wait()

  def run(self):
    # Emit the signal; the showImage slot connected above runs on the GUI thread.
    self.emit(QtCore.SIGNAL('showImage(QString, int, int)'), self.file, self.w, self.h)

There we have it – a stupid and contrived solution to a stupid problem.
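
As an aside, if you can use QThread and PyQT’s new-style signals throughout your code, the same pattern looks a little cleaner. This is a generic sketch rather than code from my project, assuming PyQt4’s pyqtSignal API:

from PyQt4 import QtCore

class LoadImageThread(QtCore.QThread):
  # New-style signal carrying the filename and target size.
  showImage = QtCore.pyqtSignal(str, int, int)

  def __init__(self, file, w, h):
    super(LoadImageThread, self).__init__()
    self.file, self.w, self.h = file, w, h

  def run(self):
    # Emitted from the worker thread; the connection to a slot living on
    # the GUI thread is queued automatically, so the slot runs there.
    self.showImage.emit(self.file, self.w, self.h)

The connection then becomes thread.showImage.connect(self.showImage), with no signal signature strings to keep in sync.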

New Wearable Tech: On-Body Input and Pop-out Earpieces

This week I’ve seen two wearable device concepts which I really like: extending the input space onto the arm, and a detachable earpiece that lets the device be used for phone calls. The first is demonstrated in the following tweet, showing an excerpt from a patent application attributed to Google:

Here the arm is used for extra input space, keeping the small display entirely visible during interaction. This style of input is similar to SideSight [1], a research prototype which used proximity sensors on the side of a phone to detect pointer input beside the device. I like the idea of interacting on the arm rather than in the space around the watch (e.g. Abracadabra [2]) because tactile cues from pressing against your own body could make it easier to interact [3].

Huawei’s new wearable wristband features a pop-out earpiece (the display itself pops out) which lets you take calls without getting your phone out of your pocket. While this is hardly a groundbreaking idea (it’s basically a Bluetooth headset that you don’t wear on your head), it at least justifies using a wearable to give you incoming call alerts. Pebble, for example, shows call notifications on the watch display, but you’d still have to take your phone out of your pocket to answer the call.

[1] Butler, A., Izadi, S. and Hodges, S.: SideSight: Multi-“touch” Interaction Around Small Devices. In Proc. of UIST ’08 (2008), pp. 201-204.
[2] Harrison, C. and Hudson, S.: Abracadabra: Wireless, High-Precision, and Unpowered Finger Input for Very Small Mobile Devices. In Proc. of UIST ’09 (2009), pp. 121-124.
[3] Gustafson, S., Rabe, B. and Baudisch, P.: Understanding Palm-Based Imaginary Interfaces: The Role of Visual and Tactile Cues when Browsing. In Proc. of CHI ’13 (2013), pp. 889-898.

Network Messages in Pure Data

Pure Data is great for generating sound and is used quite often in HCI for this reason. I’ve previously written about how it can be used for creating tactons as well. This post shows a simple patch which receives messages from a local socket and parses the input. As a Pure Data newbie I found the documentation to be pretty poor, so I’m hoping this helps others see how to easily integrate pd~ with other programs using sockets.

PDNetworking

Source: networklistener.pd

First, the tcpserver object creates a TCP server which listens on the given port number (34567 here). Its first outlet emits incoming messages; the second reports connection status. The bytes2any object takes the incoming byte stream and creates a Pd message from it. As an example of how to parse information from these messages, the unpack object here extracts three floats. The patch has four outlets: the first three are the parsed floats and the fourth is connection status (true when a socket is connected to the server).
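
For testing, here is a small Python sketch of a client that connects to the patch and sends it three floats. The send_floats helper is hypothetical; it assumes the port number from the patch above and that the patch expects plain ASCII messages terminated with a semicolon (FUDI-style), so adjust the message format to match whatever your bytes2any setup expects.

# Hypothetical test client for the patch above (not part of the patch itself).
# Assumes the patch listens on port 34567 and expects semicolon-terminated
# ASCII messages such as "1.0 2.0 3.0;".

import socket

def send_floats(x, y, z, host="localhost", port=34567):
  message = "{} {} {};\n".format(x, y, z).encode("ascii")
  sock = socket.create_connection((host, port))
  try:
    sock.sendall(message)
  finally:
    sock.close()

if __name__ == "__main__":
  send_floats(0.5, 1.0, 2.5)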