CHI 2017 Paper + Videos

I’m happy to note that I’ve had a full paper [1] accepted to CHI 2017. The paper describes research from the ABBI project, about how sound from wearable and fixed sources can be used to help visually impaired children at school (for more, please see here). The videos in this post include a short description of the paper as well as a longer description of the research and our findings.

[1] Audible Beacons and Wearables in Schools: Helping Young Visually Impaired Children Play and Move Independently
E. Freeman, G. Wilson, S. Brewster, G. Baud-Bovy, C. Magnusson, and H. Caltenco.
In Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems – CHI ’17, pp. 4146-4157. 2017.


@inproceedings{AudibleBeacons,
    author = {Freeman, Euan and Wilson, Graham and Brewster, Stephen and Baud-Bovy, Gabriel and Magnusson, Charlotte and Caltenco, Hector},
    booktitle = {{Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems - CHI '17}},
    title = {{Audible Beacons and Wearables in Schools: Helping Young Visually Impaired Children Play and Move Independently}},
    year = {2017},
    publisher = {ACM Press},
    pages = {4146--4157},
    doi = {10.1145/3025453.3025518},
    url = {http://euanfreeman.co.uk/research/#abbi},
    video = {{https://www.youtube.com/watch?v=SGQmt1NeAGQ}},
}

Android 6.0 Multipart HTTP POST

This how-to shows you how to use a multipart HTTP POST request to upload a file and metadata to a web server. Android 6.0 removed support for the legacy Apache HTTP client, so a lot of the examples you’ll find online are outdated (or require adding the legacy library back in). This solution uses the excellent OkHttp library from Square: rather than restoring legacy libraries for the old approach, add a modern library that will also save you a lot of work!

Step 1: Add OkHttp to your gradle build script

In Android Studio, open the build.gradle script for your main project module and add OkHttp to your dependencies:
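
A dependency declaration along these lines should do it; the version below is only an example, so check the OkHttp website for the latest release:

dependencies {
    // OkHttp from Square (see square.github.io/okhttp for the latest version)
    compile 'com.squareup.okhttp3:okhttp:3.4.1'
}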

Step 2: Create and execute an HTTP request

This example shows how to upload the contents of a File object to a server, with a username and date string as metadata.
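
Here’s a rough sketch of how that looks with the OkHttp 3 API. The upload URL and the form part names ("username", "date" and "file") below are just placeholders, so change them to whatever your server expects.

import java.io.File;
import java.io.IOException;

import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class FileUploader {
    // Placeholder URL and part names: change these to match your server.
    private static final String UPLOAD_URL = "https://example.com/upload";
    private static final MediaType OCTET_STREAM = MediaType.parse("application/octet-stream");

    public static void upload(File file, String username, String date) throws IOException {
        OkHttpClient client = new OkHttpClient();

        // Build the multipart body: two metadata fields plus the file contents.
        RequestBody body = new MultipartBody.Builder()
                .setType(MultipartBody.FORM)
                .addFormDataPart("username", username)
                .addFormDataPart("date", date)
                .addFormDataPart("file", file.getName(),
                        RequestBody.create(OCTET_STREAM, file))
                .build();

        Request request = new Request.Builder()
                .url(UPLOAD_URL)
                .post(body)
                .build();

        // Execute the request synchronously and check the server's response.
        Response response = client.newCall(request).execute();
        try {
            if (!response.isSuccessful()) {
                throw new IOException("Upload failed: " + response);
            }
        } finally {
            response.body().close();
        }
    }
}

Note that execute() runs the request on the calling thread, so do this from a background thread (e.g. an AsyncTask) to avoid a NetworkOnMainThreadException; OkHttp can also run the request asynchronously for you via client.newCall(request).enqueue(...).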

Summary

OkHttp is awesome because it takes care of a lot of the heavy lifting needed to work with HTTP requests in Android. Construct your request content from Java objects and it does the rest for you. If you’re looking for a replacement for the HTTP libraries removed in Android 6.0, I strongly recommend this one.

ABBI Demo at ICMI ’16

Earlier this month I was in Tokyo for the International Conference on Multimodal Interaction (ICMI). I was there to demo research from the ABBI project. We had two ABBI demos from the Multimodal Interaction Group at the conference: mine demonstrated how ABBI could be used to adapt the lighting at home for visually impaired children, and Graham’s was about using non-visual stimuli (e.g., thermal, vibration) to present affective cues in a more accessible way for visually impaired smartphone users.

The conference was good and Tokyo is an amazing city to visit. I spent a lot of my down time playing with my camera; you can see some photos by clicking on the image below.

Tokyo 2016

Next year ICMI visits another amazing city: Glasgow! Julie and Alessandro from the Glasgow Interactive Systems Group will be hosting the conference here at Glasgow Uni.

Viva and Other CHI ’16 Papers

Last week I passed my viva, subject to minor thesis corrections!

I’ve also had a Late-Breaking Work submission accepted to CHI, which discusses recent work I’ve been doing on the ABBI (Audio Bracelet for Blind Interaction) project. The paper, titled “Using Sound to Help Visually Impaired Children Play Independently”, describes initial requirements capture and prototyping for a system which uses iBeacons and a ‘smart’ bracelet to help blind and visually impaired children during play time at nursery and school.

Finally, we’ve also had a position paper accepted to the CHI ’16 workshop on mid-air haptics and displays. It outlines mid-air haptics research we have been doing at Glasgow and discusses how it can inform the creation of more usable mid-air widgets for in-air interfaces.

CHI 30-second Preview + ACing

Below is a (very!) short preview of my upcoming CHI paper. In recent years, CHI has asked authors to submit a 30-second preview video summarising their accepted paper, so the video below is mine.

This year I’m an AC for Late-Breaking Work submissions at CHI. I’ve been reviewing papers since the start of my PhD, but this is my first time as an AC. It’s been interesting to see a conference from the “other” side.

CHI 2016, ABBI, and other things

My CHI 2016 submission, “Do That There: An Interaction Technique for Addressing In-Air Gesture Systems”, has been conditionally accepted! The paper covers the final three studies in my PhD, where I developed and evaluated a technique for addressing in-air gesture systems.

To address a gesture system is to direct input towards it; this involves finding where to perform gestures and how to specify the system you intend to interact with (so that other systems do not act upon your gestures). Do That There (a play on one of HCI’s most famous gesture papers, Put That There) supports both of these things: it shows you where to perform gestures, using multimodal feedback (there), and it shows you how to identify the system you want to gesture at (do that).

Three months ago I started working on the ABBI (Audio Bracelet for Blind Interaction) project as a post-doctoral researcher. The ABBI project is developing wearable technology for blind and visually impaired children. Our role at Glasgow is to investigate sound design and novel interactions which use the technology, focusing on helping visually impaired kids. Recently, we’ve presented our research and ideas at the RNIB TechShare conference and to members of SAVIE, an association focusing on the education of visually impaired children.

Finally, I submitted my PhD thesis in September, although I’m still waiting for my final examination. Unfortunately, that’s not going to happen in 2015, but I’m looking forward to getting it wrapped up soon.

Interactive Light Demo at Interact ’15

This week I’ve been in Bamberg (below), in Germany, presenting a poster and an interactive demo at Interact 2015. If you’ve stumbled across this website via my poster, or if you tried my demo at the conference, then it was nice meeting you and I hope you had some fun with it! If you’re looking for more information about the research then I’ve written a little about it here: http://euanfreeman.co.uk/interactive-light-feedback/

Bamberg

If you’re interested in learning more about in-air gestures or want to know what my demo was about, I’ve written about gestures and their usability problems here: http://euanfreeman.co.uk/gestures/

For some earlier research, where we looked at using tactile feedback for in-air gestures, see: http://euanfreeman.co.uk/projects/above-device-tactile-feedback/

If you have any other questions, my email address is on the left.

PhD Thesis and Interact 2015

I haven’t updated my website in months, mostly because I’ve been focusing on finishing my PhD research. I started writing my thesis in May and, with 60,000 words written, my first draft is almost complete. It’s been an exciting couple of months writing it all up and I’m looking forward to finishing; not because I haven’t enjoyed it, but because it’s the most substantial piece of work I’ve ever undertaken and it’s exciting to see it all come together into a single piece of writing.

Not much else has happened in that time, although I did get two submissions accepted to Interact 2015: a poster [1] and a demo [2]. Both describe interactive light feedback, something which has featured a lot in my recent PhD research. I describe interactive light feedback in more detail here. Interact is in Bamberg, Germany, this year, which I’m excited about visiting! Nearer the time (mid-September) I’ll show more photos and maybe a video of the demo I’ll be giving at the conference.

[1] Towards In-Air Gesture Control of Household Appliances with Limited Displays
E. Freeman, S. Brewster, and V. Lantz.
In Interact 2015 Posters. 2015.


@inproceedings{GestureThermostat,
    author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
    booktitle = {Interact 2015 Posters},
    title = {{Towards In-Air Gesture Control of Household Appliances with Limited Displays}},
    year = {2015},
    publisher = {Springer},
    doi = {10.1007/978-3-319-22723-8_73},
    pdf = {http://research.euanfreeman.co.uk/papers/Interact_2015_Poster.pdf},
    url = {http://link.springer.com/chapter/10.1007/978-3-319-22723-8_73},
}

[2] Interactive Light Feedback: Illuminating Above-Device Gesture Interfaces
E. Freeman, S. Brewster, and V. Lantz.
In Interact 2015 Demos. 2015.


@inproceedings{InteractiveLightFeedback,
    author = {Freeman, Euan and Brewster, Stephen and Lantz, Vuokko},
    booktitle = {Interact 2015 Demos},
    title = {{Interactive Light Feedback: Illuminating Above-Device Gesture Interfaces}},
    year = {2015},
    publisher = {Springer},
    doi = {10.1007/978-3-319-22723-8_42},
    pdf = {http://research.euanfreeman.co.uk/papers/Interact_2015_Demo.pdf},
    url = {http://euanfreeman.co.uk/interactive-light-feedback/},
}

Synthesising Speech in Python

There’s a Scottish company called CereProc who do some of the best speech synthesis in the world. They excel in regional accents, especially difficult Scottish ones! I’ve been using their CereVoice Cloud SDK in some recent projects (like Speek). In this post I’m going to share a wee Python script and an Android class for using their cloud API to generate synthesised speech. To use these, you’ll need to create a (free) account over on CereProc’s developer site and then add your auth credentials to the code.

Downloading Speech in Python

Call the download() function with the message you wish to synthesise, optionally specifying which voice to use, which file format to use and what to name the file.
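
Here’s a rough sketch of how such a download() function can be put together with the requests library. The endpoint URL and the request field names below are placeholders rather than CereProc’s actual API, so swap in the details from their developer documentation along with your own auth credentials.

import requests

# Placeholder credentials: use the ones from your CereProc developer account.
ACCOUNT_ID = 'your-account-id'
PASSWORD = 'your-password'

# Placeholder endpoint: use the URL from CereProc's developer documentation.
CLOUD_URL = 'https://example.cereproc.com/synthesise'


def download(message, voice='Stuart', audio_format='mp3', filename='speech'):
    """Synthesise `message` and save the audio as `<filename>.<audio_format>`."""
    # Placeholder parameter names: CereProc's API may name these differently.
    response = requests.post(CLOUD_URL, data={
        'accountID': ACCOUNT_ID,
        'password': PASSWORD,
        'voice': voice,            # e.g. one of CereProc's voices
        'audioFormat': audio_format,
        'text': message,
    })
    response.raise_for_status()

    # Write the synthesised audio to disk and return the path.
    path = '{}.{}'.format(filename, audio_format)
    with open(path, 'wb') as f:
        f.write(response.content)
    return path


if __name__ == '__main__':
    print(download('Hello from CereProc!'))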

Downloading and Playing Speech in Android

Create a CereCloudPlayer object and use its play method to request, download, and play the message you wish to synthesise.
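
For example (the constructor argument here is just my assumption that the class needs a Context for media playback; check the class itself for its actual signature):

// Illustrative usage only: the constructor argument is an assumed Context,
// not necessarily the real signature of CereCloudPlayer.
CereCloudPlayer player = new CereCloudPlayer(getApplicationContext());

// Request, download, and play the synthesised message.
player.play("Hello from Glasgow!");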