Android 6.0 Multipart HTTP POST

This how-to shows you how to use a multipart HTTP POST request to upload a file and metadata to a web server. Android 6.0 removed the legacy Apache HTTP client, so a lot of the examples I found online are outdated (or require adding the legacy library back). This solution uses the excellent OkHttp library from Square: rather than adding a legacy library to keep the old approach working, add a new one that will also save you a lot of work!

Step 1: Add OkHttp to your gradle build script

In Android Studio, open the build.gradle script for your main project module and add OkHttp to your dependencies:
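For example (the version number below is illustrative; check Square’s website or Maven Central for the latest release):

```groovy
dependencies {
    // OkHttp 3.x from Square (use the latest available version)
    compile 'com.squareup.okhttp3:okhttp:3.2.0'
}
```

Sync your project after editing the build script so that Android Studio downloads the library.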

Step 2: Create and execute an HTTP request

This example shows how to upload the contents of a File object to a server, with a username and date string as metadata.
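Here’s a sketch of what that looks like with OkHttp 3. The endpoint URL and the form field names (file, username and date) are placeholders; substitute whatever your server expects:

```java
import java.io.File;
import java.io.IOException;

import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class Uploader {

    // Placeholder endpoint: replace with your own server's upload URL.
    private static final String UPLOAD_URL = "https://example.com/upload";

    public static void upload(File file, String username, String dateString) throws IOException {
        OkHttpClient client = new OkHttpClient();

        // Build a multipart form body containing the file and two metadata fields.
        RequestBody requestBody = new MultipartBody.Builder()
                .setType(MultipartBody.FORM)
                .addFormDataPart("username", username)
                .addFormDataPart("date", dateString)
                .addFormDataPart("file", file.getName(),
                        RequestBody.create(MediaType.parse("application/octet-stream"), file))
                .build();

        Request request = new Request.Builder()
                .url(UPLOAD_URL)
                .post(requestBody)
                .build();

        // execute() is synchronous, so call this from a background thread,
        // or use enqueue() with a callback instead.
        Response response = client.newCall(request).execute();
        if (!response.isSuccessful()) {
            throw new IOException("Upload failed: " + response);
        }
        response.body().close();
    }
}
```

Remember that Android doesn’t allow network calls on the UI thread, so run this in a background thread (or swap execute() for enqueue()).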

Summary

OkHttp is awesome because it removes a lot of the heavy lifting needed to work with HTTP requests in Android. Construct your request content using Java objects and it’ll do the rest for you. If you’re looking for a replacement for the HTTP client removed in Android 6.0, I strongly recommend this one.

ABBI Demo at ICMI ’16

Earlier this month I was in Tokyo for the International Conference on Multimodal Interaction (ICMI). I was there to demo research from the ABBI project. We had two ABBI demos from the Multimodal Interaction Group at the conference: mine demonstrated how ABBI could be used to adapt the lighting at home for visually impaired children, and Graham’s was about using non-visual stimuli (e.g., thermal and vibration cues) to present affective information in a more accessible way for visually impaired smartphone users.

The conference was good and it was held in an amazing city – Tokyo. Next year, ICMI visits another amazing city – Glasgow! Julie and Alessandro from the Glasgow Interactive Systems Group will be hosting the conference here at Glasgow Uni.

Viva and Other CHI ’16 Papers

Last week I passed my viva, subject to minor thesis corrections!

I’ve also had a Late-Breaking Work submission accepted to CHI, which discusses recent work I’ve been doing on the ABBI (Audio Bracelet for Blind Interaction) project. The paper, titled “Using Sound to Help Visually Impaired Children Play Independently”, describes initial requirements capture and prototyping for a system which uses iBeacons and a ‘smart’ bracelet to help blind and visually impaired children during play time at nursery and school.

Finally, we’ve also had a position paper accepted to the CHI ’16 workshop on mid-air haptics and displays. It outlines mid-air haptics research we have been doing at Glasgow and discusses how it can inform the creation of more usable mid-air widgets for in-air interfaces.

CHI 30-second Preview + ACing

Below is a (very!) short preview of my upcoming CHI paper. In recent years, CHI has asked authors to submit a 30-second preview video summarising accepted papers, so that’s mine.

This year I’m an AC for Late-Breaking Work submissions at CHI. I’ve been reviewing papers since the start of my PhD, but this is my first time as an AC. It’s been interesting to see a conference from the “other” side.

CHI 2016, ABBI, and other things

My CHI 2016 submission, “Do That There: An Interaction Technique for Addressing In-Air Gesture Systems”, has been conditionally accepted! The paper covers the final three studies in my PhD, where I developed and evaluated a technique for addressing in-air gesture systems.

To address a gesture system is to direct input towards it; this involves finding where to perform gestures and how to specify the system you intend to interact with (so that other systems do not act upon your gestures). Do That There (a play on one of HCI’s most famous gesture papers, Put That There) supports both of these things: it shows you where to perform gestures, using multimodal feedback (there), and it shows you how to identify the system you want to gesture at (do that).

Three months ago I started working on the ABBI (Audio Bracelet for Blind Interaction) project as a post-doctoral researcher. The ABBI project is developing wearable technology for blind and visually impaired children. Our role at Glasgow is to investigate sound design and novel interactions which use the technology, focusing on helping visually impaired kids. Recently, we’ve presented our research and ideas at the RNIB TechShare conference and to members of SAVIE, an association focusing on the education of visually impaired children.

Finally, I submitted my PhD thesis in September, although I’m still waiting for my final examination. Unfortunately, that won’t be happening in 2015, but I’m looking forward to getting it wrapped up soon.

Interactive Light Demo at Interact ’15

This week I’ve been in Bamberg, Germany, presenting a poster and an interactive demo at Interact 2015. If you’ve stumbled across this website via my poster, or if you tried my demo at the conference, then it was nice meeting you and I hope you had some fun with it! If you’re looking for more information about the research, I’ve written a little about it here: http://euanfreeman.co.uk/interactive-light-feedback/

For some earlier research, where we looked at using tactile feedback for in-air gestures, see: http://euanfreeman.co.uk/projects/above-device-tactile-feedback/

PhD Thesis and Interact 2015

I haven’t updated my website in months, mostly because I’ve been focusing on finishing my PhD research. I started writing my thesis in May and, with 60,000 words written, my first draft is almost complete. It’s been an exciting couple of months writing it all up and I’m looking forward to finishing. Not because I haven’t enjoyed it, but because it’s the most substantial piece of work I’ve ever undertaken and it’s exciting seeing it all come together into a single piece of writing.

Not much else has happened in that time, although I did get two submissions accepted to Interact 2015: a poster [1] and a demo [2]. Both describe interactive light feedback, something which has featured a lot in my recent PhD research; I describe it in more detail here. Interact is in Bamberg, Germany, this year, and I’m excited to visit! Nearer the time (mid-September) I’ll share more photos and maybe a video of the demo I’ll be giving at the conference.

[1] Towards In-Air Gesture Control of Household Appliances with Limited Displays
E. Freeman, S. Brewster, and V. Lantz.
In Interact 2015 Posters. 2015.

[2] Interactive Light Feedback: Illuminating Above-Device Gesture Interfaces
E. Freeman, S. Brewster, and V. Lantz.
In Interact 2015 Demos. 2015.

A New Smart-Watch Design Space?

Almost exactly a year ago I wrote about my first impressions of Pebble and concluded that “I have to wonder if smart-watches even need a display”. As a smart-watch, I found Pebble most useful for remotely controlling my phone (through its physical buttons) and for promoting awareness of notifications on my phone (through its vibration alerts); its “clunky and awkward user interface” was even detrimental to its other, more important, function as an ordinary watch.

With that in mind, I was excited by Yahoo! Labs’ recent paper at Tangible, Embedded, and Embodied Interaction (or TEI): Shimmering Smartwatches. In it, they present two prototype smart-watches which don’t have a screen, instead using less sophisticated (but just as expressive and informative) LEDs.

One of their prototypes, Circle, used a circular arrangement of twelve LEDs, each in place of an hour mark on the watch-face. By changing the brightness and hue of the LEDs, the watch was able to communicate information from smart-watch applications, like activity trackers and countdown timers. Their other prototype used four LEDs placed behind icons on the watch-face. Again, brightness and hue could be modulated to communicate more information about each of the icons.

I really like the ideas in this paper and its prototypes. High-resolution displays are more expensive than simple LED layouts, require more power, and are not necessarily more expressive. Hopefully someone builds on the new design space presented by Shimmering Smartwatches, which can certainly be expressive while also being lower cost. Also, everything is better with coloured LEDs.

Synthesising Speech in Python

There’s a Scottish company called CereProc who do some of the best speech synthesis in the world. They excel in regional accents, especially difficult Scottish ones! I’ve been using their CereVoice Cloud SDK in some recent projects (like Speek). In this post I’m going to share a wee Python script and an Android class for using their cloud API to generate synthesised speech. To use these, you’ll need to create a (free) account over on CereProc’s developer site and then add your auth credentials to the code.

Downloading Speech in Python

Call the download() function with the message you wish to synthesise, optionally specifying which voice to use, which file format to use, and what to name the file.
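Here’s a minimal sketch of such a script, using the requests library. The endpoint URL and request parameter names below are placeholders rather than the real CereVoice Cloud interface, and the default voice name is only an example; check CereProc’s developer documentation for the actual request format and fill in your own credentials:

```python
import requests

# Auth credentials from your (free) CereProc developer account.
ACCOUNT_ID = 'your-account-id'
PASSWORD = 'your-password'

# Placeholder endpoint and parameter names: consult the CereVoice Cloud
# documentation for the real REST interface. This only sketches the shape
# of the download() function described above.
CLOUD_URL = 'https://cerevoice.example.com/speak'


def download(message, voice='Stuart', audio_format='wav', filename='speech.wav'):
    """Synthesise `message` and save the resulting audio to `filename`."""
    response = requests.post(CLOUD_URL, data={
        'accountID': ACCOUNT_ID,
        'password': PASSWORD,
        'voice': voice,              # voice name is illustrative
        'audioFormat': audio_format,
        'text': message,
    })
    response.raise_for_status()

    # Write the synthesised audio to disk.
    with open(filename, 'wb') as f:
        f.write(response.content)
    return filename


if __name__ == '__main__':
    download('Hello from Glasgow!', audio_format='wav', filename='hello.wav')
```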

Downloading and Playing Speech in Android

Create a CereCloudPlayer object and use its play method to request, download, and play the message you wish to synthesise.
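As a rough illustration of the intended usage (the constructor arguments and method signature here are assumptions about the class, not its actual API):

```java
// Hypothetical usage of the CereCloudPlayer class described above; the
// constructor parameters shown are assumptions, not the actual signature.
CereCloudPlayer player = new CereCloudPlayer(context, ACCOUNT_ID, PASSWORD);
player.play("Hello from Glasgow!");
```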

Pure Data Patches

I’ve started uploading and documenting some Pure Data patches which I’ve used for generating Earcons and Tactons – hopefully they’ll be useful to someone. Check them out here.