Android 6.0 Multipart HTTP POST

This how-to shows you how to use a multipart HTTP POST request to upload a file and metadata to a web server. Android 6.0 removed the deprecated Apache HTTP client, so a lot of the examples I found online are outdated (or require adding the legacy library back in). This solution uses the excellent OkHttp library from Square: instead of adding legacy libraries to keep the old approach working, add a new library that’ll also save you a lot of work!

Step 1: Add OkHttp to your Gradle build script

In Android Studio, open the build.gradle script for your main project module and add OkHttp to your dependencies:

dependencies {
    compile 'com.squareup.okhttp3:okhttp:3.5.0'
}

Step 2: Create and execute an HTTP request

This example shows how to upload the contents of a File object to a server, with a username and date string as metadata.

String UPLOAD_URL = "http://yoururl.com/example.php";

// Example data
String username = "test_user_123";
String datetime = "2016-12-09 10:00:00";
File image = getImage();

// Create an HTTP client to execute the request
OkHttpClient client = new OkHttpClient();

// Create a multipart request body. Add metadata and files as 'data parts'.
RequestBody requestBody = new MultipartBody.Builder()
        .setType(MultipartBody.FORM)
        .addFormDataPart("username", username)
        .addFormDataPart("datetime", datetime)
        .addFormDataPart("image", image.getName(),
                RequestBody.create(MediaType.parse("image/jpeg"), image))
        .build();

// Create a POST request to send the data to UPLOAD_URL
Request request = new Request.Builder()
        .url(UPLOAD_URL)
        .post(requestBody)
        .build();

// Execute the request and get the response from the server
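// Note: Android won't let you make network requests on the main thread, so
// run this code on a background thread (e.g., in an AsyncTask or worker thread).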
Response response = null;

try {
    response = client.newCall(request).execute();
} catch (IOException e) {
    e.printStackTrace();
}

// Check the response to see if the upload succeeded
if (response == null || !response.isSuccessful()) {
    Log.w("Example", "Unable to upload to server.");
} else {
    Log.v("Example", "Upload was successful.");
}

// Close the response body when you're done with it to free its resources
if (response != null) {
    response.body().close();
}
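
If you’d rather not manage a background thread yourself, OkHttp can also run the call asynchronously with enqueue(). Here’s a minimal sketch reusing the client and request objects from above; note that OkHttp invokes the callback on one of its own worker threads, so don’t touch the UI from it directly.

// Execute the request asynchronously on one of OkHttp's worker threads
client.newCall(request).enqueue(new Callback() {
    @Override
    public void onFailure(Call call, IOException e) {
        Log.w("Example", "Unable to upload to server.", e);
    }

    @Override
    public void onResponse(Call call, Response response) throws IOException {
        if (response.isSuccessful()) {
            Log.v("Example", "Upload was successful.");
        } else {
            Log.w("Example", "Upload failed with HTTP " + response.code());
        }
        response.body().close();
    }
});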

Summary

OkHttp is awesome because it removes a lot of the heavy lifting needed to work with HTTP requests in Android. Construct your request content using Java objects and it’ll do the rest for you. If you’re looking for a replacement for the Apache HTTP client removed in Android 6.0, I strongly recommend this one.

ABBI Demo at ICMI ’16

Earlier this month I was in Tokyo for the International Conference on Multimodal Interaction (ICMI). I was there to demo research from the ABBI project. We had two ABBI demos from the Multimodal Interaction Group at the conference: mine demonstrated how ABBI could be used to adapt home lighting for visually impaired children, and Graham’s was about using non-visual stimuli (e.g., thermal and vibration feedback) to present affective cues in a more accessible way for visually impaired smartphone users.

The conference was good and it was held in an amazing city – Tokyo. Next year, ICMI visits another amazing city – Glasgow! Julie and Alessandro from the Glasgow Interactive Systems Group will be hosting the conference here at Glasgow Uni.

Viva and Other CHI ’16 Papers

Last week I passed my viva, subject to minor thesis corrections!

I’ve also had a Late-Breaking Work submission accepted to CHI, which discusses recent work I’ve been doing on the ABBI (Audio Bracelet for Blind Interaction) project. The paper, titled “Using Sound to Help Visually Impaired Children Play Independently”, describes initial requirements capture and prototyping for a system which uses iBeacons and a ‘smart’ bracelet to help blind and visually impaired children during play time at nursery and school.

Finally, we’ve also had a position paper accepted to the CHI ’16 workshop on mid-air haptics and displays. It outlines mid-air haptics research we have been doing at Glasgow and discusses how it can inform the creation of more usable mid-air widgets for in-air interfaces.

CHI 30-second Preview + ACing

Below is a (very!) short preview of my upcoming CHI paper. In recent years, CHI has asked authors to submit a 30-second preview video summarising their accepted paper, so that’s mine.

This year I’m an AC for Late-Breaking Work submissions at CHI. I’ve been reviewing papers since the start of my PhD, but this is my first time as an AC. It’s been interesting to see a conference from the “other” side.

CHI 2016, ABBI, and other things

My CHI 2016 submission, “Do That There: An Interaction Technique for Addressing In-Air Gesture Systems”, has been conditionally accepted! The paper covers the final three studies in my PhD, where I developed and evaluated a technique for addressing in-air gesture systems.

To address a gesture system is to direct input towards it; this involves finding where to perform gestures and how to specify the system you intend to interact with (so that other systems do not act upon your gestures). Do That There (a play on one of HCI’s most famous gesture papers, Put That There) supports both of these: it shows you where to perform gestures using multimodal feedback (there), and it shows you how to identify the system you want to gesture at (do that).

Three months ago I started working on the ABBI (Audio Bracelet for Blind Interaction) project as a post-doctoral researcher. The ABBI project is developing wearable technology for blind and visually impaired children. Our role at Glasgow is to investigate sound design and novel interactions which use the technology, focusing on helping visually impaired kids. Recently, we’ve presented our research and ideas to the RNIB TechShare conference and to members of SAVIE, an association focusing on the education of visually impaired children.

Finally, I submitted my PhD thesis in September, although I’m still waiting for my final examination. Unfortunately, that won’t be happening in 2015, but I’m looking forward to getting it wrapped up soon.

Interactive Light Demo at Interact ’15

This week I’ve been in Bamberg, Germany, presenting a poster and an interactive demo at Interact 2015. If you’ve stumbled across this website via my poster, or if you tried my demo at the conference, then it was nice meeting you and I hope you had some fun with it! If you’re looking for more information about the research, I’ve written a little about it here: http://euanfreeman.co.uk/interactive-light-feedback/

For some earlier research, where we looked at using tactile feedback for in-air gestures, see: http://euanfreeman.co.uk/projects/above-device-tactile-feedback/

PhD Thesis and Interact 2015

I haven’t updated my website in months, mostly because I’ve been focusing on finishing my PhD research. I started writing my thesis in May and, with 60,000 words written, my first draft is almost complete. It’s been an exciting couple of months writing it all up and I’m looking forward to finishing. Not because I haven’t enjoyed it, but because it’s the most substantial piece of work I’ve ever undertaken and it’s exciting to see it all come together into a single piece of writing.

Not much else has happened in that time, although I did get two submissions accepted to Interact 2015: a poster [1] and a demo [2]. Both describe interactive light feedback, something which has featured a lot in my recent PhD research. I describe interactive light feedback in more detail here. Interact is in Bamberg, Germany, this year, which I’m excited about visiting! Nearer the time (mid-September) I’ll share more photos and maybe a video of the demo I’ll be giving at the conference.

[1] Towards In-Air Gesture Control of Household Appliances with Limited Displays
E. Freeman, S. Brewster, and V. Lantz.
In Interact 2015 Posters. 2015.

[2] Interactive Light Feedback: Illuminating Above-Device Gesture Interfaces
E. Freeman, S. Brewster, and V. Lantz.
In Interact 2015 Demos. 2015.

Synthesising Speech in Python

There’s a Scottish company called CereProc who do some of the best speech synthesis in the world. They excel in regional accents, especially difficult Scottish ones! I’ve been using their CereVoice Cloud SDK in some recent projects (like Speek). In this post I’m going to share a wee Python script and an Android class for using their cloud API to generate synthesised speech. To use these, you’ll need to create a (free) account over on CereProc’s developer site and then add your auth credentials to the code.

Downloading Speech in Python

Call the download() function with the message you wish to synthesise, optionally specifying which voice to use, which file format to use and what to name the file.

Downloading and Playing Speech in Android

Create a CereCloudPlayer object and use its play method to request, download, and play the message you wish to synthesise.
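
The CereCloudPlayer class itself isn’t shown here, but the playback step is straightforward. Below is a minimal sketch of that step only, assuming the cloud API has already given you a URL to a synthesised audio file (audioUrl is a placeholder, not part of the CereVoice API), using Android’s MediaPlayer to stream and play it:

// A minimal sketch of the playback step (not the CereCloudPlayer class itself).
// 'audioUrl' is a placeholder: the URL of a synthesised audio file returned by
// your request to the cloud API.
String audioUrl = "https://example.com/synthesised-speech.mp3";

MediaPlayer player = new MediaPlayer();
try {
    player.setAudioStreamType(AudioManager.STREAM_MUSIC);
    player.setDataSource(audioUrl);
    player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
        @Override
        public void onPrepared(MediaPlayer mp) {
            mp.start();
        }
    });
    player.prepareAsync();  // Prepare in the background, then play when ready
} catch (IOException e) {
    e.printStackTrace();
}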

Pure Data Patches

I’ve started uploading and documenting some Pure Data patches which I’ve used for generating Earcons and Tactons – hopefully they’ll be useful to someone. Check them out here.

ICMI ’14 Highlights

Last week I was in Istanbul for ICMI ’14, the International Conference on Multimodal Interaction. ICMI is where signal processing and machine learning meet human-computer interaction, with the aim of finding ways to use and improve multimodal interaction.

Ask two people and you’ll get two different definitions of “multimodal interaction”. From my (HCI) perspective, it is interaction with technology using a variety of human capabilities, such as our perceptual abilities (like seeing, hearing, feeling) and motor control abilities (like speaking, gesturing, touching). In one of this year’s keynotes, Yvonne Rogers said we should design multimodal interfaces because we also experience the world using many modalities.

In this post I’m going to recap what I thought were the most interesting papers at the conference this year. There are also some photos of the sights, because why not?

Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations

by Radu-Daniel Vatavu, Lisa Anthony and Jacob O. Wobbrock

Vatavu et al. presented a poster on Gesture Heatmaps, colourful visualisations of how users perform touch-stroke gestures. The heatmaps encode characteristics of gesture articulation, such as stroke speed and distance error from a gesture template, and can be used to summarise gesture performances: for example, to identify problematic gestures or to understand which parts of a gesture users find difficult. Something I liked about this paper was the way they used these visualisations to create confusion matrices, showing where and why gestures were misclassified.

CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association

by Andrew D. Wilson and Hrvoje Benko

Wilson and Benko found that device acceleration (from accelerometers) was highly correlated with image acceleration (from a Kinect, in this case). This means that fusing acceleration data from these two sources can be used to identify a particular person in an image, even if their mobile device isn’t visible (for example, phone in pocket). Some advantages of using this approach are that users can be found in an image from their device movement alone (simplifying identification) and devices can be identified and tracked, even without direct line of sight.
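
The matching idea is easy to picture in code. Below is a toy illustration (not the authors’ implementation) that matches a device to one of several camera-tracked people by picking the person whose image-space acceleration correlates most strongly with the device’s accelerometer signal; it assumes the two signals are time-aligned and equally long.

// Toy sketch: pick the tracked person whose image-space acceleration
// correlates best with the device's accelerometer magnitudes.
static int matchDeviceToPerson(double[] deviceAccel, double[][] personAccels) {
    int bestPerson = -1;
    double bestCorrelation = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < personAccels.length; i++) {
        double r = pearson(deviceAccel, personAccels[i]);
        if (r > bestCorrelation) {
            bestCorrelation = r;
            bestPerson = i;
        }
    }
    return bestPerson;
}

// Standard Pearson correlation between two equal-length signals.
static double pearson(double[] x, double[] y) {
    double meanX = 0, meanY = 0;
    for (int i = 0; i < x.length; i++) {
        meanX += x[i];
        meanY += y[i];
    }
    meanX /= x.length;
    meanY /= y.length;

    double cov = 0, varX = 0, varY = 0;
    for (int i = 0; i < x.length; i++) {
        cov += (x[i] - meanX) * (y[i] - meanY);
        varX += (x[i] - meanX) * (x[i] - meanX);
        varY += (y[i] - meanY) * (y[i] - meanY);
    }
    return cov / Math.sqrt(varX * varY);
}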

SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces

by Koray Tahiroğlu, Thomas Svedström, Valtteri Wikström, Simon Overstall, Johan Kildal and Teemu Ahmaniemi

Tahiroğlu et al. looked at how audio cues could be used to guide interactions with a deformable interface. They found that sound was an effective way of encouraging users to deform devices, and some of their designs were particularly effective for guiding users to specific deformations. Based on these findings, they recommend using sound to help users discover deformations. Koray had a cool demo at the conference, which was the first time I’d tried a deformable device prototype. Pretty neat idea.