CHI 2019


I was at CHI 2019 earlier this week. It was the biggest CHI so far (almost 3,900 attendees), so I’m extra proud to have been part of the organising committee – especially since it was in Glasgow! Aside from organisation, I helped with the University of Glasgow’s exhibitor booth, had two Interactivity exhibits about acoustic levitation, and chaired a great session on Touch and Haptics. I didn’t get to see many of the technical sessions, but a few stuck in my mind.

There were a couple of really good papers in the first alt.chi session: an analysis of dichotomous inference in CHI papers, followed by a look at trends and clichés in CHI paper writing. Both papers were well presented and were a chance to reflect on how we present our science as a community. I’m moving away from dichotomous statistics myself, but am a bit apprehensive about how reviewers will respond to that style. Papers like these give that change a bit more momentum, which we’ll all benefit from.
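To make that style concrete, here’s a minimal, entirely hypothetical sketch of what an estimation-based analysis looks like in Python: instead of a significant/not-significant verdict, it reports an effect size with an interval the reader can weigh. The numbers are made up, and the bootstrap is just one way to get an interval.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical task-completion times (seconds) for two interface conditions.
    baseline = np.array([12.1, 10.8, 13.5, 11.9, 12.7, 14.2, 11.3, 12.9])
    new_ui = np.array([10.4, 9.9, 11.8, 10.1, 11.2, 12.5, 10.7, 11.0])

    observed = new_ui.mean() - baseline.mean()  # point estimate of the effect

    # Bootstrap the mean difference to get an interval estimate
    # rather than a dichotomous significance verdict.
    boot = np.array([
        rng.choice(new_ui, size=len(new_ui), replace=True).mean()
        - rng.choice(baseline, size=len(baseline), replace=True).mean()
        for _ in range(10_000)
    ])
    low, high = np.percentile(boot, [2.5, 97.5])

    print(f"Mean difference: {observed:.2f} s, 95% CI [{low:.2f}, {high:.2f}]")

The particular method isn’t the point; what matters is that the result is an estimate with uncertainty rather than a pass/fail threshold.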

I liked Aakar Gupta’s talk on RotoSwype, which used an IMU embedded in a ring for swipe keyboard input in XR. The neat thing about that work was its focus on subtle, low-effort interaction, with the hands by the side of the body instead of raised in front. Fatigue is a big barrier to mid-air interaction, especially for prolonged tasks like text entry, so it was nice to see attention paid to that.

There were good papers in the Touch and Haptics session I chaired, but the one that especially sticks in my mind was Philip Quinn’s work on sensing touchscreen pressure input with a barometric pressure sensor. The core idea is that devices are sealed to prevent water and dust ingress, and also contain barometric pressure sensors for accurate altitude measurement; when someone presses on the touchscreen, the air pressure inside the almost-completely-sealed device changes briefly, and that internal change reliably correlates with the pressure applied to the screen. Our group in Glasgow did a lot of foundational work on pressure input for mobile devices, so it’s cool to see steps towards enabling it without needing dedicated force sensors.
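To give a rough flavour of the idea (this is my own toy sketch, not how their system works, and every value and threshold here is made up): treat the barometer as a stream of samples, track a slowly adapting baseline for ambient pressure, and flag brief rises above that baseline as presses.

    import numpy as np

    def detect_presses(pressure_hpa, alpha=0.05, threshold_hpa=0.02):
        # Track a slow-moving baseline for ambient pressure and flag samples
        # where the reading briefly rises above it -- a crude proxy for force
        # applied to the screen of a sealed device. All parameters are invented.
        baseline = pressure_hpa[0]
        pressed = np.zeros(len(pressure_hpa), dtype=bool)
        for i, p in enumerate(pressure_hpa):
            pressed[i] = (p - baseline) > threshold_hpa
            # Only adapt the baseline when no press is detected, so a sustained
            # press isn't absorbed into the ambient estimate.
            if not pressed[i]:
                baseline = (1 - alpha) * baseline + alpha * p
        return pressed

    # Synthetic barometer trace: slow ambient drift plus a brief bump from a press.
    samples = 1013.25 + 0.0005 * np.arange(250)
    samples[100:130] += 0.05
    print(np.flatnonzero(detect_presses(samples))[:5])  # indices flagged as presses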