HCI Research Group Seminars
Seminars usually take place on Thursdays, 13.30-14.30 in CSE/082 – unless otherwise specified.
Summer Term 2017
(Titles link to the abstracts below)
|27 April||Chris Power||CSE/082||The Science of Getting Stuck: Understanding Perceived Uncertainty in Interactive Systems|
|Wednesday 17 May||Helen Petrie||CSE/082||Usable security: a view from HCI|
| ||(University of Brighton)||CSE/082||Embracing the Gesture-Driven Interface: challenges for the design of graphical mid-air interfaces using a gestural approach|
|Wednesday 24 May||Alistair Edwards||CSE/082||The SRC Common Base Policy|
|25 May||Andrew Lewis||CSE/082||Introduction to the Mobile ESM application (provisional title)|
| ||(Consultant in accessibility and usability)||CSE/082||(To be announced)|
| ||Aaron Quigley (University of St Andrews)||CSE/082||Ubiquitous User Interfaces|
| || ||CSE/082||Directing for Cinematic Virtual Reality: the relationship between 'presence', 'transportation' and 'suspension of disbelief'|
|22 June||David Zendle||CSE/082||(TBA - but something on juicy feedback)|
Chris Power: The Science of Getting Stuck: Understanding Perceived Uncertainty in Interactive Systems
Imagine a gamer, trying to jump over a chasm for the twentieth time, wondering if they are doing something wrong, or if the game is just too hard for them. Picture a family historian navigating through 300 pages of search results to discover a long-lost aunt, but unsure which poorly labelled link will lead to her place of birth. Envision financial analysts who want insights about their business, but cannot orient themselves in multi-dimensional data because the system does not react the way they expect. Finally, remember your own experiences, when you were hopelessly lost on a website, unable to find that form or policy you needed, even though you were sure you had found it before.
All of these scenarios are examples of users experiencing uncertainty in interactive systems. This uncertainty leads to users getting "stuck", unable to progress with their tasks. Some of this uncertainty is unavoidable, arising from what we are trying to do, such as solving hard problems or playing a game. In other cases, the uncertainty is unnecessary, caused by the design and feedback of the interactive system.
In this talk I will discuss some of the ongoing work by our PhD and MSc students on untangling this problem.
Embracing the Gesture-Driven Interface: challenges for the design of graphical mid-air interfaces using a gestural approach
Mid-air interaction has been investigated for many years, and with the launch of affordable sensors such as the Microsoft Kinect (Microsoft Corporation), Leap Motion (Leap Motion, Inc.) and Myo Armband (Thalmic Labs Inc.), this type of interaction has become more popular. However, graphical interfaces for mid-air interaction have been designed in two dominant styles: cursor-based, where the user's hand replicates mouse movement, copying the WIMP interaction pattern (Windows, Icons, Menus, and Pointing); and gesture-library, where different gestures are assigned to the functionalities of a system, creating cognitive overload because users must recall gestures rather than recognise options. A gestural approach based on manipulation offers an alternative to these interaction styles, with a focus on enhancing the user experience in 2D design patterns. Taking a practice-based research approach, this talk presents the design space of a gestural approach based on manipulation, along with challenges and strategies that can be used in the design of graphical interfaces for mid-air interaction. A series of experiments will be presented to exemplify the use of visual elements and mid-air gestures in an attempt to create gesture-driven interfaces with a satisfactory user experience.
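The contrast between the two dominant styles described in the abstract can be sketched in miniature. This is purely illustrative (not code from the talk, and not tied to any real sensor SDK): the first function shows the cursor-based style, where hand position is mapped directly onto WIMP-style pointing; the dictionary shows the gesture-library style, where each gesture must be recalled from memory rather than recognised on screen.

```python
# Illustrative sketch of the two mid-air interaction styles; all names
# here are hypothetical, not from any actual sensor API.

# Style 1: cursor-based. The hand's normalised position (0.0-1.0 in each
# axis) is mapped directly to a screen cursor, replicating the mouse.
def hand_to_cursor(hand_x, hand_y, screen_w=1920, screen_h=1080):
    """Clamp a normalised hand position and scale it to pixels."""
    x = max(0.0, min(1.0, hand_x))
    y = max(0.0, min(1.0, hand_y))
    return (round(x * screen_w), round(y * screen_h))

# Style 2: gesture-library. Each recognised gesture is bound to one
# system function; users must *recall* these bindings, which is the
# source of the cognitive overload mentioned above.
GESTURE_COMMANDS = {
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
    "pinch": "zoom_out",
    "spread": "zoom_in",
}

def dispatch(gesture):
    """Return the command bound to a gesture, or None if unmapped."""
    return GESTURE_COMMANDS.get(gesture)
```

A manipulation-based gestural approach, as proposed in the talk, would replace both mappings with direct manipulation of on-screen elements rather than either pointing or a memorised command set.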
Aaron Quigley: Ubiquitous User Interfaces
Displays are all around us, on and around our bodies, fixed and mobile, bleeding into the very fabric of our day-to-day lives. They come in many forms, such as smart watches, head-mounted displays and tablets, as well as fixed, mobile, ambient and public displays. However, we know more about the displays connected to our devices than they know about us. Displays, and the devices they are connected to, are largely ignorant of the context in which they sit, including physiological, environmental and computational state. They don't know about the physiological differences between people, the environments they are being used in, or whether they are being used by one person or several.
In this talk I review a number of aspects of displays in terms of how we can model, measure, predict and adapt how people use displays in a myriad of settings. With modeling we seek to represent the physiological differences between people and use the models to adapt and personalize designs and user interfaces. With measurement and prediction we seek to employ various computer vision and depth-sensing techniques to better understand how displays are used. And with adaptation we aim to explore subtle techniques and means to support the diverging input and output fidelities of display devices. This talk draws on a number of studies from work published in UIST, CHI, MobileHCI, IUI, AVI and UMAP.
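The "adaptation" theme above can be made concrete with a small sketch: choosing a UI layout from coarse properties of the display it lands on. This is my own illustration of the general idea, not code or categories from the talk; the thresholds and layout names are hypothetical.

```python
# Hypothetical sketch: adapting a UI to a display's output and input
# fidelity. Thresholds and layout names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Display:
    diagonal_inches: float  # physical size of the display
    touch: bool             # whether it accepts touch input

def choose_layout(display: Display) -> str:
    """Pick a layout density from coarse display properties."""
    if display.diagonal_inches < 3:    # smartwatch-class display
        return "glanceable"
    if display.diagonal_inches < 13:   # phone or tablet
        return "touch-first" if display.touch else "compact"
    return "full-desktop"              # large fixed or public display
```

A real system in this space would go further, as the abstract suggests, by also modelling the person in front of the display (via computer vision or depth sensing) rather than only the hardware.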
Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews, UK. Aaron is one of the ACM Future of Computing Academy convenors, ACM SIGCHI Vice President for Conferences and program co-chair for the ACM IUI 2018 conference in Tokyo, Japan. Aaron's research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation. He has published over 160 internationally peer-reviewed publications, including edited volumes, journal papers, book chapters, and conference and workshop papers, and holds 3 patents. In addition he has served on over 80 program committees and has been involved in chairing roles for over 20 international conferences and workshops, including UIST, ITS, CHI, Pervasive, UbiComp, Tabletop, LoCA, UM, I-HCI, BCS HCI and MobileHCI.
Directing for Cinematic Virtual Reality: the relationship between 'presence', 'transportation' and 'suspension of disbelief'
The emerging medium of 'Cinematic Virtual Reality' (CVR) features media fidelity approaching that of feature film. Unlike traditional VR, CVR limits the control users have within the environment to choosing viewpoints rather than interacting with the world itself. This means that CVR production arguably represents a new type of filmmaking. 'Suspension of disbelief' describes the level of immersion audiences experience when watching a film. Likewise, 'presence' refers to a similar experiential measure in Virtual Reality, though it is conceived slightly differently. This talk considers the use of 'transportation theory' as a bridge between these constructs, to enable established film directing methods to be more readily transferred to Virtual Reality and, specifically, Cinematic VR production.