HCI Research Group Seminars

News about seminars (and other activities of the HCI Group) is distributed via the york-hci mailing list, which is maintained by Jisc; you may subscribe yourself to the list.

Seminars usually take place on Thursdays, 13.30-14.30 in CSE/082 – unless otherwise specified.

Information for speakers is available.

Summer Term 2017

Date Speaker Room Title
(Links to abstract)
20 April Sam Simpson CSE/082 Postponed: How do experienced mindfulness practitioners develop a mindful life and what does this tell us about playing computer games mindfully?
27 April Chris Power CSE/082 The Science of Getting Stuck: Understanding Perceived Uncertainty in Interactive Systems
4 May LMB/036
11 May CSE/082
Wednesday 17 May Helen Petrie CSE/082 Departmental seminar
Usable security: a view from HCI
18 May Tobias Mülling
University of Brighton
CSE/082 Embracing the Gesture-Driven Interface: challenges for the design of graphical mid-air interfaces using a gestural approach
Wednesday 24 May Alistair Edwards CSE/082 Postponed: Departmental seminar
The SRC Common Base Policy
25 May Andrew Lewis CSE/082 Postponed: Introduction to the Mobile ESM application
1 June Guido Gybels
Consultant in accessibility and usability
CSE/082 No user left behind: embedding accessibility and usability throughout design and implementation
8 June Aaron Quigley
University of St Andrews
CSE/082 Ubiquitous User Interfaces
15 June John Mateer
CSE/082 Directing for Cinematic Virtual Reality: the relationship between 'presence', 'transportation' and 'suspension of disbelief'
22 June David Zendle CSE/082 Postponed: VR Workshop: An introduction to designing and conducting room-scale VR experiments with Unity3D and the HTC Vive

27 April
Chris Power: The Science of Getting Stuck: Understanding Perceived Uncertainty in Interactive Systems

Imagine a gamer, trying to jump over a chasm for the twentieth time, wondering if they are doing something wrong, or if the game is just too hard for them. Picture a family historian navigating through 300 pages of search results to discover a long-lost aunt, but unsure which poorly labelled link will lead to her place of birth. Envision financial analysts who want insights about their business, but cannot orient themselves in multi-dimensional data because the system does not react the way they expect. Finally, remember your own experiences, when you were hopelessly lost on a website, unable to find that form or policy you needed, even though you were sure you had found it before.

All of these scenarios are examples of users experiencing uncertainty in interactive systems. This uncertainty leads to users getting "stuck", unable to progress in their tasks. Some of this uncertainty is unavoidable, caused by what we are trying to do, such as solving hard problems or playing a game. In other cases, uncertainty is unnecessary, caused by the design and feedback of the interactive system.

In this talk I will discuss some of the ongoing work by our PhD and MSc students on untangling this problem.

18 May
Tobias Mülling: Embracing the Gesture-Driven Interface: challenges for the design of graphical mid-air interfaces using a gestural approach

Mid-air interaction has been investigated for many years, and with the launch of affordable sensors such as Microsoft Kinect (Microsoft Corporation), Leap Motion (Leap Motion, Inc.) and Myo Armband (Thalmic Labs Inc.), this type of interaction has become more popular. However, graphical interfaces for mid-air interaction have been designed using two dominant styles: cursor-based, where the user's hand replicates the mouse movement by copying the WIMP interaction pattern (Windows, Icons, Menus, and Pointing), and gesture-library, where different gestures are assigned to the functionalities of a system, generating a cognitive overload due to the need for recall over recognition. The use of a gestural approach based on manipulation presents itself as an alternative to the mentioned interaction styles, with a focus on enhancing the user experience in 2D Design Patterns. Taking a practice-based research approach, this talk presents the design space of a gestural approach based on manipulation, with challenges and strategies that can be used in the design of graphical interfaces for mid-air interaction. A series of experiments will be presented to exemplify the use of visual elements and mid-air gestures, in an attempt to create gesture-driven interfaces with a satisfactory user experience.

25 May
Andrew Lewis: Introduction to the Mobile ESM application

The experience sampling method (ESM) is a well-established research methodology that asks participants to stop at certain times and make notes about their experience. Traditionally, this has been done with a paper booklet and a pager. Modern smartphones and mobile devices are able to update this procedure to deliver a questionnaire online, either paging the participant, or allowing the participant to directly send data to the researcher in response to a predetermined event. In this talk I will describe an open-source framework that allows researchers to develop ESM applications for Android and iOS devices, explain how to set up the environment, generate questions, and look at the results.
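The scheduling idea behind signal-contingent experience sampling — paging participants at random times within a daily window, with a minimum gap between prompts — can be sketched as follows. This is a minimal standalone illustration of the method, not the framework's actual API; the function name and parameters are hypothetical.

```python
import random
from datetime import datetime, timedelta

def sample_prompt_times(n_prompts, start_hour=9, end_hour=21,
                        min_gap_minutes=30, seed=None):
    """Draw n random prompt times within a daily window, spaced at least
    min_gap_minutes apart (signal-contingent experience sampling)."""
    rng = random.Random(seed)
    window = (end_hour - start_hour) * 60  # window length in minutes
    while True:
        # Draw candidate offsets and retry until the spacing constraint holds.
        minutes = sorted(rng.sample(range(window), n_prompts))
        gaps = [b - a for a, b in zip(minutes, minutes[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            break
    day_start = datetime.now().replace(hour=start_hour, minute=0,
                                       second=0, microsecond=0)
    return [day_start + timedelta(minutes=m) for m in minutes]

# Example: five prompts per day between 09:00 and 21:00.
for t in sample_prompt_times(5, seed=42):
    print(t.strftime("%H:%M"))
```

A real ESM application would deliver each prompt as a mobile notification and log the participant's responses, but the sampling logic is essentially this.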

1 June
Guido Gybels: No user left behind: embedding accessibility and usability throughout design and implementation

The staggering development of information and communication technology, at sustained, exponential rates in many areas, has brought tremendous change to the way people live, work and entertain themselves. The ubiquity of ICT, the march of “smart” devices and the ever more ambient nature of technology have changed our world fundamentally.

In our modern society, then, the ability to access, understand and use all these products and services is not just nice to have, but is absolutely essential for full opportunity and participation as a citizen.

However, users exhibit a wide diversity in terms of abilities and preferences. The diversity is such that defining mainstream interaction models is challenging, and any notion of mainstream implies also the need to cater for those interactions that fall outside it. A proper understanding of, and consideration for, the broad variety of user needs and requirements manifesting themselves as a consequence of these different individual abilities, characteristics and preferences is thus essential.

In this short seminar we will take a crash course on how to embed accessibility and usability in the scope, design, development, implementation and delivery of ICTs. We will look at the relationship between accessibility and usability, at the demographics that drive this wide variety of needs and requirements, the relationship between mainstream ICT and assistive technology, and some common myths and misunderstandings about accessibility and usability.

This is not a new topic: there has been a great deal of research in this field, along with many projects, investigations, pilot programmes, e-inclusion charters and investment. But while we see exponential developments in many aspects of technology, it often seems as though progress in accessibility and usability is painfully slow. In this seminar, we will therefore also touch upon the need for a more pre-emptive approach to the problem, recognising that retrofitting functionality to existing systems and services is often much harder than designing for it from the start.

We will also look at moving towards a deliberate, process-driven approach to accessibility and usability as opposed to the often ad-hoc or post-fact approach we see today. Finally, we will consider what further research may be helpful in progressing the accessibility and usability agenda.


Having written his first computer programme in 1979 and with over three decades of professional experience, Guido Gybels is a veteran technology expert with a proven track record of award-winning innovation, research and development, software and hardware engineering, standardisation and policy and regulation. A former Director of New Technologies and Director of Technology, he is also an accomplished senior manager with in-depth understanding of the wider context in which technology solutions must operate. He is a long-standing advocate of the view that technology is there to serve the user, not the other way around.

8 June
Aaron Quigley: Ubiquitous User Interfaces

Displays are all around us, on and around our body, fixed and mobile, bleeding into the very fabric of our day-to-day lives. Displays come in many forms such as smart watches, head-mounted displays or tablets, and fixed, mobile, ambient and public displays. However, we know more about the displays connected to our devices than they know about us. Displays, and the devices they are connected to, are largely ignorant of the context in which they sit, including physiological, environmental and computational state. They don't know about the physiological differences between people, the environments they are being used in, or whether they are being used by one person or several.

In this talk I review a number of aspects of displays in terms of how we can model, measure, predict and adapt how people can use displays in a myriad of settings. With modelling we seek to represent the physiological differences between people and use the models to adapt and personalise designs and user interfaces. With measurement and prediction we seek to employ various computer vision and depth sensing techniques to better understand how displays are used. And with adaptation we aim to explore subtle techniques and means to support the diverging input and output fidelities of display devices. This talk draws on a number of studies from work published in UIST, CHI, MobileHCI, IUI, AVI and UMAP.


Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews, UK. Aaron is one of the ACM Future of Computing Academy convenors, ACM SIGCHI Vice President for Conferences and program co-chair for the ACM IUI 2018 conference in Tokyo Japan. Aaron's research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation. He has published over 160 internationally peer-reviewed publications including edited volumes, journal papers, book chapters, conference and workshop papers and holds 3 patents. In addition he has served on over 80 program committees and has been involved in chairing roles of over 20 international conferences and workshops including UIST, ITS, CHI, Pervasive, UbiComp, Tabletop, LoCA, UM, I-HCI, BCS HCI and MobileHCI.

15 June
John Mateer: Directing for Cinematic Virtual Reality: the relationship between 'presence', 'transportation' and 'suspension of disbelief'

The emerging medium of 'Cinematic Virtual Reality' (CVR) features media fidelity that approaches what is found in feature film. Unlike traditional VR, CVR limits the level of control users have within the environment to choosing viewpoints rather than interacting with the world itself. This means that CVR production arguably represents a new type of filmmaking. 'Suspension of disbelief' represents the level of immersion audiences experience when watching a film. Likewise, 'presence' refers to a similar experiential measure in Virtual Reality though it is considered slightly differently. This talk considers the use of 'transportation theory' as a bridge between these constructs to enable established film directing methods to be more readily transferred to Virtual Reality and, specifically, Cinematic VR production.

22 June
David Zendle: VR Workshop: An introduction to designing and conducting room-scale VR experiments with Unity3D and the HTC Vive

Over the past two years, VR technology has become increasingly accessible, with many christening 2017 the 'year of VR'. For those unfamiliar with this new equipment, this session offers an overview of the capabilities of much of the VR equipment currently on the market, with a particular focus on the HTC Vive: the most advanced head-mounted display that is widely available to commercial consumers.

Following this, a broad overview of development for this new technology will be given, including the 'live-coding' of a VR simulation in the Unity3D game engine. Hands-on trials of the Vive will then be made available, whilst attendees split into groups to workshop potential new studies they could conduct with this exciting new technology.

Autumn Term 2017

Date Speaker Room Title
(Links to abstract)
5 October CSE/082
12 October CSE/082
19 October CSE/082
26 October CSE/082
2 November CSE/082
9 November CSE/082
16 November CSE/082
23 November CSE/082
30 November CSE/082

Archive of previous seminars