This project will investigate two main questions: how does a human being verify that a computer acting as their agent is carrying out a cryptographic algorithm as expected, and how does a human being verify that cryptographic data created on their behalf remains intact throughout a process? These two questions appear in many scenarios where human verifiability is required to reduce the trust assumptions that must be placed on computers.
The first question arises, for example, in electronic voting systems and in the use of digital signatures. In e-voting, a machine is needed to capture the intent of a voter in the form of a ballot. This might be the voter's personal computer in remote voting scenarios, or a so-called direct recording electronic (DRE) machine in a polling station. Usually the machine encrypts the voter's ballot, and in end-to-end verifiable voting systems (end-to-end verifiability being a widely accepted security requirement for voting systems) the voter must be able to verify this encryption. The current solution is for the voter to challenge the machine several times to gain confidence in its behaviour; for the system to achieve verifiability, these challenges must be unpredictable to the machine. It is well known, however, that humans are poor at producing unpredictable decisions (to challenge or not to challenge). Hence, the question remains an open problem.
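The challenge mechanism described above can be sketched in code. This is a simplified illustration only: a hash commitment stands in for the probabilistic public-key encryption (e.g. ElGamal) used in real systems, and the function names are hypothetical. The essential property is the same: if the voter unpredictably demands an audit, the machine must reveal its randomness, and any observer can recompute and check the result.

```python
import hashlib
import secrets

def encode_ballot(ballot: str) -> tuple[str, str]:
    """Machine encodes the ballot. A hash commitment stands in here for
    the probabilistic encryption used in real voting systems."""
    r = secrets.token_hex(16)  # the machine's encryption randomness
    commitment = hashlib.sha256((ballot + r).encode()).hexdigest()
    return commitment, r

def audit(commitment: str, claimed_ballot: str, revealed_r: str) -> bool:
    """On a challenge, the machine must reveal its randomness; anyone
    can then recompute the commitment and check that it matches."""
    recomputed = hashlib.sha256((claimed_ballot + revealed_r).encode()).hexdigest()
    return recomputed == commitment

# The machine commits to the voter's ballot.
c, r = encode_ballot("candidate A")

# If the voter challenges, the machine reveals r and the audit passes
# only when the committed ballot matches the voter's intent.
assert audit(c, "candidate A", r)

# A machine that secretly encoded a different ballot fails the audit.
assert not audit(c, "candidate B", r)
```

The security of the scheme rests entirely on the challenge being unpredictable: a machine that could guess which ballots will be audited could cheat on the unaudited ones, which is precisely why human unpredictability matters here.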
In the use of digital signatures, especially in light of recent developments making digital signatures legally binding, a similar open problem is acknowledged by the security community and is known as "What You See Is What You Sign", or WYSIWYS for short.
The second question arises, for example, in end-to-end encrypted instant messaging, the web of trust, and, again, electronic voting. In end-to-end encrypted instant messaging protocols such as Signal (deployed widely, e.g. in WhatsApp, Facebook Messenger, and Google Allo), a crucial step in detecting person-in-the-middle attacks is verifying that both user devices hold the same cryptographic keys. This is done by presenting the users with a digest (cryptographic hash) of the keys and asking them to compare the digests manually. The reason behind this choice is that there is currently no technical solution for preventing or detecting such attacks without involving humans or trusting a third device. A similar open problem exists in end-to-end verifiable electronic voting, where a voter is asked to compare the information on a "ballot casting receipt" (provided after casting a ballot, as the name suggests) with similar information appearing on a "bulletin board", a trusted public ledger containing the definitive list of ballots. This comparison is required to ensure that the voter's ballot has passed through the e-voting system unmodified. The information on the receipt usually includes a cryptographic digest of the encrypted ballot. Again, it is well known that humans are poor at comparing random strings, yet this remains the best available solution to an open problem.
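The key-comparison step can be sketched as follows. This is a loose illustration of the idea behind Signal's "safety numbers", not Signal's actual algorithm: the hash construction, truncation, and digit grouping shown here are illustrative assumptions. The point is that both parties derive the same short, human-comparable string from the two public keys, so a mismatch reveals a person-in-the-middle.

```python
import hashlib

def key_fingerprint(key_a: bytes, key_b: bytes) -> str:
    """Digest two public keys into a short human-comparable string.
    Sorting the keys first makes the result identical regardless of
    which party computes it."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).digest()
    # Render the leading bytes as groups of five decimal digits,
    # which are easier for humans to read aloud than hex.
    groups = []
    for i in range(0, 12, 2):
        n = int.from_bytes(digest[i:i + 2], "big") % 100000
        groups.append(f"{n:05d}")
    return " ".join(groups)

# Both users compute the fingerprint over the same pair of keys
# (the key values here are placeholders for real public keys).
alice_view = key_fingerprint(b"alice-public-key", b"bob-public-key")
bob_view = key_fingerprint(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # the strings the two users must compare
```

Shortening the digest like this trades security for usability: the fewer digits the users must compare, the easier the task, but also the easier it becomes for an attacker to find colliding keys, which is exactly the tension this project aims to study.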
Tackling these questions requires, on the one hand, knowledge of cryptographic protocols and of the security requirements in such scenarios, and, on the other, the study of how humans behave when interacting with machines and codes. Bringing these two bodies of knowledge together would provide the toolbox required to design usable and secure solutions to these general and recurring problems at the intersection of cyber security and human-computer interaction.