Roadside cameras that can automatically read your license plate are now commonplace. They are used to enforce road pricing and are useful for preventing and detecting a wide range of criminal activity. However, a swapped or 'cloned' (copied) plate compromises the effectiveness of such systems. It would be highly useful to build an automatic system that could recognise generic vehicle type (car, lorry, bus), specific vehicle type (such as make and model) and vehicle colour from the video stream. This project will explore a range of algorithms that aim to achieve that goal. Key ideas to explore include (i) segmenting the vehicle from the background, (ii) building features and classifiers based on outline shape only, and (iii) building features and classifiers based on the foreground (vehicle) visual content.
It is anticipated that several system variants will be prototyped in MATLAB and compared using a database of roadside vehicle images and video clips.
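As an illustration of steps (i) and (ii) above, here is a minimal Python/numpy sketch (the project itself would be prototyped in MATLAB); the function name and thresholds are hypothetical, and it assumes greyscale frames plus a static empty-road background image:

```python
import numpy as np

def silhouette_features(frame, background, thresh=30):
    """Segment the vehicle by background subtraction, then compute
    simple outline-shape features from the resulting foreground mask."""
    # Foreground mask: pixels that differ markedly from the empty-road background.
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no vehicle present
    # Bounding-box aspect ratio and fill ratio are crude but useful cues
    # for generic vehicle type (a lorry is longer and boxier than a car).
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = w / h
    fill = mask.sum() / (w * h)
    return np.array([aspect, fill])
```

Feature vectors like this could then feed a simple classifier (e.g. nearest class prototype) as a baseline before trying richer shape and appearance descriptors.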
PAT: desirable; CVI: desirable
Imagine that you take a picture using your mobile phone while out for a walk. When you come home, you might want to store the image on your PC and print a copy on your PC's printer. This would be easy if you could do it all just by moving your phone in front of your PC screen. For example, you could look at the "My Documents" folder through the phone's camera and twist the phone momentarily to open that folder. Do the same for the "My Pictures" folder, and then, by moving the phone momentarily towards the PC screen, transfer all recently taken images to the PC over a Bluetooth link. This is now possible using technology developed at the University of York and the University of Newcastle (see the New Scientist article).
The basic requirement is that your phone has a rear-facing camera and a wireless connection to the PC, such as Bluetooth. The phone's camera looks at the PC screen and extracts features, such as the corners of windows, and the positions of these features are sent from the phone to the PC over the Bluetooth link. The PC then compares these positions with what it knows it is displaying on its screen, so it can calculate precisely what part of the screen the phone is looking at. It can also calculate the phone's position and orientation relative to the PC screen. Thus the phone can be used as a six-degree-of-freedom "flying mouse", a bit like the Wii system. But it is more than that: because you have a cloned part of the PC screen on your phone's screen, you can interact with the PC screen through the phone. This starts to get exciting when you realise that large public displays are just PCs with large screens.
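The registration step described above can be sketched as fitting a homography: given four or more feature correspondences between the phone image and the known screen contents, the direct linear transform (DLT) recovers the 3x3 mapping between the two planes. A minimal numpy sketch (the real system would go further and decompose this, with camera calibration, into the phone's 6-DOF pose; function names here are illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: estimate the 3x3 homography H mapping src points (phone image)
    to dst points (PC screen). Needs at least four correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (stacked as a 9-vector) is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Once H is known, the PC can warp its screen contents into the phone's view (the "cloned screen"), and any point the phone looks at maps to an exact screen location.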
The current system uses bright green markers to establish a one-to-one registration between the mobile phone image and the PC screen; the next stage is to develop a markerless system. Rather than a mobile phone, we will use a webcam connected to a laptop, or even to the PC itself, to make system development easier. We would like to find features that are invariant to (i.e. not changed by) perspective distortion and match them between the phone image and the PC screen. Part of the project will be to identify such invariant features and work out how to extract them from the images. We will look at David Lowe's work on Scale Invariant Feature Transforms (SIFT features). The system will be implemented in MATLAB and evaluated on a wide variety of PC screen configurations.
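Once invariant descriptors are extracted from both images, they must be matched reliably. Lowe's nearest-neighbour ratio test is the standard way to do this: accept a match only when the closest descriptor is markedly closer than the second closest, which rejects ambiguous matches. A small numpy sketch (the function name and the 0.8 ratio are illustrative defaults, not part of the project specification):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from the phone image (desc_a) to the screen
    image (desc_b) using Lowe's nearest-neighbour ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distances from this descriptor to all candidates.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

The surviving matches would then feed the registration stage (e.g. a robust homography fit), replacing the bright green markers of the current system.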
For videos of a marker-based system working, see Nick Pears' direct interaction research page.
PAT: desirable; CVI: desirable
Hartley, R. and Zisserman, A., 2000. Multiple View Geometry in Computer Vision. Cambridge University Press.
Forsyth, D.A. and Ponce, J., 2003. Computer Vision: A Modern Approach. Prentice Hall.
Zhao, W. and Chellappa, R., 2006. Face Processing: Advanced Modeling and Methods. Academic Press.
Bishop, C.M., 2006. Pattern Recognition and Machine Learning. Springer.