David Pumfrey will only be supervising projects for the MSc in Safety Critical Systems Engineering (SCSE) and MSc in Gas Turbine Control (GTC) this year.
I am happy to consider suggestions for student-defined projects. Some further suggestions of my own appear below; note that all of these are open for discussion, i.e. if my suggestion inspires an idea for a related project, or you'd like to change the scope or direction of one of my suggestions, please feel free to talk to me about it.
11.djp.1 Sneak Analysis: Realising the Potential [SCSE]
Sneak Analysis has existed in various forms since the mid-1960s. Sneak Circuit Analysis [1.1] is the best-defined and most widely used variant of the technique, and is supported by a number of excellent tools for use in specific domains, such as 12-volt automotive circuitry.
Other variants of the technique, however, such as sneak analysis at system level, or software sneak analysis, have not found particularly wide acceptance. This has been at least partly due to the originators of the technique resisting widespread publication of guidance for the method. Recently, however, the European Space Agency / ECSS (European Cooperation for Space Standardization) has published its own standard for Sneak Analysis (including Software Sneak Analysis), which is available as two documents: "method and procedure" [1.2] and "clue list" [1.3].
From the documentation of Sneak Analysis that has previously been available, it appears that the method incorporates some unique ideas (such as its "path" model, and explicit investigation of unintended interactions) which offer the potential for a more comprehensive and powerful analysis than widely accepted techniques such as HAZOP [1.4, 1.5]. However, many system safety engineers who have reviewed the method (and the ESA documentation) have been repelled by its apparent complexity, the huge set of "clues" proposed, and by the difficulty of gaining confidence in the completeness and validity of the analysis.
A few years ago, researchers in the HISE group at York conducted a review of HAZOP [1.6, 1.7], which attempted to identify the key features that give the technique its strength and utility. The review resulted in a technique called "SHARD", which was designed to retain the principal technical benefits of HAZOP whilst reducing the cost and effort required to apply it.
This project aims to do something similar for Sneak Analysis, i.e. to investigate whether there are useful ideas and features unique to Sneak Analysis that could be retained, whilst cutting down the size of the clue set and finding other refinements to produce a more practical and cost-effective approach ("Sneak-Lite"!).
The project will review the available documentation of all forms of Sneak Analysis, with particular focus on the ESA standard. Having identified and characterised the underlying principles of the technique, a minimal "core" method will be defined, which retains the unique features of Sneak whilst simplifying the clue lists and reducing the dependence on highly experienced analysts. The refined method will be tested by practical application to at least one substantial case study.
[1.1] Rankin J.P., Sneak Circuit Analysis. Nuclear Safety, vol. 14 no. 5, 1973
[1.2] European Cooperation for Space Standardization (ECSS), Space Product Assurance: Q-40-04 Part 1A: Sneak Analysis - Part 1: Method and Procedure, ECSS Secretariat, Requirements and Standards Division, Noordwijk, The Netherlands, 1997
[1.3] European Cooperation for Space Standardization (ECSS), Space Product Assurance: Q-40-04 Part 2A: Sneak Analysis - Part 2: Clue List, ECSS Secretariat, Requirements and Standards Division, Noordwijk, The Netherlands, 1997
[1.4] CISHEC, A Guide to Hazard and Operability Studies, 1977. The Chemical Industry Safety and Health Council of the Chemical Industries Association Ltd
[1.5] Kletz T., Hazop and Hazan: Identifying and Assessing Process Industry Hazards, Third ed., 1992. Institution of Chemical Engineers. ISBN 0-85295-285-6
[1.6] McDermid J.A. and Pumfrey D.J., A Development of Hazard Analysis to aid Software Design in COMPASS '94: Proceedings of the Ninth Annual Conference on Computer Assurance, NIST Gaithersburg MD, 1994. pp. 17-25. IEEE, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331.
[1.7] McDermid J.A., Nicholson M., Pumfrey D.J. and Fenelon P., Experience with the Application of HAZOP to Computer-Based Systems in COMPASS '95: Proceedings of the Tenth Annual Conference on Computer Assurance, 1995, pp. 37-48.
11.djp.2 Converting from Prescriptive to Risk-Based Safety Regulation: An investigation of common issues [SCSE]
Over the past few years, an increasing number of safety critical industry sectors have made, or begun to make, a transition from a prescriptive to a risk-based style of regulation. Examples include offshore oil production, military procurement and, most recently, civil air traffic management. The move to risk-based regulation is usually coupled with a formal requirement for some form of Safety Management System, e.g. that described by the International Civil Aviation Organisation in [2.1].
Very crudely, prescriptive regulation can be characterised as "do X,Y and Z and we will give you a licence". Risk based regulation can be characterised as "You can decide for yourselves how to do things, but you must provide us with a satisfactory argument of safety". The reality under either regime is, of course, much less straightforward than this!
It is clear that such a change of regime is difficult for both the regulator and the regulated. Above all, it appears to imply a major shift of responsibility for risk identification and determination of acceptable control measures. Further discussion can be found in a paper from Adelard and the CAA [2.2].
This is a project which would appeal to someone who would like to consider the "philosophy" of Safety Critical Systems engineering and management.
The exact definition of this project is open to discussion with interested students, as the scope and, particularly, evaluation of the work undertaken will depend on the candidate's experience and access to suitable case studies in their workplace. However, I anticipate that an important component of the work will be interviews with practitioners from sectors where such changes are taking place - for example, the regulation of civil air traffic management.
Ideally, I would like the project to provide a theoretical investigation of the implications of a change from prescriptive to risk-based regulation, and to identify likely problems and issues that might be faced. The aim would be to produce a "practical primer" to help both regulator and regulated organisations consider how they should prepare for the transition.
[2.1] International Civil Aviation Organisation, Doc 9859: Safety Management Manual (SMM) - First Edition, 2006, ISBN: 92-9194-675-3
[2.2] Penny J., Eaton A., Bishop, P.G. and Bloomfield R.E., The Practicalities of Goal-Based Safety Regulation in Aspects of Safety Management: Proceedings of the Ninth Safety-Critical Systems Symposium, Redmill F. and Anderson T. (eds.), Springer, 2001, ISBN: 1-85233-411-8, pages 35-48
11.djp.3 ALARP: Confronting the assumption of free information [SCSE, GTC]
The idea that risk should be reduced As Low As Reasonably Practicable (ALARP) [3.1] is a fundamental principle of safety management, especially in the UK, where ALARP has been understood in law since the judgement of Lord Justice Asquith [3.2] in 1949, and forms a key element of legislation such as the Health and Safety at Work Act [3.3].
The application of the ALARP principle requires that sufficient information is available to make well-informed judgements about the costs and risk-reduction benefits of any given option. It has been observed that, implicitly, the "traditional" interpretation of ALARP assumes that this information is free (i.e. the cost of obtaining it is negligible). In other words, the only cost considered in ALARP is that of taking action to mitigate risk, not of the work required to determine the potential cost and benefit of such action.
This becomes a significant problem for the application of ALARP in situations where the cost of information is, itself, high. An example of such a situation arises in software-intensive systems, where the cost of analysis to determine properties of the software (information cost) can be extremely high compared to the cost of software modifications (action cost).
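The cost comparison underlying these two paragraphs can be made concrete with a small sketch. The function below applies a "gross disproportion" style test (a common way of operationalising ALARP in UK practice: a measure is reasonably practicable unless its cost grossly exceeds the value of the risk reduction), first in its traditional form, then with the cost of obtaining the supporting information added to the action cost. All figures and the disproportion factor are hypothetical, chosen purely for illustration; this is not a prescribed calculation from any of the referenced documents.

```python
def reasonably_practicable(action_cost, risk_reduction_value,
                           disproportion_factor=3.0, information_cost=0.0):
    """Return True if a mitigation is reasonably practicable, i.e. its
    total cost is NOT grossly disproportionate to the benefit it buys.
    The traditional test sets information_cost to zero (information is
    assumed free); a non-zero value charges for the analysis needed to
    establish the risk-reduction benefit in the first place."""
    total_cost = action_cost + information_cost
    return total_cost <= disproportion_factor * risk_reduction_value

# A cheap software modification whose benefit can only be established
# by very expensive analysis (hypothetical figures, in pounds):
print(reasonably_practicable(10_000, 20_000))                            # True
print(reasonably_practicable(10_000, 20_000, information_cost=200_000))  # False
```

Under the free-information assumption the modification is clearly required; once the analysis cost is charged, the same measure fails the test, which is exactly the tension the project is asked to examine.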
In (partial) response to this problem, McDermid and Kelly have proposed the concept of "As Confident as Reasonably Practicable" (ACARP) [3.4], which proposes that there is, in effect, a scale of confidence in the evidence available to justify safety related decisions, and that this scale of confidence can be treated similarly to the way risk is considered under ALARP: in other words, confidence can be divided into regions of unsatisfactory, tolerable provided that the lack of further work can be justified, and generally satisfactory. There is, however, little published work on this concept, and there has consequently been little public discussion of whether it truly addresses the problems of ALARP.
Project Outline
This project will examine how the assumption of free information implicit in the ALARP principle can be challenged and managed. It will consider ways in which the "risk of not knowing" can be factored into ALARP considerations, and how to formulate an approach to making ALARP arguments in situations where knowledge is incomplete. The ACARP principle will be investigated, and its specific applicability to the ALARP information problem will be tested.
New ideas will be encouraged, probably involving the investigation of techniques from outside traditional system safety engineering (for example, game theory).
The ultimate aim of the project will be to provide practical guidance notes for engineers and managers, and (one or more) revised and extended version(s) of Tim Kelly's ALARP safety case pattern [3.5, 3.6], showing how to complete ALARP arguments under these circumstances.
[3.1] Health & Safety Executive, Reducing Risks, Protecting People: HSE's decision-making process, HSE, 2001, ISBN 0 7176 2151 0, available online at http://www.hse.gov.uk/risk/theory/r2p2.pdf.
[3.2] Lord Justice Asquith, Judgement in Edwards v National Coal Board, 1949, quoted in Barret B. and Howells R., Occupational Health and Safety Law, 1993, Pitman Publishing, London.
[3.3] Health and Safety at Work etc Act, 1974, available online at http://www.hse.gov.uk/legislation/hswa.pdf.
[3.4] McDermid J., Risk, Uncertainty, Software and Professional Ethics, Safety Systems: The Safety-Critical Systems Club Newsletter, vol.17 no.2, January 2008.
[3.5] Kelly T.P. and McDermid, J.A., Safety Case Construction and Reuse using Patterns, Proceedings of 16th International Conference on Computer Safety, Reliability and Security (SAFECOMP'97), September 1997, Springer.
[3.6] Kelly T.P., ALARP GSN pattern in Hazard and Risk Management (HRM) course notes, MSc in Safety Critical Systems Engineering, Department of Computer Science, University of York, 2007.
11.djp.4 Using techniques from Gathered Fault Combination to make Fault Trees tractable [SCSE, GTC]
To be added.
These are ideas which SCSE students may be interested in pursuing, but which I have not yet worked up into full project proposals.