Recent white papers by Nancy Leveson:
Nancy Leveson, Safety-III: A Systems Approach to Safety and Resilience, July 2020
Recently, there has been a lot of interest in some ideas proposed by Prof. Erik Hollnagel, labeled "Safety-II" and argued to be the basis for achieving system resilience. He contrasts Safety-II with what he describes as Safety-I, which he claims is what engineers do now to prevent accidents. What he describes as Safety-I, however, bears little or no resemblance to what is done today or to what has been done in safety engineering for at least 70 years. This white paper describes the history of safety engineering, provides a description of safety engineering as actually practiced in different industries, shows the flaws and inaccuracies in Prof. Hollnagel's arguments and the flaws in the Safety-II concept, and suggests that a systems approach (Safety-III) is a way forward for the future.
Nancy Leveson, White Paper on Limitations of Safety Assurance and Goal Structuring Notation (GSN), July 2020
People are putting a lot of effort into figuring out how to assure that a system is safe after its design is completed. This white paper presents some of the difficulties with, and alternatives to, emphasizing after-the-fact assurance of safety.
Nancy Leveson, Shortcomings of the Bow Tie and other Safety Tools Based on Linear Causality, September 2019.
For some reason, bow tie diagrams are becoming widely used and are thought to be relatively new. Actually, they date back to the early 1970s and seem to have been rediscovered and greatly simplified in the 1990s. They are the least powerful and least useful modeling and diagramming language available. In this paper, I explain why the standard safety tools based on linear causality (including bow ties) oversimplify the causes of accidents, omit the most important causal factors, and underestimate the level of risk in a system. Special emphasis is placed on bow tie diagrams, including their problems and limitations.
Nancy Leveson, Improving the Standard Risk Matrix: Part 1, February 2019
The Risk Matrix is widely used but has many limitations. This white paper describes the problems with the standard Risk Matrix and how to improve the results obtained by using it. A second part is in preparation that suggests a change to the Matrix and the standard definition of risk.
Nancy Leveson, An Engineering Perspective on Avoiding Inadvertent Nuclear War, January 2019:
Written for a workshop on Nuclear Command, Control, and Communication Systems and Strategy Stability.
Nancy Leveson, How to Perform Hazard Analysis on a ‘System-of-Systems’
The term “system-of-systems” is misleading and hindering progress. This paper describes why this is true and shows how STPA can be used to perform hazard analysis on what has been labeled (erroneously) a system-of-systems using an extremely complex defense system as an example.
This paper proposes augmenting the standard V-model to assist in designing human-cyber-physical systems. A new process to create a Conceptual Architecture is inserted after Concept Development and Requirements Engineering and before detailed physical/logical Architecture Development.
In the standard V-model, going from a high-level conceptual view of a system or CONOPS, agreed upon by the stakeholders, to detailed requirements and then to a physical/logical architecture requires several large jumps without much assistance in making the design decisions involved. These jumps need to be simplified, and assistance provided in making them, if we want to produce better designs. Too often we find later that there are potential safety and security issues in the architecture generated. By then, changes to achieve these and other critical system properties may be enormously expensive or even infeasible, forcing reliance on operational controls of limited effectiveness and reliability, and some upgrades may be impossible.
A conceptual architecture can also augment our ability to produce user-centered designs. We blame most accidents on the operators (pilots, drivers, etc.) but have few tools that can forge an effective partnership between the human factors experts who design system interfaces (control panels, displays, physical controls) and operator procedures and the engineers who focus on the physical (hardware) and logical (software) parts of the system. Too often today, these two groups work relatively independently, and we end up creating systems with the potential for mode confusion, situational awareness problems, etc. These problems need not arise if the designers can work together effectively as an integrated team, but for that they need common models and a common language.
The process of creating a conceptual architecture will not only make it easier to design safety, security, and other emergent properties into these systems from the beginning, but will also greatly increase our ability to assure, operate, maintain, and evolve these systems within reasonable cost limits. It could also have important uses in the certification of safety-critical systems.
Nancy Leveson, Are you sure your software will not kill anyone?
An opinion piece published February 2020 in the Communications of the ACM.