Professor Nancy Leveson
Nancy Leveson is Professor of Aeronautics and Astronautics and also Professor of Engineering Systems at MIT. She is an elected member of the National Academy of Engineering (NAE). Prof. Leveson conducts research on the topics of system safety, software safety, software and system engineering, and human-computer interaction. In 1999, she received the ACM Allen Newell Award for outstanding computer science research, and in 1995 the AIAA Information Systems Award for “developing the field of software safety and for promoting responsible software and system engineering practices where life and property are at stake.” In 2005 she received the ACM SIGSOFT Outstanding Research Award. She has published over 200 research papers and is the author of two books, “Safeware: System Safety and Computers,” published in 1995 by Addison-Wesley, and “Engineering a Safer World,” published in 2012 by MIT Press. She consults extensively in many industries on ways to prevent accidents.
Dr. Thomas has a background in CS, EE, Computer Engineering, and Systems Engineering and spent a number of years in industry working for aerospace, automotive, and defense companies. He holds a Ph.D. in Engineering Systems and is now a member of the Department of Aeronautics and Astronautics at MIT. His research is focused on developing STAMP-based methods. His work includes creating structured processes for analyzing complex automated and human-intensive systems, especially systems that may behave in unanticipated, unsafe, or otherwise undesirable ways through complex interactions with each other and their environment. He uses control theory and systems theory to create more efficient and effective design and analysis processes that prevent flaws leading to unsafe or unexpected behaviors when systems are integrated with other systems. More recently he has been applying these techniques to automated systems that are heavily dependent on human interactions and may not only experience human error but may inadvertently induce it through mode confusion, clumsy automation, and other mechanisms that can be difficult to anticipate.
Dr. Thomas’s work also includes defining a formal structure underlying a systems-theoretic process that can be used to help ensure potentially hazardous or undesirable behaviors are systematically identified. He has developed algorithms to automatically generate formal executable and model-based requirements for software components as well as methods to detect flaws in a set of existing requirements. The same process can be applied to both safety and functional goals of the system, thereby permitting the automated detection of conflicts between safety and other requirements during early system development.
Dr. Thomas has taught classes on software engineering, cybersecurity, system safety, system engineering, human-centered design, and related topics.
Andrew Kopeikin is an Aero Astro PhD candidate at MIT. His research interests are in systems engineering and safety of collaborative control teaming systems. Andrew is part of the Technical Staff at MIT Lincoln Laboratory and has a background in manned/unmanned flight testing and developing UAV and multi-UAV systems. Prior to starting at MIT, he was on assignment at the U.S. Military Academy at West Point, where he taught Controls, Mechatronics, and Senior Capstone Engineering Design. Andrew is also a USAF Reserve officer assigned to the Pentagon and a Certified Flight Instructor out of Hanscom AFB. He has a Master’s degree from MIT in Aero Astro and a B.S. in Aerospace Engineering from the University of Illinois Urbana-Champaign.
Justin Poh is a PhD candidate in MIT’s Aero/Astro department whose academic interests lie at the intersection of System Safety and Systems Engineering. He holds a B.S. in Mechanical Engineering from Olin College of Engineering and an M.S. in Aeronautics and Astronautics from MIT. Throughout his career in industry and academia, he has developed system architectures for a variety of automotive, aviation and aerospace systems. Prior to MIT, Justin spent four years developing system architectures for several generations of self-driving vehicles with a major automotive company where the safety of the system was just as important as the functionality it delivered. He was also heavily involved in the development and management of requirements for a variety of vehicle subsystems and components. At MIT, Justin’s current research is focused on developing new methods for system architecture development and applying them to design new types of complex systems that are needed in the aviation industry.
Michael Schmid’s distinct approach to real-world artificial intelligence applications is changing the way enterprises adopt artificial intelligence. As a PhD candidate in the Department of Aeronautics and Astronautics at MIT, Michael’s cutting-edge research focuses on solving the pressing problems that arise with automation in highly complex systems.
Automation is becoming increasingly prominent in our everyday lives, from cars and aircraft to applications in our private households. As the number of applications increases, so does the severity of the emerging risks to human lives, business profits, and company reputations. In today’s competitive market, companies need to protect themselves from these risks to ensure long-term survival. With point solutions and ineffective design approaches, enterprises face a safety dilemma that only a truly holistic protection approach can resolve. Michael’s mission is to make a positive difference in the world and drive innovation that improves human life.
Michael has successfully led automation projects with Ford on self-driving software and is currently helping NASA implement AI in urban air transportation. Previously, he developed a certification approach for automated vehicles, won awards for his work on aircraft systems at Airbus, led multi-million-dollar projects at the United Technologies Corporation (now Raytheon Technologies), and designed future flight-control systems for the German Aerospace Center (DLR). He has provided analysis examples for the international standard ISO 21448: Safety of the Intended Functionality (SOTIF) and contributed to the OMG standard Risk Analysis and Assessment Modeling Language. Michael has been awarded competitive scholarships from the Fulbright Program and the German Academic Scholarship Foundation (fewer than 0.5% of German students receive this scholarship). He also holds a Master’s degree from MIT (4.9/5.0 GPA).
Alexander “HEFOE” Hillman is a Major in the United States Air Force assigned to the Massachusetts Institute of Technology as a PhD student in the Aeronautics and Astronautics Department. A recipient of the Air Education & Training Command Developing Airmen We Need Fellowship, HEFOE has been a data scientist for the Air Force for more than ten years. Alex holds a Bachelor of Science in Economics with a minor in Russian Language from the United States Air Force Academy. A career flight tester and C4ISR systems developer for the U.S. Air Force, HEFOE is a graduate of the U.S. Air Force Test Pilot School. He holds master’s degrees in Operations Research, Systems Engineering, Flight Test Engineering, and Military Operational Art and Science. Prior to his current assignment at MIT, HEFOE was the Chief Data Officer of the Air Force’s only Cyberspace Test Group at Eglin Air Force Base in Florida, responsible for the data lifecycle for more than 250 test programs across a U.S. DoD portfolio that included joint, industry, and foreign partner projects. As the principal for analytics in the Cyber Test Group, HEFOE guided information management and analysis for a group of more than 700 military members, government civilians, and support contractors.
Alex is also the co-founder of the Air Force Test Center’s Emerald Flag, a developmental test venue for evaluating capabilities at all lifecycle maturity levels across multiple warfighting domains. There his work focused on expanding the usability and performance of advanced networks within the military internet of things.
A recipient of the U.S. State Department’s Critical Language Scholarship for Russian, Alex is professionally proficient in Russian and conversant in Ukrainian. He is also a member of the Air Force’s Language-Enabled Airmen Program.
Alex’s research interests include systems theory, human-centered design, complexity management, manned-unmanned teaming, and systems-theoretic approaches to capabilities development. Alex’s advisor is Prof. Nancy Leveson, head of the Safety and Cybersecurity Research Group in the Engineering Systems Lab. Alex is also a Military Fellow at MIT’s Lincoln Laboratory in Group 108.
LT Andy Canady is a Nuclear Surface Warfare Officer in the U.S. Navy with previous tours aboard USS THE SULLIVANS (DDG 68) and USS GERALD R FORD (CVN 78). He received a B.S. in Electrical Engineering from the U.S. Naval Academy in 2015 and is pursuing a Master’s degree in the System Design and Management (SDM) program at MIT. While at MIT he has actively pursued courses in system safety and sustainability. His research is in the application of STAMP-based CAST analysis to the 2017 collisions of USS FITZGERALD (DDG 62) and USS JOHN S MCCAIN (DDG 56).