From Rigid Orderliness to Barely Controlled Chaos: Sociotechnical Risk and AI in Aviation
Contributors
- Jan Hayes, Project participant
- Martin Rasmussen Skogstad, Project participant

Publication year
2025

Department
Studio Apertura
Given the potentially catastrophic consequences of errors, faults and poor decisions in aviation, artificial
intelligence (AI) applications are currently permitted only in non-safety-related activities and tasks, and machine
learning is banned during in-flight operations. By restricting the possible adverse consequences of using AI
technologies, this approach also severely restricts the possible benefits, and regulatory bodies therefore have broad
plans to allow further integration of AI into the sector. Based on interviews with safety and AI experts from the
aviation sector, this study aims to understand the strengths and vulnerabilities of current aviation safety processes
and how processes and practices may need to be adapted to address the safety of AI. Drawing on Macrae’s SOTEC
(Structural, Organizational, Technological, Epistemic, and Cultural) framework for sociotechnical risk in
autonomous and intelligent systems, we develop a preliminary set of risks posed by the use of AI in aviation across
these five domains. One significant challenge is that different parts of the sector have different safety management
approaches and so may be impacted by AI in different ways. Safety in aircraft manufacturing and flight operations
is certification- and compliance-based. In air traffic management, by contrast, where multiple actors make
judgment-based, time-pressured decisions, one interviewee described the environment as ‘barely controlled chaos’.
Uncertainty is high and risk-based processes prevail. This paper unpacks these issues and examines the implications
for identifying and evaluating novel risks linked to new AI applications.