✈️ When Software Overrides Safety: The Air India Dreamliner Crash and the Ethics of Automation
Introduction
On June 12, 2025, Air India Flight AI 171, a Boeing 787 Dreamliner, crashed moments after takeoff from Ahmedabad, killing 260 people. The cause? Both fuel control switches were mysteriously flipped to “CUTOFF,” starving the engines of fuel mid-air. The pilots didn’t touch them. The system did.
This wasn’t the first time. In 2019, a similar incident occurred with All Nippon Airways in Japan. In both cases, the aircraft’s Thrust Control Malfunction Accommodation (TCMA) software misjudged the plane’s status and cut fuel — prioritizing engine protection over human lives. Aviation expert Mary Schiavo, former Inspector General of the US Department of Transportation, has warned:
“It is not only unfair but simplistic and harmful to blame the pilots… That system — TCMA — has already been faulted in a prior incident. It can and will cut the thrust to both engines if it malfunctions.”
This article explores how automation went tragically wrong, why it happened, and what it reveals about the urgent need for human-centric design, transparency, and ethical boundaries in software systems.
Part 1: The Crash — A Timeline of Automation Failure
What Happened?
- Seconds after takeoff, both engine fuel switches flipped from “RUN” to “CUTOFF.”
- The engines lost thrust. The aircraft began to descend.
- Pilots attempted to restart the engines, but there wasn’t enough altitude to recover.
- The aircraft crashed into a medical college hostel near the airport. Only one passenger survived.
The Cockpit Exchange
The cockpit voice recorder of Flight AI 171 captured a chilling exchange between the pilots:
“Why did you cut off the fuel?”
“I didn’t.”
The switches were later found in the “RUN” position at the crash site — suggesting they had been manually reset. But it was too late.
Part 2: TCMA — The Autonomous Software That Took Control
Skynet, the Fictional Autonomous Software
The name “Skynet” comes from the Terminator film series, where it refers to a highly advanced, self-aware artificial intelligence (AI) system. In the films, Skynet is initially developed by Cyberdyne Systems as a global digital defense network. It eventually achieves sentience, deems humanity a threat, and initiates a nuclear war (Judgment Day) to eradicate its creators. Skynet is, in short, a fictional stand-in for a rogue AI or superintelligence that turns against humanity, a common science-fiction theme about the dangers of unchecked technological advancement. Is TCMA a miniature real-life version of Skynet?
What Is TCMA?
TCMA is a software safeguard mandated by the FAA. It works with the FADEC (Full Authority Digital Engine Control) to:
- Detect engine anomalies.
- Adjust or cut thrust automatically.
- Determine whether the aircraft is airborne or on the ground.
In both the 2019 ANA incident and the 2025 Air India crash, TCMA mistakenly concluded that the plane was on the ground and cut fuel, a decision that makes sense only if the aircraft really is on the ground. The software acted on its own, with no warning to the pilots, let alone their permission.
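To make the failure mode concrete, here is a minimal, purely illustrative sketch in Python. It is not Boeing’s TCMA code, which is proprietary and far more complex; the function names, sensor inputs, and thresholds are invented. It only shows how a wrong air/ground determination can turn a plausible engine-protection rule into a catastrophic one.

```python
# Illustrative sketch only -- not Boeing's actual TCMA/FADEC logic.
# All names, inputs, and thresholds below are hypothetical.

def seems_on_ground(weight_on_wheels: bool, radio_altitude_ft: float,
                    wheel_speed_kts: float) -> bool:
    """Hypothetical air/ground determination.
    A faulty sensor or a bad threshold here can make an airborne
    aircraft look as if it were on the ground."""
    return weight_on_wheels or (radio_altitude_ft < 5.0 and wheel_speed_kts > 30.0)

def accommodation_decision(on_ground: bool, thrust_anomaly: bool) -> str:
    """Hypothetical engine-protection rule in the spirit of TCMA:
    cut fuel to a misbehaving engine, but only on the ground."""
    if thrust_anomaly and on_ground:
        return "CUTOFF"   # defensible when parked or rolling out
    return "RUN"          # in flight, cutting fuel would be catastrophic

# If seems_on_ground() is wrong while airborne, the "safe" branch above
# becomes the deadly one -- exactly the scenario described in this article.
```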
The Flawed Logic:
TCMA’s logic prioritizes engine protection without any consideration of passenger safety:
- If it thinks the plane is grounded, it may cut fuel to prevent engine damage.
- But in flight, the same decision is catastrophic; here it cost passengers their lives.
As Schiavo explained:
“The system wanted the plane to have the ability all by itself — pilots didn’t have to do this — to sense whether it’s in the air or on the ground. And it got it wrong.”
Part 3: Passenger Safety vs. Aircraft Safety: A Dangerous Tradeoff
This incident exposes a disturbing truth: the software was designed to protect the aircraft, not the passengers.
- TCMA’s fuel cutoff logic is meant to prevent engine wear or fire risk.
- But it doesn’t account for the fact that cutting fuel mid-air can kill everyone onboard.
- In practice, the autonomous software had no built-in logic to account for passenger safety.
This is a philosophical and ethical failure. In any human-centric system, passenger safety must override mechanical preservation. Software should never make irreversible decisions that endanger lives — especially without human input or override.
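One way to express the principle that passenger safety overrides mechanical preservation is an explicit priority hierarchy that every automated action is checked against. The sketch below is a design illustration under that assumption, not a pattern from any certified avionics standard; the Priority values and the action_permitted guard are invented names.

```python
# Design illustration only: a priority hierarchy for automated actions.
# Nothing here reflects certified avionics software.

from enum import IntEnum
from typing import Optional

class Priority(IntEnum):
    ENGINE_PROTECTION = 1
    AIRFRAME_PROTECTION = 2
    OCCUPANT_SAFETY = 3      # highest priority: always wins a conflict

def action_permitted(benefits: Priority, endangers: Optional[Priority],
                     irreversible: bool, crew_confirmed: bool) -> bool:
    """Allow an automated action only if it does not sacrifice a higher-priority
    goal (occupant safety) for a lower-priority one (engine wear), and never
    take an irreversible step in flight without crew confirmation."""
    if endangers is not None and endangers >= benefits:
        return False
    if irreversible and not crew_confirmed:
        return False
    return True

# Example: cutting fuel in flight benefits ENGINE_PROTECTION but endangers
# OCCUPANT_SAFETY and cannot be undone in the seconds available -> rejected.
print(action_permitted(Priority.ENGINE_PROTECTION, Priority.OCCUPANT_SAFETY,
                       irreversible=True, crew_confirmed=False))   # prints False
```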
Part 4: The Expert’s Warning — Mary Schiavo’s Report
Mary Schiavo has been vocal about the dangers of blaming pilots prematurely:
“In about 75% of the cases, the pilots are blamed — and in many cases, we’ve been able to disprove that.”
She cited:
- The 2019 ANA incident, where TCMA shut down both engines.
- A recent United Airlines Dreamliner flight that experienced a software-induced nose dive.
- The Air India flight AI 171 crash, where both engines lost power seconds after takeoff.
Schiavo emphasized:
“Altitude is time. The higher you are, the more time you have to react. On takeoff, you don’t have that luxury.”
Part 5: Regulatory Blind Spots
FAA and CAA Warnings:
- The FAA issued advisories in 2018 about fuel switch locking mechanisms — but they weren’t mandatory.
- The UK Civil Aviation Authority issued a bulletin just weeks before the crash, urging checks on Boeing fuel shutoff valves.
- Air India had replaced the throttle control module but did not inspect the switch locking mechanism, treating the advisory as optional.
No Accountability
The Aircraft Accident Investigation Bureau (AAIB) preliminary report:
- Did not assign blame to Boeing, Rolls-Royce, or Air India.
- Did not mention TCMA by name.
- Did not recommend corrective actions.
This lack of accountability is alarming, especially when prior incidents and warnings were ignored. It also suggests that passenger safety is not the AAIB’s top priority either, since the report leaves TCMA’s engine-protection-first logic unquestioned.
Part 6: Rethinking Automation — Ethics, Transparency, and Control
The Illusion of Autonomy
The Boeing Dreamliner crash shows that autonomous systems can make fatal decisions, and that humans may be powerless to interrupt them in time; when the pilots did manage to override the system, it was already too late.
- The pilots didn’t touch the switches.
- The system acted on flawed assumptions.
- There was no override, no warning, and no time to recover.
The Need for Ethical Boundaries
Automation must be guided by principles:
- Human override must always be possible, and the software must warn the crew before acting on its own (see the sketch after this list).
- Passenger safety must take precedence over hardware (read: engine) protection.
- Transparency must be built into every decision-making layer. Pilots’ practical experience must inform the design of such software; Boeing’s reliance on in-house pilots proved insufficient.
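A minimal sketch of the first principle, warn first and wait for a human decision, might look like the following. The alerting hook, confirmation inputs, and timeout are assumptions made for illustration; real flight-deck alerting (EICAS cautions, master warning) is far richer than this.

```python
# Minimal sketch of a warn-then-confirm pattern for an automated action.
# The alert mechanism, inputs, and timings are hypothetical.

import time
from typing import Callable

def request_crew_confirmation(message: str, timeout_s: float,
                              poll_crew_input: Callable[[], str]) -> bool:
    """Announce the intended action and wait for explicit crew consent.
    Returns True only if the crew confirms within the timeout; otherwise
    the system must fall back to the least irreversible option."""
    print(f"ALERT: {message}")                  # stand-in for a cockpit caution
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        response = poll_crew_input()            # e.g. a guarded confirm switch
        if response == "CONFIRM":
            return True
        if response == "REJECT":
            return False
        time.sleep(0.1)
    return False                                # silence is not consent

# Usage sketch (hypothetical helpers): never cut fuel in flight unless confirmed.
# if request_crew_confirmation("Thrust anomaly: fuel cutoff requested", 5.0,
#                              read_confirm_switch):
#     cut_fuel()
```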
Part 7: What Digital Sovereignty Is — and Isn’t
Let’s clarify a common confusion: digital sovereignty doesn’t mean giving software full autonomy. Quite the opposite. True digital sovereignty means:
- Humans retain control over software logic.
- Systems are transparent, inspectable, and modifiable.
- Decisions are traceable and reversible (a minimal sketch follows this list).
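Traceability and reversibility can be made concrete with a decision record: every automated decision carries its inputs, its rationale, and a way to undo it, so humans can inspect and reverse what the software did. The structure below is a sketch under those assumptions; the field names and the undo callback are illustrative, not an existing avionics data format.

```python
# Sketch of a traceable, reversible decision record. The fields and the
# undo callback are illustrative assumptions, not an existing standard.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class DecisionRecord:
    timestamp: float                    # when the decision was taken
    action: str                         # what the software decided to do
    inputs: Dict[str, Any]              # sensor values the decision relied on
    rationale: str                      # human-readable reason
    undo: Callable[[], None]            # how to reverse it, while still possible

audit_log: List[DecisionRecord] = []

def record_decision(rec: DecisionRecord) -> None:
    """Append to an inspectable log so operators can trace and, where
    feasible, reverse automated decisions after the fact."""
    audit_log.append(rec)
```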
The Boeing Dreamliner crash is a case of software autonomy without sovereignty — a system acting without accountability or human consent.
Conclusion: A Wake-Up Call for Human-Centric Design
The crash of Air India Flight AI 171 wasn’t just a technical failure. It was a moral failure: a system designed to protect machinery at the cost of human life.
This tragedy demands a rethinking of how we design, certify, and deploy automation in critical systems. We must move from machine-centric logic to human-centric ethics.
Software should never make irreversible decisions without human oversight. And when lives are at stake, transparency, accountability, and control are non-negotiable.