Is Targeting in Crash-Optimization Algorithms Ethically Permissible?

In a perfect world, obviously, we’d want to save everyone. That isn’t always possible. Every time a new technology arrives, especially one without any real successful precedent, accidents and casualties are inevitable. And given how self-interested and error-prone human nature is, I’d say the trade-off is warranted. Machines, in my view, already make our lives safer than humans do, because humans, at least in my book, are inherently flawed.

Take the recent Air India crash. The preliminary report indicated that the fuel control switches were manually moved to cutoff by one of the pilots. Whether that was intentional or an attempt to address some other issue that hasn’t yet surfaced remains to be seen, but the fact remains that only a human could make that kind of mistake. Boeing designed those switches specifically to prevent accidental toggling, whether from human error or cockpit vibration. If the system had been governed by machine logic, it likely would have blocked that catastrophic action. And yes, while the RAT, or Ram Air Turbine, is designed to provide temporary power in the event of a dual engine failure, it was built with rare high-altitude “acts of God” in mind, not a pilot deliberately cutting fuel right after takeoff.
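
To make that concrete, here is a minimal sketch of what such a machine-logic guard could look like. Everything in it, the flight-phase thresholds, the names, the fire-warning override, is an assumption made up for illustration, not Boeing’s actual fuel-control or FADEC design:

```python
# Hypothetical sketch of a machine-logic interlock on manual fuel cutoff.
# All names, phases, and thresholds are illustrative assumptions,
# not Boeing's actual fuel-control or FADEC design.

from dataclasses import dataclass

@dataclass
class FlightState:
    altitude_ft: float         # height above the departure runway
    airspeed_kt: float
    on_ground: bool
    engine_fire_detected: bool

def allow_fuel_cutoff(state: FlightState) -> bool:
    """Permit a manual fuel cutoff only when it is plausibly intended.

    A cutoff commanded seconds after rotation, with the aircraft low,
    fast, and showing no fire warning, is almost certainly an error,
    so the logic refuses it.
    """
    if state.on_ground:
        return True   # normal shutdown at the gate
    if state.engine_fire_detected:
        return True   # genuine emergency: the crew must be able to cut fuel
    if state.altitude_ft < 1500 and state.airspeed_kt > 80:
        return False  # critical takeoff / initial-climb window
    return True

# A cutoff flipped right after liftoff, with no fire warning, is blocked:
climb_out = FlightState(altitude_ft=400, airspeed_kt=160,
                        on_ground=False, engine_fire_detected=False)
assert allow_fuel_cutoff(climb_out) is False
```

The exact numbers don’t matter; the point is that machine logic can recognize “fuel cutoff seconds after rotation” as an implausible command in a way a locking switch alone cannot.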

And then there’s the infamous 2002 Überlingen midair collision, in which seventy-one people were killed, including fifty-two children. It was horrific. The cause? The Russian pilots on the passenger jet prioritized air traffic control’s instructions over TCAS, the machine designed for exactly this situation. TCAS works by instructing one plane to descend and the other to climb, so the two aircraft maneuver away from each other, as sketched below. It could have saved those children, but humans chose to ignore it. That accident wasn’t caused by some mysterious machine failure; it was caused by human error, bad training, and, if I’m being blunt, Russia’s inability at the time to align with international aviation standards. Under President Yeltsin in the late 1990s and early 2000s, Russia tried to integrate itself with the rest of the world and failed; to some degree that was also a failure of Western countries, whose skepticism and unwillingness to accept Russia into the international community came precisely when pro-Western sentiment in Russia was at its height. When President Putin took over in 2000, his hardline stance took root in Russian society, slowly but surely, and by 2002 it was practically too late: Russia had little choice but to cling to its old ways. That political pride and resistance translated into poor training standards for Russian pilots, and, as a result, fifty-two children died. Machines didn’t kill them. People did.
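
The logic itself is simple enough to sketch. The toy below captures only the complementary-advisory idea described above; real TCAS II coordination happens over Mode S interrogations, uses range and altitude closure rates, and breaks ties by transponder address, none of which is modeled here:

```python
# Toy illustration of TCAS's core idea: issue complementary vertical
# advisories so the two aircraft maneuver apart, never into each other.
# Real TCAS II is vastly more involved; this only mirrors the
# "one climbs, one descends" logic described in the text.

def resolution_advisories(alt_a_ft: float, alt_b_ft: float) -> tuple[str, str]:
    """Return (advisory for aircraft A, advisory for aircraft B).

    The higher aircraft is told to climb and the lower one to descend,
    so the advisories reinforce whatever separation already exists.
    """
    if alt_a_ft >= alt_b_ft:
        return ("CLIMB", "DESCEND")
    return ("DESCEND", "CLIMB")

# Überlingen-like geometry: two aircraft converging at FL360. With equal
# altitudes the tie-break here is arbitrary; real TCAS resolves ties
# during Mode S coordination.
print(resolution_advisories(36000.0, 36000.0))  # ('CLIMB', 'DESCEND')
```

The guarantee holds only if both crews obey the machine. At Überlingen both aircraft ended up descending, one following ATC and the other following its TCAS advisory, and the complementary logic was defeated.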

Of course, I don’t mean to say machines are flawless. Just look at the Boeing 737 MAX disasters. But even there, the real failure wasn’t the machine; it was the humans behind it. Boeing executives wanted to compete with the Airbus A320neo without spending money on new pilot training or procedures, and without drawing too much regulatory scrutiny. So they quietly added MCAS to the new MAX, a system that would automatically push the nose down when the larger, repositioned engines caused the nose to pitch up, thereby preventing a stall. On paper, it wasn’t a bad idea. But they didn’t tell the pilots about it. Why? To save money, to please shareholders, and to speed up certification. The result? Two crashes and 346 lives lost. Again, not because automation itself is inherently evil, but because flawed human decisions corrupted it.
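
A simplified sketch makes the failure mode concrete. The thresholds and names below are invented for illustration and are not Boeing’s implementation, but the structure mirrors what made MCAS dangerous: the nose-down command hinged on a single angle-of-attack sensor:

```python
# Simplified MCAS-style logic, with illustrative (not Boeing's) values.
# The crucial flaw is visible in the signature: the trim command depends
# on ONE angle-of-attack reading, so a single faulty vane can trigger
# repeated nose-down trim.

def mcas_trim_command(aoa_deg: float, flaps_up: bool, autopilot_on: bool) -> float:
    """Return nose-down stabilizer trim in degrees; 0.0 means no action."""
    AOA_THRESHOLD_DEG = 14.0    # illustrative activation threshold
    NOSE_DOWN_INCREMENT = 2.5   # illustrative trim increment

    if autopilot_on or not flaps_up:
        return 0.0              # MCAS only acted in manual flight with flaps up
    if aoa_deg > AOA_THRESHOLD_DEG:
        return NOSE_DOWN_INCREMENT  # push the nose down to avert a stall
    return 0.0

# A faulty sensor reading 22 degrees in otherwise normal flight still
# commands nose-down trim, against pilots never told the system existed:
print(mcas_trim_command(aoa_deg=22.0, flaps_up=True, autopilot_on=False))  # 2.5
```

Feed it one bad sensor value on every cycle and it will keep trimming the nose down, which is essentially what happened on Lion Air 610 and Ethiopian Airlines 302.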

So, back to the discussion question: Is it ethical to let machines play roulette with our lives? My answer is yes. Not because I trust machines blindly, but because I trust them more than flawed humans. I’d rather have a machine making a data-driven decision in a split-second crash scenario than risk dying because of arrogance, panic, or corporate greed. Air travel is safer today thanks to systems like autopilot and TCAS. If autonomous vehicles can follow the same trajectory, I’ll take my chances with the algorithm over being the victim of a careless driver, or worse, of executives willing to gamble my life for prettier quarterly numbers.
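
For what it’s worth, the “data-driven decision” at the heart of crash optimization can be sketched in a few lines. The maneuvers, probabilities, and severity scores below are invented for illustration; a real system would estimate them from perception and vehicle-dynamics models, but the core, minimizing expected harm, looks like this:

```python
# Minimal sketch of a crash-optimization choice: among feasible evasive
# maneuvers, pick the one with the lowest expected harm. All numbers
# here are invented for illustration.

CANDIDATE_MANEUVERS = {
    # maneuver: (probability of collision, estimated severity if it occurs)
    "brake_straight": (0.90, 0.4),
    "swerve_left":    (0.30, 0.9),  # lower chance, but into the oncoming lane
    "swerve_right":   (0.10, 0.6),  # shoulder is clear in this scenario
}

def choose_maneuver(options: dict[str, tuple[float, float]]) -> str:
    """Pick the maneuver minimizing expected harm = P(collision) * severity."""
    return min(options, key=lambda m: options[m][0] * options[m][1])

print(choose_maneuver(CANDIDATE_MANEUVERS))  # 'swerve_right'
```

How those severity scores get assigned, and whether they may encode who gets hit, is exactly the targeting question in the title. What the sketch shows is that the machine’s choice is explicit and auditable in a way a panicking driver’s split-second reaction never is.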
