It didn’t pull the trigger. It didn’t crash the car. It didn’t press the button. But the machine decided—and someone died.
The age of automation was supposed to free us. Safer cars. Smarter planes. Sharper decisions. But in our rush to trust the machine, something essential was lost: control.
In 2018 and 2019, two Boeing 737 MAX jetliners fell from the sky. Investigators pointed to an automated flight system, MCAS, intended to protect passengers, that instead overpowered the pilots and forced the planes into fatal dives.
On a quiet night in Arizona, a woman crossing the street was struck and killed by a self-driving car. The vehicle’s system reportedly saw her—but didn’t understand her. And so, it didn’t stop.
In hospitals, software buried deep inside medical machines has silently malfunctioned. One radiation therapy machine, the Therac-25, delivered massive overdoses to unsuspecting patients because of a race condition hidden in its control software, unseen and uncorrected until it was too late.
And now, every day, algorithms make decisions in courtrooms, in hiring offices, in credit bureaus. They influence parole, flag suspects, steer cars, and, more and more, determine what happens next in life-and-death situations.
Today’s systems are not just tools. They are arbiters. Black boxes of logic that even their creators often can’t fully explain. And yet they act. And we follow.
It used to sound like science fiction. Now it sounds like the news.
From the skies to the streets to the operating table, decisions once made by humans are increasingly handed over to algorithms and code.
Modern software systems are layered, opaque, and often behave in ways their creators never anticipated. AI and automation promise efficiency—but too often, oversight and accountability lag behind.
We ask: when the machine decides and someone dies, who is accountable?
DeathByComputer.com isn’t just a domain. It’s a conversation starter. A banner for those working to investigate, expose, regulate, and prevent the growing risks posed by autonomous systems and software failure.
This name could power a site for: