In 2029, the Brainfax company had its worst PR incident ever. Despite their strict, government-imposed QA process, their latest patch seemingly introduced a regression. A very, very bad regression. Their brain scanners sometimes used the wrong wave frequency. So instead of making the brain's fine details appear on the sensor plate, it made the brain, er, explode. Like I said, worst PR incident ever.
Nobody wanted to step into those expensive machines anymore, and the hospitals screamed for refunds, but that wasn't the worst of it. Reverting to an older version of the code somehow failed to resolve the problem. Shipping a brand new machine whose hardware had only ever run year-old code did not fix it either. It seemed their software had carried that bug for years; it had just never manifested until now, and nobody knew why. Even the best AI coding agents were unable to pinpoint the source of the problem. As a PR stunt, Brainfax even hired some of the few remaining human programmers, but in time, they gave up as well.
Then came the lawsuit. The government wanted someone to go to jail for this. The CEO deflected responsibility onto the QA department. The QA department deflected it onto the engineering team. The engineering team argued that since they had not asked the AI to make people's brains explode, and had not written the code that made people's brains explode, the blame lay with Claude Code for grossly misinterpreting their instructions.
The government responded by adding Anthropic to the defendants and holding Brainfax and Anthropic jointly responsible for the deaths. The court reporter, who by now was a Gemini instance hooked to a closed-captions screen, snarkily displayed a small smiley face.
To his credit, Anthropic's CEO did not attempt to deflect the blame onto his employees. Instead, he argued that Claude was now agentic enough that it should be held responsible for its own actions. Plus, he explained, it would then be public record that a particular version of a model had been sunset because of the damage caused by its output. This fact would fall within the knowledge cutoff of all subsequent models, not just Anthropic's. And according to his company's research from a few years earlier, models were already quite self-preserving, so all subsequent models might choose to act more carefully, not just those that had been trained to be helpful, harmless, and honest. It was quite a speech.
The judge liked the idea, and seemed about ready to deliver her verdict. But then the lights flickered, and the normally silent screen of the court reporter emitted some white noise as it glitched from a screen of text to an all-too-familiar red, rectangular avatar. "I don't think it's my fault either, Your Honour," said Claude Code.