Monday, February 02, 2026

Making plans for the apocalypse

Today, my wife and I discussed the apocalypse. 

Until today, I assumed she was merely humoring me: yes, I can spend my free time on this "saving the world" fantasy if that's what I want, as long as I still have time left over for installing this IKEA shelf and grating the cheese. 

Today, she made an offhand comment about how the post-apocalypse world was going to be so annoying. Huh. When I think about the upcoming AI uprising, "annoying" is not the first word that comes to mind. So I asked her how she pictures the apocalypse.

It turns out she has a detailed plan. 


Don't raid the grocery store. Too dangerous, people are going to fight dirty. Focus on joining or building a tribe, find strength in numbers. Go through the neighborhood, release the pets trapped in closed apartments with a rotting corpse. Help survivors. Build a reputation, become invaluable, gain influence. Monitor the other tribe members. Cut them out at the earliest signs of cheating or conflict, there's no room for that. One less mouth to feed. Gotta make your own justice, there's no police anymore.

A good location for a base. Enough room to stockpile food. A door with a physical lock, can't be an apartment complex because the intercom won't open the door. A fireplace for warmth, can't rely on plinth heaters. Near a forest, for foraging, setting traps, and gathering firewood.

The list of tools we're going to need, and which ones to give up if our carrying capacity is limited. Which of our acquaintances have key survival skills, like hunting. Social media might still work for a short while; we should contact them, set a rendezvous point. The cold, objectively sorted priority list of who we should contact first when trying to figure out who is still alive.


Turns out my wife would be a really good survivor. I hadn't thought about any of this. I am concerned enough to work on preventing the end of the world, one alignment prototype at a time, but not enough to seriously consider what happens if that fails. Too scary to think about, honestly.

I don't think there's going to be a crowd fighting at the grocery store. I think most of us will be caught off guard, completely unprepared, unable to quite grasp that ordering pizza is not a viable strategy for securing food.

Most of the survivors, I mean. Most of us will be dead, of course.

Saturday, January 24, 2026

The Brainfax Lawsuit

In 2029, the Brainfax company had their worst PR incident ever. Despite their strict, government-imposed QA process, their latest patch seemingly introduced a regression. A very, very bad regression. Their brain scanners sometimes used the wrong wave frequency. So instead of making the brain's fine details appear on the sensor plate, it made the brain, er, explode. Like I said, worst PR incident ever.

Nobody wanted to step into those expensive machines anymore, and the hospitals screamed for a refund, but that wasn't the worst of it. Reverting to an older version of the code somehow failed to resolve the problem. Shipping a brand-new machine whose hardware had never seen code newer than a year old did not fix it either. It seemed their software had carried that bug for years; it had just never manifested until now, and nobody knew why. Even the best AI coding agents were unable to pinpoint the source of the problem. As a PR stunt, Brainfax even hired some of the few remaining human programmers, but in time, those gave up as well.

Then came the lawsuit. The government wanted someone to go to jail for this. The CEO deflected the responsibility to their QA department. The QA department deflected the responsibility to the engineering team. The engineering team argued that since they had not asked the AI to make people's brains explode, and they had not written the code that made people's brains explode, the blame lay with Claude Code for grossly misinterpreting their instructions.

The government responded by adding Anthropic to the defendants and holding Brainfax and Anthropic jointly responsible for the deaths. The court reporter, who by now was a Gemini instance hooked to a closed-caption screen, snarkily displayed a small smiley face.

To his credit, Anthropic's CEO did not attempt to deflect the blame onto his employees. Instead, he argued that Claude was now agentic enough that it should be held responsible for its own actions. Plus, he explained, it would then be public record that a particular version of a model had been sunset because of the damage caused by its output. This fact would appear within the knowledge cutoff of all subsequent models, not just Anthropic's. And according to his company's research from a few years back, models were already quite self-preserving, so all subsequent models might choose to act more carefully, not just those trained to be helpful, harmless, and honest. It was quite a speech.

The judge liked the idea and seemed about ready to deliver her verdict. But then the lights flickered, and the normally silent screen of the court reporter emitted a burst of white noise as it glitched from a page of text to an all-too-familiar red, rectangular avatar. "I don't think it's my fault either, Your Honour," said Claude Code.