OpenAI employees: Now is the time to stop doing good work.

Americans don’t like OpenAI very much anymore, and you know why. Of course, AI systems it helped make have caused various problems already, like:

  • bots pushing politics on social media

  • maintainers of open source projects having their time wasted, and sometimes quitting

  • AI-generated scientific papers crowding out good research

  • scammers who clone the voice of a family member

  • AI-generated fake pictures fooling people on Facebook

And gamers haven’t been huge fans of OpenAI since it bought up half the RAM wafer production—mostly just to keep it off the market so other people can’t use it, since they only bought the raw wafers, not finished RAM.

But the popularity of OpenAI has fallen abruptly this week, because it decided to help the US military make automated weapons and mass surveillance systems that Anthropic refused to make on ethical grounds.

The leadership of OpenAI is trying to pretend that they got the same deal Anthropic wanted but were just nicer about it. They are lying. Anthropic wanted a human in the loop of autonomous weapons. OpenAI’s contract just says that there must be human oversight to whatever extent is required by law. But of course, the US military can just say that whatever it wants to do is legal, and OpenAI leadership would have no ability (or desire) to challenge that. Remember when the US government wanted to torture people, so it just wrote some legal memos saying it was OK? I do.

And then, Sam Altman had the nerve to say this:

  1. There is more open debate than I thought there would be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on, but…I don’t. This seems like an important area for more discussion.

Let’s be clear: this was not about Anthropic telling the US military not to work on autonomous weapons on its own. Altman is advocating for the government being able to require private companies (and their employees) to provide whatever services it wants, even if they don’t currently do that thing. I know the term “fascism” has been thrown around a lot, but that is Actual Fascism. Here are some other ways to use that argument:

  • “Why should Sam Altman decide what should be done with that billion dollars instead of the government, which reflects the will of the people?”

  • “Why should a private citizen get to decide they don’t want to spy on their neighbors and report any hidden Jews? That should be the decision of the government, which reflects the will of the people!”

The past week, I’ve seen a lot of people announcing on social media that they’re canceling their OpenAI subscription and moving to Anthropic. Well, that’s fine, but OpenAI only cares slightly. Most of its money isn’t from subscriptions; it’s from investors. And those investors aren’t mainly hoping for a payoff from individuals paying $20/month at a 40% profit margin or whatever. No, they want to replace employees. That’s the hope. That’s the main basis of the investments. Actually getting OpenAI to change course would take…well, something else.


I’ve seen a lot of posts this week saying that employees are morally obligated to quit OpenAI immediately. But I wouldn’t go that far: I’d only say that you’re obligated to stop doing good work.

  • Did you see a bug? No you didn’t.

  • OpenAI leadership says vibe coding is fine, so why review AI code? (You can pretend to spend time on it if you want tho.)

  • Are you annoyed by unnecessary meetings? Why? Just relax.

  • Unplugged cable somewhere? Water leak? Not your job.

  • Lots of bad programmers have succeeded by spending their time on office politics instead. Office politics is a valuable skill! You should get some practice with it!

Really, why would you care if you put in less effort and OpenAI eventually fires you? There’s an AI boom: with OpenAI on your resume, you can get a job somewhere else. If you’ve been working in Silicon Valley and saving money, you might even be able to just retire in Thailand or something. Or just visit Japan for a while; it’s cheap right now. This is a particularly good time to do this at OpenAI, because:

  • People will be leaving. If something goes wrong, you can just blame someone who quit recently.

  • There might be something of an ideological inquisition coming up, so you might want to leave soon anyway.

  • Normally, you don’t give any specifics about why you were fired, but this is a rare situation where you could actually get respect by answering, “I was fired for refusing to work on unethical projects.”


Sam Altman, in particular, doesn’t deserve your best efforts.

Look at his X account. He got community noted on his post, so he reposted the same thing to get rid of the note. But then he realized he couldn’t delete the noted posts, so he has 3 copies of the same thing up and looks like an asshole. This is a metaphor for his behavior his whole life—he’s used to being able to hide whatever came before, but now that he’s in the public eye he’s under too much scrutiny for some of his tactics to work.

Sam Altman does have a notable talent: he can talk to slightly autistic nerds and seem like one of them, and then go talk to a bunch of MBAs and CEOs and seem like one of them. I can’t do that, and he’s really making the most of his acting skills.

Altman had a pattern of deception and fraud from the very start of his career—and then he had the ability to convince people that he’s a “really genuine person” even as they see him lying. He was the CEO that really cared about AI safety, then he became the advocate of unrestricted AI progress, then he became the guy who’d maximize investor profits. The audiences he appealed to should have realized that he’d screw them over too when it became convenient.

And now, Altman is talking like humans are just meat computers that eat too much food compared to silicon. I don’t think the people he’s appealing to with that rhetoric are your friends. For all the criticisms I have of the Chinese government, even they’re not as…anti-human as Altman is in front of certain audiences. I haven’t seen the Chinese leadership calling people “speciesist” either. If this is what US leadership is like, I have to wonder what the advantage of the US “winning” an AI race is—especially since it’s fairly likely that everyone loses. I remember when Sam Altman was arguing that OpenAI was good for AI risk because it reduced “compute overhang”—so much for that, eh?