Both seem legit to worry about.
I currently think the first one was overall correct to have done (with some nuances).
I agree with the AI 2027 concern, and think maybe the next wave of materials put out by them should somehow reframe it? I think the problem is mostly in the title, not the rest of the contents.
It probably doesn't actually have to be in the next wave of materials; what matters is that, in advance of 2027, you do a rebranding push that shifts the focus from "2027 specifically" to "what does the year after automated AI R&D look like, whenever that is?". Which is probably fine to do in, like, early 2026.
Re OpenAI:
I currently think it's better to have one company with a real critical mass of safety-conscious people than a diluted cluster spread among different companies. And it looks like you enabled public discussion of "OpenAI is actually pretty bad," which seems more valuable. But it's not a slam dunk.
My current take is that Anthropic is still right around the edge of "by default going to do something terrible eventually, or at least fail to do anything that useful," because the leadership has some wrong ideas about AI safety. Having a concentration of competent people there who can argue thoughtfully with leadership feels like a prerequisite for Anthropic turning out to really help. (I think for Anthropic to really be useful, it eventually needs to argue for much more serious regulation than it currently does, and it doesn't look like it will.)
I think it'd still be nicer if there were "ten people on the inside" at each major company. I don't know the current state of OpenAI's and other companies' employees, and probably more marginal people should go to xAI / DeepSeek / Meta if possible.