> assumption that an advanced civilization has larger receivers
If they are more advanced than us, wouldn’t they either have aligned AI or be AI? In that case, I’m not sure what warning them about our possible AI would do for them.
I’ve read this story plenty of times before, but this was the first time I saw it on LessWrong. That was a pleasant surprise.
This is pretty insightful, but I’m not sure the assumption holds that we would halt development if there were unsolved but legible problems. The core issue might not be illegibility, but a risk-tolerance threshold in leadership that’s terrifyingly high.
Even if we legibly showed the powers that be that an AI had a 20% chance of catastrophic failure from unsolved safety problems, I’d expect competitive pressure to lead them to deploy it anyway.
The agency and intentionality of current models are still up for debate, but the current versions of Claude, ChatGPT, etc. were all created with the assistance of their earlier versions.
I strongly agree. I expect AI to be able to “take over the world” before it can create a more powerful AI that perfectly shares its values. This matches the Sable scenario Yudkowsky outlined in “If Anyone Builds It, Everyone Dies”, where the AI becomes dangerously capable before solving its own alignment problem.
The problem is that this doesn’t avert doom. If modern AIs become smart enough to do self-improvement at all, then their makers will have them do it. In some ways, this has already started.
I’ll do that next time, if that’s the preferred way.
This kind of scenario seems pretty reasonable and likely, but I’m much more optimistic about it being morally valuable, mostly because I expect “grabbiness” to happen sooner and to be done by an AI that is itself morally valuable.
I’m not entirely sure about the etiquette of posting something both here and on one’s own site, so if there’s a better way to do this, please let me know. Any other feedback or criticism would also be appreciated.
AI Science Companies: Evidence AGI Is Near
This is an update in the doomier direction for me, but it may be beneficial if it gets governments to start securing biolabs before future AI even exists.
If it’s a choice between the genie giving me “the solution to AI Alignment” and the genie doing nothing, I’d take the solution and then spend the rest of my life testing it.
If I can use my wish for anything, I’d wish for some form of story-breaker power that I could verify easily and which would give the genie less room to screw me over.
This is a very nice addition to the collection of doomer short stories.
Yes, this would be a bad situation to end up in, but I think it’s extremely unlikely.
This is pretty clever. It reminds me of GANs in a way, but much more advanced. I know that the Pokemon-playing AIs on Twitch all have a version of “Critique Claude”, which is in some sense a post-deployment version of this. Integrating that earlier in the process could be very useful. I’m not sure how much this advances capabilities versus safety, but I hope we’ll get some good results from it.
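To illustrate the kind of critic-in-the-loop setup I mean, here’s a minimal sketch in Python. The `actor` and `critic` callables are hypothetical stand-ins for real model calls, not any lab’s actual API:

```python
# Minimal sketch of a propose-critique-revise loop. `actor` and
# `critic` are hypothetical stand-ins for model calls, not a real API.

def propose_action(actor, observation: str) -> str:
    """Ask the actor model to propose the next action."""
    return actor(f"Observation: {observation}\nNext action:")

def critique(critic, observation: str, action: str) -> float:
    """Ask the critic model to score the proposed action in [0, 1]."""
    return critic(f"Observation: {observation}\nAction: {action}\nScore 0-1:")

def act_with_critic(actor, critic, observation: str,
                    threshold: float = 0.5, max_retries: int = 3) -> str:
    """Loop until the critic approves an action or retries run out."""
    action = propose_action(actor, observation)
    for _ in range(max_retries):
        if critique(critic, observation, action) >= threshold:
            return action
        # Rejected: ask the actor to revise, given the failed attempt.
        action = propose_action(
            actor, f"{observation}\n(Previous attempt rejected: {action})")
    return action  # Fall back to the last proposal.

# Toy usage with stub "models" standing in for real LLM calls.
if __name__ == "__main__":
    actor = lambda prompt: "press A"
    critic = lambda prompt: 0.9
    print(act_with_critic(actor, critic, "The Pokemon battle menu is open"))
```

The design point is just that the critic gates each action before it executes, rather than reviewing transcripts after deployment.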
I’d partially agree. I routinely see normal women who are more attractive than Sydney Sweeney or Gal Gadot, but they are still massively outnumbered by the women who aren’t.
Avoiding what you suggested is why private conversations are an advantage. I think you misunderstood the essay, unless I’m misunderstanding your response.
> I think it’s cultural and goes back to the rise of Christianity.
This seems testable with a cross-cultural analysis. Not just the pre-Christian Greek stories that Garrett mentioned, but Chinese, Japanese, Indian, and Middle Eastern cultures should have plenty of non-Christian stories.
This is pretty cool. As for Opus, could you just use it for “free” by running it in Claude Code and relying on your account’s built-in usage limits?
Edit: That might also work for gemini-cli and 2.5 Pro.
This is a great story and the animation is also great. Good work everyone!
People were too busy worrying about whether China or America would win the race to notice Sirius sneaking up on us.