Lightning Post: Things people in AI Safety should stop talking about

This post experiments with a new format meant to convey a lot of ideas very quickly, without going into much detail on each.

Things I wish people in AI Safety would stop talking about

A list of topics that, in my opinion, people concerned about x-risk from AI spend way too much time discussing with those outside the community. It’s not that these things aren’t real; they just likely won’t end up mattering that much.

How an AI could persuade you to let it out of the box

WRONG!

Keeping AIs in boxes was never something companies were seriously going to do. An AI in a box isn’t useful. These aren’t academics carefully studying a new species. This is an industry, with everyone racing to get ahead, gather user feedback, and train on parallel cloud compute.

How an AI could become an agent

WRONG!

Agency is the obvious next step people will try to build into their AIs. An AI “tool” is simply inferior in almost every way to an agent. With an agent, you don’t need to craft a special prompt, constantly click “Approve Plan”, or meet any of the supervised, time-consuming requirements of mere tools. The economic advantage is simply too staggering for people not to do this. Everyone can even agree that it’s dangerous, a totally bad idea, and still have to do it anyway if they want to (economically) survive.

How an AI could get ahold of, or create, weapons

WRONG!

The military advantage of fully-autonomous weapons is just too great for any large-scale government not to pursue, especially for democracies, where losing troops abroad results in massive political backlash. A human with a remote controller is just too slow, because it still means humans have to make very fast, split-second decisions. Fully autonomous warfare would mean tactical decisions occurring faster than any human could possibly act. Look at how AlphaZero played millions of games against itself in 70 hours. AIs can make decisions faster, and that is all that will matter.

How an AI might Recursively Self Improve without humans noticing

WRONG!

RSI is an ace in the hole for any company or government, so they will try to do it. As AI capabilities expand and the stakes rise, the paranoia that someone else will achieve it first will drive players to compete to create RSI. It’s the gift that keeps on giving: you don’t just get momentarily ahead of your competition, you get to stay ahead, moving so fast that no one else can hope to keep up. Everyone will want to do this, even knowing it’s dangerous, because the potential gains are too great.

Why a specific AI will want to kill you

WRONG!

That most AI systems, scaled to superintelligence, might want you dead doesn’t mean this is a hill worth fighting hard on. At the end of the day, even if most don’t want you dead, it doesn’t matter. All you need is one superintelligence that wants you dead, and then you get dead. If someone’s idea of a “safe” superintelligence doesn’t specify how it deals with all potential future intelligences, then inevitably someone designs an AI that kills everyone. It’s an end state to the game: unless an AI kills everyone, or somehow prevents other AIs from developing, the game continues.

Crossposted to EA Forum