Every additional person brought into the AI safety community is a liability. The smarter or more wealthy/powerful the new person is, the more capable they are of doing massive damage, intentionally or accidentally, and for an extremely diverse variety of complex reasons (especially in the case of smarter people).
Also, there’s the nightmare scenario where ~10% of the US population, around 30 million people, notice that the possibility of smarter-than-human AI deserves vastly more of their attention than anything else going on. Can you imagine what that would look like? I can’t. Of those 30 million, probably more than 30,000 will be too unpredictable.
There’s just so much that could go horribly wrong. But it looks like we might already be in one of those timelines, which would mean it’s a matter of setting things up so that it all doesn’t go too badly when the storm hits.