To play devil’s advocate: is increasing everyone’s appreciation of the risk of AI a good idea?
Believing that AI is risky implies believing that AI is powerful. The potential impact of AI is currently underappreciated: we don’t yet have large governmental teams working on it and hoovering up all the talent.
Spreading the news of the dangerousness of AI might have the unintended consequence of starting an arms race.
This seems like a crucial consideration.
Pretty sure it is. There are two factors: increasing awareness of AI risk, and increasing awareness of AI generally. The first is good; the second may be bad, but since the set of people who care about AI generally is already so much larger, the second is also much less important.
There are roughly three options:
1) Tell no one and work in secret
2) Tell people that are close to working on AGI
3) Tell everyone
Telling everyone has some benefits: you may reach people close to working on AGI whom you wouldn’t reach otherwise, and a public case may be more convincing. It might be the most efficient approach as well.
While lots of people care about AI, I think the establishment is probably still a bit jaded from the hype before the AI winters. The number of people who think about artificial general intelligence is a small subset of the number of people involved in weak AI.
So I am less sure than you, and I’m going to think about what the second option might look like.
Wow, I hadn’t thought of it like this. Maybe if AGI is sufficiently ridiculous in the eyes of world leaders, they won’t start an arms race until we’ve figured out how to align it. Maybe we want the issue to remain largely a laughingstock.