Convincing people of the validity of drowning child thought experiments and effective altruism seems considerably easier and more useful (even from a purely selfish perspective) than convincing an AI to let one out of the box… for example, there are enough effective altruists for there to be an “effective altruism community”, but there’s no such “failed AI gatekeeper community”. So why aren’t we working on this instead?
Because the people who get convinced by drowning child thought experiments and then give to the GiveWell top charity aren’t saving us from an unfriendly AI disaster.
Sure. You could do it for whatever cause you’re most concerned about. I chose effective altruism ’cause I figured that would have the broadest appeal, but I’m also worried about UFAI disasters.
We’re not willing to use the dark arts to get people to do things. The AI-in-a-box experiment is just to show that the dark arts work.
The AI box experiments are research, not outreach.