One crux is how soon we need to handle the philosophical problems. My intuition says that something, most likely corrigibility in the Max Harms sense, will enable us to get pretty powerful AIs while postponing the big philosophical questions.
Are there any pivotal acts that aren’t philosophically loaded?
My intuition says there will be pivotal processes that don’t require any special inventions. I expect that AIs will be obedient when they initially become capable enough to convince governments that further AI development would be harmful (if it would in fact be harmful).
The combination of worried governments and massive AI-enhanced surveillance seems likely to be effective.
If we need a decades-long pause, then the world will need to successfully notice and orient to that fact. By default I expect tons of economic and political pressure from various actors trying to get more AI power even if there’s broad agreement that it’s dangerous.
I expect this to get easier to deal with over time. Maybe job disruptions will get voters to make AI concerns their top priority. Maybe the AIs will make sufficiently convincing arguments. Maybe a serious mistake by an AI will create a fire alarm.
I expect that AIs will be obedient when they initially become capable enough to convince governments that further AI development would be harmful (if it would in fact be harmful).
Seems like “the AIs are good enough at persuasion to persuade governments, and someone is deploying them for that” arrives right when you need very high confidence that they’re obedient (and don’t have some kind of agenda). If they can persuade governments, they can also persuade you of things.
I also think it gets to a point where I’d sure feel way more comfortable if we had more satisfying answers to “where exactly are we supposed to draw the line between ‘informing’ and ‘manipulating’” (I’m not 100% sure what you’re imagining here tho)
I’m assuming that the AI can accomplish its goal by honestly informing governments. Possibly that would include some sort of demonstration of the AI’s power that would provide compelling evidence that the AI would be dangerous if it weren’t obedient.
I’m not encouraging you to be comfortable. I’m encouraging you to mix a bit more hope in with your concerns.