In a game of chicken, do the smart have an advantage over the stupid?
The AI’s intelligence allows it to devise convincing commitments, but it also allows it to fake them. You know in advance that if the AI throws a fake commitment at you, it will look indistinguishable from a real one as far as you can tell, so should you trust any commitment you observe?
And if you choose to unplug, presumably the AI knew you would do that and would therefore not have made a real commitment that would backfire?
I’m going to assume that you can estimate something about the AI’s level of intelligence and capability—that’s what we Bayesians do. If it might be sufficiently smarter than you to convince you of anything, you probably shouldn’t interact with it if you can avoid it.