We’ve got to deal with politics eventually. The whole world isn’t going to listen to the Singularity Institute just because they’ve got a Friendly AI, and it’s not like those cognitive biases will disappear by that time.
If an AGI wants you to listen, you won’t have any choice. If it doesn’t want you to listen, you won’t have the option. The set of “problems for us after we get FAI” is the null set.
Kind of, almost. It could be that we (implicitly) choose to have problems for ourselves.
In case it’s not clear: this means the FAI causing problems for us on our behalf, not us literally making a choice we’re aware of.
(Or ‘choosing not to intervene to solve all problems’. The difference matters to some, even if it is somewhat arbitrary.)
Are you saying that an AGI would distribute relevant information to the public, compelling them to make sound political choices?
That doesn’t sound very likely to me for either a friendly or an unfriendly AI. Letting people feel disenfranchised might be bad Fun Theory, but it would take a lot more than the distribution of relevant information to get ordinary, biased humans to stop fucking up our own society.
As a general rule, I’d say that if a plan sounds unlikely to effectively fix our problems, an FAI is probably not going to do that.
I thought he was saying that once you have a Super AI, you don’t have to deal with politics.
That doesn’t sound like something I’d infer from his previous comment.