We’ve got to deal with politics eventually. The whole world isn’t going to listen to the Singularity Institute just because they’ve got a Friendly AI, and it’s not like those cognitive biases will disappear by that time.
Are you saying that an AGI would distribute relevant information to the public, compelling them to make sound political choices?
That doesn’t sound very likely to me for either a friendly or an unfriendly AI. Letting people feel disenfranchised might be bad Fun Theory, but it would take a lot more than distribution of relevant information to get ordinary, biased humans to stop fucking up our own society.
As a general rule, I’d say that if a plan sounds unlikely to effectively fix our problems, an FAI is probably not going to do that.
I thought he was saying that once you have a Super AI, you don’t have to deal with politics.
That doesn’t sound like something I’d infer from his previous comment.