Does anyone have some good primary/canonical/especially insightful sources on the question of “Once we make a superintelligent AI, how do we get people to do what it says?”
I’m trying to hold the discussion to the question as posed, rather than get into the weeds on “how would we know the AI’s solutions were good,” “how do we know it’s benign,” or “evil AI in a box,” as I know where to look for that information.
So assume (if you will) all other problems with AI are solved and that the AI’s solutions are perfect except that they are totally opaque. “To fix global warming, inject 5.024 mol of boron into the ionosphere at the following GPS coordinates via a clone of Buzz Aldrin in a dirigible...”. And then maybe global warming would be solved, but Exxon’s PR team spends $30 million on a campaign to convince people it was actually because we all used fewer plastic straws, because Exxon’s baby AI is telling them that the superintelligence is about to tell us to dismantle Exxon and execute its board of directors by burning at the stake.
Once one species of primate evolves to be much smarter than the others, how will it come about that the others do as it says?
—For the most part, it doesn’t matter whether the others do as it says. The other primates aren’t the ones in the driver’s seat, literally and figuratively.
—But when it matters, the super-apes (humans) will figure out a variety of tricks and bribes that work most of the time.
Or give me some key words to google.