I haven’t read much in the super-intelligent AI realm, but perhaps a relatively naive observer has some positive value. If we get to the point of producing AI that seems remotely super-intelligent, we’ll stick firewalls around it. I don’t think the suggested actions of a super-intelligent AI will be harmful in an incomprehensible way. An exception would be if it created something like the world’s funniest joke. The problem with HAL was that they gave him control of spacecraft functions. I say we don’t give ‘hands’ to the big brains, and we don’t give big brains to the hands, and then I won’t lose much sleep.
I believe the standard objections are that it’s far more intelligent and quick-of-thought than us, so: it can beat your firewalls; it’s ludicrously persuasive; it can outwit us with advice that subtly serves its ends; it could invent “basilisks” like the world’s funniest joke; and even if we left it alone on a mainframe with no remote access and no input or output, it could work out how to escape and/or kill us with clever use of cooling fans or something.
Here’s an example of why Eliezer suggests that you be much more paranoid.
Thanks for the pointers into what is a large and complex subject. I’m not remotely worried about things coming in from the stars. As for letting the AI out of the jar, I’m a bit perplexed. The transcripts are not available for review? If not, what seems relevant is the idea that an ideal encryption system has to be public so the very smartest people can try to poke holes in it. Of course, the political will to keep an AI in the box may be lacking—if you don’t let it out, someone else will let another one out somewhere else. Seems related to the commercial release of genetically modified plants, which in some cases may have been imprudent.
Sounds like you’ve got the “things from the stars” story flipped—in that parable, we (or our more-intelligent doppelgangers) are the AI, being simulated in some computer by weird 5-dimensional aliens. The point of the story is that high processing speed and power relative to whoever’s outside the computer is a ridiculously great advantage.
Yeah, I think the idea behind keeping the transcripts unavailable is to force an outside view—“these people thought they wouldn’t be convinced, and they were” rather than “but I wouldn’t be convinced by that argument”. Though possibly there are other, shadier reasons! As for the encryption metaphor, I guess in this case the encryption is known (people) but the attack is unknown—and in fact whatever attack would actually be used by an AI would be different and better, so we don’t really get a chance to prepare to defend against it.
And yep, that’s another standard objection—we can’t just make safely constrained AIs, because someone else will make an unconstrained AI, therefore the most important problem to work on is how to make a safe and unconstrained AI before we die horribly.