The problems that I see with friendly AGI are:
1) It’s not well understood outside of AI research, so the scientists who create it will build what they think is the friendliest AI possible. I understand what Eliezer is saying about not using his personal values; instead he uses his personal interpretation of something else. Eliezer says that making a world which works by “better rules” and then fading away would not be a “god to rule us all”, but who decides on those rules (or on the processes by which the AI decides on those rules)? Ultimately it’s the coders who design the thing. It’s a very small group of people with specialized knowledge changing the fate of the entire human race.
2) Do we have any reason to believe that a single foom will drastically increase an AI’s intelligence, as opposed to making it just a bit smarter? Typically, recursive self-improvement makes significant headway only until the marginal return on further improvement is eclipsed by other (generally newer) projects.
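To make the intuition concrete, here is a toy model of my own (the numbers and the decay factor are illustrative assumptions, not anything from the foom debate itself): if each self-improvement cycle yields a gain that is a constant fraction r of the previous cycle’s gain, then for r below 1 the gains form a geometric series and intelligence converges to a finite ceiling, i.e. “just a bit smarter”, while for r at or above 1 it runs away.

```python
def self_improve(initial=1.0, first_gain=0.5, r=0.8, cycles=50):
    """Toy model: each improvement cycle adds a gain, and each
    successive gain is r times the previous one."""
    intelligence = initial
    gain = first_gain
    for _ in range(cycles):
        intelligence += gain
        gain *= r
    return intelligence

# Diminishing returns (r < 1): converges toward
# initial + first_gain / (1 - r) = 1 + 0.5 / 0.2 = 3.5
print(self_improve(r=0.8))

# Compounding returns (r > 1): the foom scenario, growth without bound.
print(self_improve(r=1.1, cycles=20))
```

The whole disagreement, on this framing, is an empirical question about whether r stays above 1 across many cycles; the model says nothing about which is true, it only shows how sharply the two regimes differ.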
3) If an AGI could become powerful enough to rule the world in a short time span, any group which disagrees with how an AGI project is going will try to create its own before the first one is finished. This is a prisoner’s-dilemma arms-race scenario. Considerations about future friendliness could be put on hold in order to get it out “before those damn commies do”.
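The prisoner’s-dilemma structure can be spelled out with a payoff matrix (the specific numbers are my own illustrative assumptions; only their ordering matters). Each side chooses to take care over friendliness or to rush; rushing strictly dominates for each player, so the equilibrium is mutual rushing even though mutual care leaves both better off.

```python
# (our choice, their choice) -> (our payoff, their payoff)
payoffs = {
    ("care", "care"): (3, 3),  # both take safety seriously, shared benefit
    ("care", "rush"): (0, 4),  # they deploy first; we get nothing
    ("rush", "care"): (4, 0),  # we deploy first
    ("rush", "rush"): (1, 1),  # race to the bottom on friendliness
}

# "rush" strictly dominates "care": whatever the other side does,
# rushing pays more than taking care.
for their_choice in ("care", "rush"):
    assert payoffs[("rush", their_choice)][0] > payoffs[("care", their_choice)][0]

# Both sides reasoning this way land on (rush, rush) with payoff (1, 1),
# worse for everyone than (care, care) at (3, 3): the classic dilemma.
print(payoffs[("rush", "rush")], "vs", payoffs[("care", "care")])
```

This is why the problem reads as political rather than technical: no amount of safety research by one lab changes the other lab’s incentive to rush.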
4) Creating an AGI before the opposition would require vast resources, so the process would almost certainly be undertaken by governments. I’m imagining the cast of characters from Dr. Strangelove sitting in the War Room and telling the programmers and scientists how to design their AI.
In short, I think the biggest hurdles are political, and so I’m not very optimistic they’ll be solved. Trying to create a friendly AI in response to someone else creating a perceived unfriendly AI is a rational thing to do, but starting the first friendly-AI project may not be.
I don’t see what’s so bad about a race of machines wiping us out, though; we’re all going to die and be replaced by our children in one way or another anyway.