I can’t remember where I first came across the idea (maybe from Daniel Dennett), but the main argument against strong AI is that it’s simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all of the world’s relevant resources and scientists onto it. But who would pay for such a thing?
What’s the ROI for a super-intelligent, self-aware machine? Not very much, I should think—especially considering the potential dangers.
So yeah, we’ll certainly produce machines like the robots in Interstellar—clever expert systems with a simulacrum of self-awareness. Because there’s money in it.
But the real thing? Not likely. It only becomes likely much further down the line, once it’s cheap enough to do for fun. And I think by that time, experience with less powerful genies will have given us enough feedback to do it safely.
What’s the ROI for a super-intelligent, self-aware machine?
That clearly depends on how super “super-intelligent” is. For a trivial example, imagine an AI that can successfully trade in the global financial markets.
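To put rough numbers on the ROI (every figure here is a made-up assumption), here’s a back-of-the-envelope sketch in Python: even a modest, consistent edge over the market compounds into an enormous payoff over a couple of decades.

```python
# Toy compounding arithmetic; all numbers below are hypothetical assumptions.
principal = 1_000_000.0   # starting capital, in dollars
market_return = 0.07      # assumed average annual market return
ai_edge = 0.05            # assumed extra annual return from the AI trader
years = 20

# Compound growth with and without the AI's edge.
market_value = principal * (1 + market_return) ** years
ai_value = principal * (1 + market_return + ai_edge) ** years

print(f"Index fund after {years} years: ${market_value:,.0f}")
print(f"AI trader after {years} years:  ${ai_value:,.0f}")
print(f"Value of the AI's edge:         ${ai_value - market_value:,.0f}")
```

Under those assumptions the AI adds roughly $5.8 million on a $1 million stake, and the gap only widens with more capital or a bigger edge.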
What happens if it doesn’t want to—if it decides to do digital art or start life in another galaxy?
That’s the thing: a self-aware, intelligent entity isn’t bound to do the tasks you ask of it, hence the poor ROI. Humans are already such entities, but far cheaper to make, so a few who go off and become monks isn’t a big problem.
You give it proper incentives :-)
Or, even simpler X-)
Nope, remember, we’re talking about a super-intelligent entity.
I don’t think AI will be incredibly expensive. There is a tendency to believe that hard problems require expensive and laborious solutions.
Building a flying machine was a hard problem. An impossible problem. But two guys from a bicycle shop built the first airplane on their own. A lot of hard math problems are solved by lone geniuses. Or by the iterative work of a lot of lone geniuses building on each other. But rarely by large organized projects.
And there is a ton of gain in building smarter and smarter AIs. You can use them to automate more and more jobs, or do things even humans can’t do.
The robots in Interstellar were AGI. They could fully understand English and work in unrestricted environments. That puts them at, or very close to, human-level AI. But there’s no reason advancement has to stop at human level. People will continue to tweak it, run it on bigger and faster computers, and eventually have it work on its own code.