Actually, you can spell out the argument very briefly. Most people, however, will immediately reject one or more of the premises due to cognitive biases that are hard to overcome.
It seems like you’re essentially saying “This argument is correct. Anyone who thinks it is wrong is irrational.” Could probably do without that; the argument is far from as simple as you present it. Specifically, the last point:
At minimum, this means any AI as smart as a human, can be expected to become MUCH smarter than human beings—probably smarter than all of the smartest minds the entire human race has ever produced, combined, without even breaking a sweat.
So I agree that there’s no reason to assume an upper bound on intelligence, but it seems like you’re arguing that hard takeoff is inevitable, which as far as I’m aware has never been shown convincingly.
Furthermore, even if you suppose that Foom is likely, it’s not clear where the threshold for Foom is. Could a sub-human level AI foom? What about human-level intelligence? Or maybe we need super-human intelligence? Do we have good evidence for where the Foom-threshold would be?
I think the problems with resolving the Foom debate stem from the fact that “intelligence” is still largely a black box. It’s very nice to say that intelligence is an “optimization process”, but that is a fake explanation if I’ve ever seen one because it fails to explain in any way what is being optimized.
I think you paint in broad strokes. The Foom issue is not resolved.
It seems like you’re essentially saying “This argument is correct. Anyone who thinks it is wrong is irrational.”
No, what I’m saying is, I haven’t yet seen anyone provide any counterarguments to the argument itself, vs. “using arguments as soldiers”.
The problem is that it’s not enough to argue that a million things could stop a foom from going supercritical. To downgrade AGI as an existential threat, you have to argue that no human being will ever succeed in building a human or even near-human AGI. (Just like to downgrade bioweapons as an existential threat, you have to argue that no individual or lab will ever accidentally or on purpose release something especially contagious or virulent.)
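To make the shape of that argument concrete, here is a toy calculation in Python. The per-attempt success probability and the number of attempts are purely illustrative assumptions, not estimates of anything; the point is only that even if each individual attempt at AGI is very unlikely to succeed, the probability that at least one of many independent attempts succeeds grows quickly.

```python
# Toy illustration with made-up numbers: the chance that *no one ever* succeeds
# shrinks rapidly as the number of independent attempts grows.
def p_at_least_one_success(p_single: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries succeeds."""
    return 1.0 - (1.0 - p_single) ** attempts

for attempts in (10, 100, 1000):
    print(attempts, round(p_at_least_one_success(0.01, attempts), 3))
# 10 -> 0.096, 100 -> 0.634, 1000 -> ~1.0
```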
Furthermore, even if you suppose that Foom is likely, it’s not clear where the threshold for Foom is. Could a sub-human level AI foom? What about human-level intelligence? Or maybe we need super-human intelligence? Do we have good evidence for where the Foom-threshold would be?
It’s fairly irrelevant to the argument: there are many possible ways to get there. The killer argument, however, is that if a human can build a human-level intelligence, then it is already super-human, as soon as you can make it run faster than a human. And you can limit the self-improvement to just finding ways to make it run faster: you still end up with something that can and will kick humanity’s butt unless it has a reason not to.
Even ems—human emulations—have this same problem, and they might actually be worse in some ways, as humans are known for doing worse things to each other than mere killing.
It’s possible that there are also sub-human foom points, but it’s not necessary for the overall argument to remain solid: unFriendly AGI is no less an existential risk than bioweapons are.
The killer argument, however, is that if a human can build a human-level intelligence, then it is already super-human, as soon as you can make it run faster than a human.
Personally, what I find hardest to argue against is that a digital intelligence can make itself run in more places.
In the inconvenient case of a human upload running at human speed or slower on a building’s worth of computers, you’ve still got a human who can spend most of their waking hours earning money, with none of the overhead associated with maintaining a body and with the advantage of global celebrity status as the first upload. As soon as they can afford to run a copy of themself, the two of them together can immediately start earning twice as fast. Then, after as much time again, four times as fast; then eight times; and so on until the copies have grabbed all the storage space and CPU time that anyone’s willing to sell or rent out (assuming they don’t run out of potential income sources).
Put another way: it seems to me that “fooming” doesn’t really require self-improvement in the sense of optimizing code or redesigning hardware; it just requires fast reproduction, which is made easier in our particular situation by the huge and growing supply of low-hanging storage-space and CPU-time fruit ready for the first digital intelligence that claims it.
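Here is a minimal sketch of the arithmetic behind that reproduction argument. Every number in it (income per copy per year, the cost of renting enough hardware for one more copy, the total number of copies the market could host) is a made-up placeholder; the only point is that once the first copy is affordable, growth is geometric with a roughly constant doubling time.

```python
# Sketch of the copy-doubling argument; all parameter values are illustrative
# placeholders, not estimates.
def years_to_saturation(income_per_copy_year=100_000,   # earnings of one copy per year
                        cost_per_copy=1_000_000,        # cost of hardware for one more copy
                        max_copies=1_000_000):          # resources the market will rent out
    copies, years = 1, 0.0
    while copies < max_copies:
        # Each existing copy saves up for one new copy of itself,
        # so the population doubles after a fixed interval.
        years += cost_per_copy / income_per_copy_year
        copies *= 2
    return years, copies

print(years_to_saturation())  # with these numbers: 10 years per doubling, ~20 doublings
```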
This assumes that every CPU architecture is suitable for the theoretical AGI, i.e. that it can run on any computational substrate. It also assumes that it can easily acquire more computational substrate or create new substrate of its own. I do not believe those assumptions are reasonable, whether the substrate is acquired economically or by means of social engineering. Without enabling technologies like advanced real-world nanotechnology, the AGI won’t be able to create new computational substrate without the whole economy of the world supporting it.
Supercomputers like the one used for the IBM Blue Brain project cannot simply be replaced by taking control of a few botnets. They use highly optimized architectures that need, for example, memory latency and bandwidth kept within certain bounds.
Actually, every CPU architecture will suffice for the theoretical AGI, if you’re willing to wait long enough for its thoughts. ;-)
If you accept the Church–Turing thesis that everything computable is computable by a Turing machine, then yes. But even then the speed improvements are highly dependent on the architecture available. And if you instead adhere to the stronger Church–Turing–Deutsch principle, then the ultimate computational substrate an artificial general intelligence may need might be one incorporating non-classical physics, e.g. a quantum computer. This would significantly reduce its ability to make use of most available resources to seed copies of itself or for high-level reasoning.
I just don’t see there being enough unused computational resources available in the world that, even in the case that all computational architecture is suitable, it could produce more than a few copies of itself. Those few copies would then also be highly susceptible to brute-force measures by humans, such as cutting off the bandwidth they would need.
I’m simply trying to show that there are arguments to weaken most of the dangerous pathways that could lead to existential risks from superhuman AI.
A classical computer can simulate a quantum one—just slowly.
You’re right, but exponential slowdown eats a lot of gains in processor speed and memory. This could be a problem for arguments from substrate independence.
Straightforward simulation is exponentially slower—n qubits require simulating amplitudes of 2^n basis states. We haven’t actually been able to prove that that’s the best we can do, however. BQP certainly isn’t expected to be able to solve NP-complete problems efficiently, for instance. We’ve only really been able to get exponential speedups on very carefully structured problems with high degrees of symmetry. (Lesser speedups have also been found on less structured problems, it’s true.)
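For a sense of scale, here is a quick back-of-envelope sketch of that exponential cost, assuming a plain state-vector simulation that stores one complex amplitude per basis state; the 16 bytes per amplitude is an assumption, the 2^n growth is the point.

```python
# Memory needed to hold the 2**n complex amplitudes of an n-qubit state vector.
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (20, 30, 40, 50):
    print(n, "qubits:", statevector_bytes(n) / 2**30, "GiB")
# 20 qubits ~0.016 GiB, 30 ~16 GiB, 40 ~16384 GiB (16 TiB), 50 ~16 PiB
```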
Just like to downgrade bioweapons as an existential threat, you have to argue that no individual or lab will ever accidentally or on purpose release something especially contagious or virulent.
The problem here is not that destruction is easier than benevolence; everyone agrees on that. The problem is that the SIAI is not arguing about grey goo scenarios but about something that is not just very difficult to produce but that also needs an incentive to do so. The SIAI is not arguing about the possibility of a dam bursting, but about a dam failure deliberately caused by the dam itself. So why isn’t, for example, nanotechnology a more likely and therefore bigger existential risk than AGI?
Even ems—human emulations—have this same problem, and they might actually be worse in some ways, as humans are known for doing worse things to each other than mere killing.
As I said in other comments, this is an argument one should take seriously. But there are also arguments that outweigh this path, and all the others, to some extent. It may very well be that by the time we reach the point of human emulation we have either already merged with our machines, or that we are still faster and better than our machines and simulations alone. It may also very well be that the first emulations, as is the case today, run at much slower speeds than the original, and that by the time any emulation reaches a standard-human level we are already a step further ourselves, or in our understanding and security measures.
unFriendly AGI is no less an existential risk than bioweapons are.
Antimatter weapons are less an existential risk than nuclear weapons although it is really hard to destroy the world with nukes and really easy to do so with antimatter weapons. The difference is that antimatter weapons are as much harder to produce, acquire and use than nuclear weapons as they are more efficient tools of destruction.
So why isn’t for example nanotechnology a more likely and therefore bigger existential risk than AGI?
If you define “nanotechnology” to include all forms of bioengineering, then it probably is.
The difference, from an awareness point of view, is that the people doing bioengineering (or creating antimatter weapons) have a much better idea that what they’re doing is potentially dangerous/world-ending than AI developers are likely to have. The fact that many AI advocates put forth pure fantasy reasons why superintelligence will be nice and friendly by itself (see mwaser’s ethics claims, for example) is evidence that they are not taking the threat seriously.
Antimatter weapons are less an existential risk than nuclear weapons although it is really hard to destroy the world with nukes and really easy to do so with antimatter weapons. The difference is that antimatter weapons are as much harder to produce, acquire and use than nuclear weapons as they are more efficient tools of destruction.
Presumably, if you are researching antimatter weapons, you have at least some idea that what you are doing is really, really dangerous.
The issue is that AGI development is a bit like trying to build a nuclear power plant, without having any idea where “critical mass” is, in a world whose critical mass is discontinuous (i.e., you may not have any advance warning signs that you are approaching it, like overheating in a reactor), using nuclear engineers who insist that the very idea of critical mass is just a silly science fiction story.
What led you to believe that the space of possible outcomes where an AI consumes all resources (including humans) is larger than the space of outcomes where it doesn’t? For some reason you seem to assume that the unbounded incentive to foom and consume the universe comes naturally to any constructed intelligence, while any other incentive is very difficult to implement. What I see is a much larger number of outcomes where an intelligence does nothing without some hardcoded or evolved incentive. Crude machines do things because that’s all they can do; the number of different ways for them to behave is very limited. Intelligent machines, however, have high degrees of freedom to behave (pathways to follow), and with this freedom comes choice, and choice needs volition: an incentive, the urge to follow one way but not another. You seem to assume that the will to foom and consume is simply given, that it does not have to be carefully and deliberately hardcoded or evolved, yet that the will to constrain itself to given parameters is really hard to achieve. I just don’t think that this premise is reasonable, and it is what you base all your arguments on.
Have you read The Basic AI Drives?
I suspect the difference in opinions here is based on different answers to the question of whether the AI should be assumed to be a recursive self-improver.
So why isn’t for example nanotechnology a more likely and therefore bigger existential risk than AGI?
That is a good question and I have no idea. The degree of existential threat there is most significantly determined by relative ease of creation. I don’t know enough to be able to predict which would be produced first—self-replicating nanotechnology or an AGI. SIAI believes the former is likely to be produced first, and I do not know whether or not they have supported that claim.
Other factors contributing to the risk are:
Complexity—the number of ways the engineer could screw up while creating it in a way that would be catastrophic. The ‘grey goo’ risk is concentrated more specifically in the self-replication mechanism of the nanotech, while just about any mistake in an AI could kill us.
Awareness of the risks. It is not too difficult to understand the risks when creating a self-replicating nanobot. It is hard to imagine an engineer creating one not seeing the problem and being damn careful. Unfortunately, it is not hard to imagine Ben.
I find myself confused at the fact that Drexlerian nanotechnology of any sort is advocated as possible by people who think physics and chemistry work. Materials scientists—i.e. the chemists who actually work with nanotechnology in real life—have documented at length why his ideas would need to violate both.
This is the sort of claim that makes me ask advocates to document their Bayesian network. Do their priors include the expert opinions of materials scientists, who (pretty much universally as far as I can tell) consider Drexler and fans to be clueless?
(The RW article on nanotechnology is mostly written by a very annoyed materials scientist who works at nanoscale for a living. It talks about what real-life nanotechnology is and includes lots of references that advocates can go argue with. He was inspired to write it by arguing with cryonics advocates who would literally answer almost any objection to its feasibility with “But, nanobots!”)
That RationalWiki article is a farce. The central “argument” seems to be:
imagine a car production line with its hi-tech robotic arms that work fine at our macroscopic scale. To get a glimpse of what it would be like to operate a production line on the microscopic scale, imagine filling the factory completely with gravel and trying to watch the mechanical arms move through it—and then imagine if the gravel was also sticky.
So: they don’t even know that Drexler-style nanofactories operate in a vacuum!
They also need to look up “Kinesin Transport Protein”.
Drexler-style nanofactories don’t operate in a vacuum, because they don’t exist and no-one has any idea whatsoever how to make such a thing exist, at all. They are presently a purely hypothetical concept with no actual scientific or technological grounding.
The gravel analogy is not so much an argument as a very simple example for the beginner that a nanotechnology fantasist might be able to get their head around; the implicit actual argument would be “please, learn some chemistry and physics so you have some idea what you’re talking about.” Which is not an argument that people will tend to accept (in general people don’t take any sort of advice on any topic, ever), but when experts tell you you’re verging on not even wrong and there remains absolutely nothing to show for the concept after 25 years, it might be worth allowing for the possibility that Drexlerian nanotechnology is, even if the requisite hypothetical technology and hypothetical scientific breakthroughs happen, ridiculously far ahead of anything we have the slightest understanding of.
“The proposal for Drexler-style nanofactories has them operating in a vacuum”, then.
If these wannabe-critics don’t understand that then they have a very superficial understanding of Drexler’s proposals—but are sufficiently unaware of that to parade their ignorance in public.
The “wannabe-critics” are actual chemists and physicists who actually work at nanoscale—Drexler advocates tend to fit neither qualification—and who have written long lists of reasons why this stuff can’t possibly work and why Drexler is to engineering what Ayn Rand is to philosophy.
I’m sure they’ll change their tune when there’s the slightest visible progress on any of Drexler’s proposals; the existence proof would be pretty convincing.
Hah! A lot of the edits on that article seem to have been made by you!
Yep. Mostly written by Armondikov, who is said annoyed materials scientist. I am not, but spent some effort asking other materials scientists who work or have worked at nanoscale for their expert opinions.
Thankfully, the article on the wiki has references, as I noted in my original comment.
So what were the priors that went into your considered opinion?
It’s fairly irrelevant to the argument: there are many possible ways to get there
I don’t see how you can say that. It’s exceedingly relevant to the question at hand, which is: “Should Ben Goertzel avoid making OpenCog due to concerns of friendliness?” If the Foom-threshold is exceedingly high (several to dozens of times the “level” of human intelligence), then it is overwhelmingly unlikely that OpenCog has a chance to Foom. It’d be something akin to the Wright brothers building a Boeing 777 instead of the Wright flyer. Total nonsense.
Ah. Well, that wasn’t the question I was discussing. ;-)
(And I would think that the answer to that question would depend heavily on what OpenCog consists of.)
it seems like you’re arguing that hard takeoff is inevitable, which as far as I’m aware has never been shown convincingly.
So when did the goalposts get moved to proving that hard takeoff is inevitable?
The claim that research into FAI theory is useful requires only that it be shown that uFAI might be dangerous. Showing that is pretty much a slam dunk.
The claim that research into FAI theory is urgent requires only that it be shown that hard takeoff might be possible (with a probability > 2% or so).
And, as the nightmare scenarios of de Garis suggest, even if the fastest possible takeoff turns out to take years to accomplish, such a soft, but reckless, takeoff may still be difficult to stop short of war.
Assuming there aren’t better avenues to ensuring a positive hard takeoff.
Good point. Certainly the research strategy that SIAI seems to currently be pursuing is not the only possible approach to Friendly AI, and FAI is not the only approach to human-value-positive AI. I would like to see more attention paid to a balance-of-power approach—relying on AIs to monitor other AIs for incipient megalomania.
Calls to slow down, not publish, not fund seem common in the name of friendliness.
However, unless those are internationally coordinated, a highly likely effect will be to ensure that superintelligence is developed elsewhere.
What is needed most—IMO—is for good researchers to be first. So—advising good researchers to slow down in the name of safety is probably one of the very worst possible things that spectators can do.
So when did the goalposts get moved to proving that hard takeoff is inevitable?
It doesn’t even seem hard to prevent. Topple civilization for example. It’s something that humans have managed to achieve regularly thus far and it is entirely possible that we would never recover sufficiently to construct a hard takeoff scenario if we nuked ourselves back to another dark age.
Furthermore, even if you suppose that Foom is likely, it’s not clear where the threshold for Foom is. Could a sub-human level AI foom? What about human-level intelligence? Or maybe we need super-human intelligence? Do we have good evidence for where the Foom-threshold would be?
A “threshold” implies a linear scale for intelligence, which is far from given, especially for non-human minds. For example, say you reverse engineer a mouse’s brain, but then speed it up, and give it much more memory (short-term and long-term—if those are just RAM and/or disk space on a computer, expanding those is easy). How intelligent is the result? It thinks way faster than a human, remembers more, can make complex plans … but is it smarter than a human?
Probably not, but it may still be dangerous. Same for a “toddler AI” with those modifications.
Human-level intelligence is fairly clearly just above the critical point (just look at what is happening now). However, machine brains have different strengths and weaknesses. Sub-human machines could accelerate the ongoing explosion a lot—if they are better than humans at just one thing—and such machines seem common.
Even the Einstein of monkeys is still just a monkey.
Replace “threshold” with “critical point.” I’m using this terminology because EY himself uses it to frame his arguments. See Cascades, Cycles, Insight, where Eliezer draws an analogy between a fission reaction going critical and an AI FOOMing.
It thinks way faster than a human, remembers more, can make complex plans … but is it smarter than a human?
This seems to be tangential, but I’m gonna say no, as long as we assume that the rat brain doesn’t spontaneously acquire language or human-level abstract reasoning skills.