I generally agree with your position on the Sequences, but it seems to me that it is possible to hang around this website and have meaningful discussions without worshiping the Sequences or Eliezer Yudkowsky. At least it works for me. As for being a highly involved/high status member of the community, especially the offline one, I don’t know.
Anyway, regarding the point about super-intelligence that you raised, I charitably interpret the position of the AI-risk advocates not as the claim that super-intelligence would be in principle outside the scope of human scientific inquiry, but as the claim that a super-intelligent agent would be more efficient at understanding humans than humans would be at understanding it, giving the super-intelligent agent an edge over humans.
I think that the AI-risk advocates tend to exaggerate various elements of their analysis: they probably underestimate time to human-level AI and time to super-human AI, they may overestimate the speed and upper bounds to recursive self-improvement (their core arguments based on exponential growth seem, at best, unsupported).
Moreover, it seems that they tend to conflate super-intelligence with a sort of near-omniscience: They seem to assume that a super-intelligent agent will be a near-optimal Bayesian reasoner with an extremely strong prior that will allow it to gain a very accurate model of the world, including all the nuances of human psychology, from a very small amount of observational evidence and few or no interventional experiments. Recent discussion here. Maybe this is the community bias that you were talking about, the over-reliance on abstract thought rather than evidence, projected onto a hypothetical future AI. It seems dubious to me that this kind of extreme inference is even physically possible, and if it is, we are certainly not anywhere close to implementing it. All the recent advances in machine learning, for instance, rely on processing very large datasets.
Anyway, as much as they exaggerate the magnitude and urgency of the issue, I think that the AI-risk advocates have a point when they claim that keeping a system much more intelligent than ourselves under control would be a non-trivial problem.
I think that the AI-risk advocates tend to exaggerate various elements of their analysis: they probably underestimate time to human-level AI and time to super-human AI
It’s worth keeping in mind that AI-risk advocates tend to be less confident that AGI is nigh than the top-cited scientists within AI are. People I know at MIRI and FHI are worried about AGI because it looks like a technology that’s many decades away, but one where associated safety technologies are even more decades away.
That’s consistent with the possibility that your criticism could turn out to be right. It could be that we’re less wrong than others on this metric and yet still very badly wrong in absolute terms. To make a strong prediction in this area is to claim to already have a pretty good computational understanding of how general intelligence works.
Moreover, it seems that they tend to conflate super-intelligence with a sort of near-omniscience: They seem to assume that a super-intelligent agent will be a near-optimal Bayesian reasoner
Can you give an example of a statement by a MIRI researcher that is better predicted by ‘X is speaking of the AI as a near-optimal Bayesian’ than by ‘X is speaking of the AI as an agent that’s as much smarter than humans as humans are smarter than chimpanzees, but is still nowhere near optimal’? (Or ‘an agent that’s as much smarter than humans as humans are smarter than dogs’...) I’m not seeing why saying ‘Bob the AI could be 100x more powerful than a human’, for example, commits one to a view about how close Bob is to optimal.
Can you give an example of a statement by a MIRI researcher that is better predicted by ‘X is speaking of the AI as a near-optimal Bayesian’ than by ‘X is speaking of the AI as an agent that’s as much smarter than humans as humans are smarter than chimpanzees, but is still nowhere near optimal’?
Maybe they are not explicitly saying “near-optimal”, but it seems to me that they are using models like Solomonoff Induction and AIXI as intuition pumps, and they are getting these beliefs of extreme intelligence from there. Anyway, do you disagree that MIRI in general expects the kind of low-data, low-experimentation, prior-driven learning that I talked about to be practically possible?
Maybe they are not explicitly saying “near-optimal”, but it seems to me that they are using models like Solomonoff Induction and AIXI as intuition pumps, and they are getting these beliefs of extreme intelligence from there.
I don’t think anyone at MIRI arrived at worries like ‘AI might be able to deceive their programmers’ or ‘AI might be able to design powerful pathogens’ by staring at the equation for AIXI or AIXItl. AIXI is a useful idea because it’s well-specified enough to let us have conversations that are more than just ‘here are my vague intuitions vs. your vague intuitions’; it’s math that isn’t quite the right math to directly answer our questions, but at least gets us outside of our own heads, in much the same way that an empirical study can be useful even if it can’t directly answer our questions.
Investigating mathematical and scientific problems that are near to the philosophical problems we care about is a good idea, when we still don’t understand the philosophical problem well enough to directly formalize or test it, because it serves as a point of contact with a domain that isn’t just ‘more vague human intuitions’. Historically this has often been a good way to make intellectual progress, though it’s important to keep in mind just how limited our results are.
AIXI is also useful because the problems we couldn’t solve even if we (impossibly) had recourse to AIXI often overlap with the problems where our theoretical understanding of intelligence is especially lacking, and where we may therefore want to concentrate our early research efforts.
The idea that AI will have various ‘superpowers’ comes more from:
(a) the thought that humans often vary a lot in how much they exhibit the power (without appearing to vary all that much in hardware);
(b) the thought that human brains have known hardware limitations, where existing machines (and a fortiori machines 50 or 100 years down the line) can surpass humans by many orders of magnitude; and
(c) the thought that humans have many unnecessary software limitations, including cases where machines currently outperform humans. There’s also no special reason to expect evolution’s first stab at technology-capable intelligence to have stumbled on all the best possible software ideas.
A more common intuition pump is to simply note that limitations in human brains suggest speed superintelligence is possible, and it’s relatively easy to imagine speed superintelligence allowing one to perform extraordinary feats without imagining other, less well-understood forms of cognitive achievement. Rates of cultural and technological progress in human societies are a better (though still very imperfect) source of data than AIXI about how much improvement intelligence makes possible.
Anyway, do you disagree that MIRI in general expects the kind of low-data, low-experimentation, prior-driven learning that I talked about to be practically possible?
This should be possible to some extent, especially when it comes to progress in mathematics. We should also distinguish software experiments from physical experiments, since it’s a lot harder to keep an AI from performing the former, and the former are much easier to speed up in proportion to speed-ups in the experimenter’s ability to analyze results.
I don’t think there’s any specific consensus view about how much progress requires waiting for results from slow experiments. I frequently hear Luke raise the possibility that slow natural processes could limit rates of self-improvement in AI, but I don’t know whether he considers that a major consideration or a minor one.
I don’t think anyone at MIRI arrived at worries like ‘AI might be able to deceive their programmers’ or ‘AI might be able to design powerful pathogens’ by staring at the equation for AIXI or AIXItl.
In his quantum physics sequence, where he constantly talks (rants, actually) about Solomonoff Induction, Yudkowsky writes:
A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
Anna Salamon also mentions AIXI when discussing the feasibility of super-intelligence.

Mind you, I’m not saying that AIXI is not an interesting and possibly theoretically useful model; my objection is that MIRI people seem to have used it to set a reference class for their intuitions about super-intelligence.
Rates of cultural and technological progress in human societies are a better (though still very imperfect) source of data than AIXI about how much improvement intelligence makes possible.
Extrapolation is always an epistemically questionable endeavor. Intelligence is intrinsically limited by how predictable the world is. Efficiency (time complexity/space complexity/energy complexity/etc.) of algorithms for any computational task is bounded. Hardware resources also have physical limits.
This doesn’t mean that given our current understanding we can claim that human-level intelligence is an upper bound. That would be most likely false. But there is no particular reason to assume that the physically attainable bound will be enormously higher than human-level. The more extreme the scenario, the less probability we should assign to it, reasonably according to a light-tailed distribution.
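A minimal numerical sketch of that last claim (the choice of an exponential versus a Pareto prior, and the parameters, are illustrative assumptions of mine, not anything argued above): under a light-tailed prior the probability that the attainable ceiling is far above human level falls off extremely fast, while under a heavy-tailed prior it does not.

```python
import math

def exponential_tail(k, mean=2.0):
    """P(ceiling >= k) under an exponential (light-tailed) prior with the given mean."""
    return math.exp(-k / mean)

def pareto_tail(k, x_min=1.0, alpha=1.1):
    """P(ceiling >= k) under a Pareto (heavy-tailed) prior."""
    return (x_min / k) ** alpha if k >= x_min else 1.0

# 'k' is the attainable ceiling measured as a multiple of human level.
for k in (2, 10, 100, 1000):
    print(f"ceiling >= {k:>4}x human:  "
          f"light-tailed {exponential_tail(k):.1e}   "
          f"heavy-tailed {pareto_tail(k):.1e}")
```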
This should be possible to some extent, especially when it comes to progress in mathematics.
Ok, but my point is that it has not been established that progress in mathematics will automatically grant an AI “superpowers” in the physical world. And I’d say that even superpowers achieved by raw cognitive power alone are questionable. Theorem proving can be sped up, but there is more to math than theorem proving.
I think Eliezer mostly just used “Bayesian superintelligence” as a synonym for “superintelligence.” The “Bayesian” is there to emphasize the fact that he has Bayes-optimality as a background idea in his model of what-makes-cognition-work and what-makes-some-cognition-work-better-than-other-kinds, but Eliezer thought AI could take over the world long before he knew about AIXI or thought Bayesian models of cognition were important.
I don’t know as much about Anna’s views. Maybe she does assign more weight to AIXI as a source of data; the example you cited supports that. Though since she immediately follows up her AIXI example with “AIXI is a theoretical toy. How plausible are smarter systems in the real world?” and proceeds to cite some of the examples I mentioned above, I’m going to guess she isn’t getting most of her intuitions about superintelligence from AIXI either.
there is no particular reason to assume that the physically attainable bound will be enormously higher than human-level. The more extreme the scenario, the less probability we should assign to it, reasonably according to a light-tailed distribution.
I think our disagreement is about what counts as “extreme” or “extraordinary”, in the “extraordinary claims require extraordinary evidence” sense.
If I’m understanding your perspective, you think we should assume at the outset that humans are about halfway between ‘minimal intelligence’ and ‘maximal intelligence’—a very uninformed prior—and we should then update only very weakly in the direction of ‘humans are closer to the minimum than to the maximum’. Claiming that there’s plenty of room above us is an ‘extreme’ claim relative to that uninformed prior, so the epistemically modest thing to do is to stick pretty close to the assumption that humans are ‘average’, i.e. that the range of intelligence exhibited in humans with different cognitive abilities and disabilities represents a non-tiny portion of the range of physically possible intelligence.
My view is that we should already have updated strongly away from the ‘humans are average’ prior as soon as we acquired information about how humans arose—through evolution, a process that has computational resources and perseverance but no engineering ingenuity, constructing our current advantages out of chimpanzees’ over the course of just 250,000 generations. At that point we are no longer a randomly selected mind; our prior is swamped by all the rich information we have about ourselves as an evolved species, enhanced by culture but not by any deliberate genetic engineering or fine-grained neurosurgery. We have no more reason in the year 2015 to think that we represent 1⁄10 (or for that matter 1⁄10,000) of what is cognitively possible, than a person in the year 1850 would have reason to think that the fastest possible flying machine was only 10x as fast as the fastest bird, or that the most powerful possible bomb was only 10x as strong as the most powerful existing bomb.
Our knowledge of physics and of present technological capabilities, though still extremely flawed as data to generalize from, is rich enough to strongly shift a flat prior (informed by nothing more than ‘this bird / bomb / brain exists and the rest of the universe is unknown to me’). So although we should not be confident of any specific prediction, ‘humans are near the practical intelligence maximum’ is the extreme view that requires a lot of evidence to move toward. Superintelligence is unusual relative to what we can directly observe, not ‘extreme’ in an evidential sense.
We should also distinguish software experiments from physical experiments, since it’s a lot harder to keep an AI from performing the former, and the former are much easier to speed up in proportion to speed-ups in the experimenter’s ability to analyze results.
This is actually completely untrue and is an example of a typical misconception about programming—which is far closer to engineering than math. Every time you compile a program, you are physically testing a theory exactly equivalent to building and testing a physical machine. Every single time you compile and run a program.
If you speed up an AI (by speeding up its mental algorithms or giving it more hardware), you actually slow down the subjective speed of the world and of all other software systems in exact proportion. This has enormous consequences—some of which I explored here and here. Human brains operate at 1000 Hz or less, which suggests that a near-optimal (in terms of raw speed) human-level AGI could run at a 1,000,000x time dilation. However, that would effectively mean that the computers the AGI had access to would be subjectively slower by a factor of a million—so if it’s compiling code for 10 GHz CPUs, those subjectively run at 10 kilohertz.
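The arithmetic behind that last sentence can be sketched in a few lines (the figures are the ones the comment assumes; the helper function and variable names are illustrative, not part of the original argument):

```python
def subjective_rate(external_rate_hz, ai_speedup):
    """Clock rate of an external device as experienced by an AI that thinks
    `ai_speedup` times faster than a human (the time-dilation argument above)."""
    return external_rate_hz / ai_speedup

# Assumed figures from the comment: ~1 kHz neural firing vs. GHz-scale silicon
# suggests roughly a 1,000,000x subjective speed-up; the target CPU runs at 10 GHz.
ai_speedup = 1_000_000
cpu_clock_hz = 10e9

print(f"{subjective_rate(cpu_clock_hz, ai_speedup):.0f} Hz")  # 10000 Hz, i.e. ~10 kHz subjectively
```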
It’s worth keeping in mind that AI-risk advocates tend to be less confident that AGI is nigh than the top-cited scientists within AI are.

Cite? I think I remember Eliezer Yudkowsky and Luke Muehlhauser going for the usual “20 years from now” (in 2009) time to AGI prediction.

By contrast Andrew Ng says “Maybe hundreds of years from now, maybe thousands of years from now”.

Müller and Bostrom’s 2014 ‘Future progress in artificial intelligence: A survey of expert opinion’ surveyed the 100 top-cited living authors in Microsoft Academic Search’s “artificial intelligence” category, asking the question:

Define a “high-level machine intelligence” (HLMI) as one that can carry out most human professions at least as well as a typical human. [...] For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?
29 of the authors responded. Their median answer was a 10% probability of HLMI by 2024, a 50% probability of HLMI by 2050, and a 90% probability by 2070.
(This excludes those who said “never”; I can’t find info on whether any of the authors gave that answer, but in pooled results that also include 141 people from surveys of a “Philosophy and Theory of AI” conference, an “Artificial General Intelligence” conference, an “Impacts and Risks of Artificial General Intelligence” conference, and members of the Greek Association for Artificial Intelligence, 1.2% of the people in the overall pool (2 / 170) said we’d “never” have a 10% chance of HLMI, 4.1% (7 / 170) said “never” for 50% probability, and 16.5% (28 / 170) said “never” for 90%.)
In Bostrom’s Superintelligence (pp. 19-20), he cites the pooled results:
The combined sample gave the following (median) estimate: 10% probability of HLMI by 2022, 50% probability by 2040, and 90% probability by 2075. [...]
These numbers should be taken with some grains of salt: sample sizes are quite small and not necessarily representative of the general expert population. They are, however, in concordance with results from other surveys.
The survey results are also in line with some recently published interviews with about two-dozen researchers in AI-related fields. For example, Nils Nilsson has spent a long and productive career working on problems in search, planning, knowledge representation, and robotics; he has authored textbooks in artificial intelligence; and he has recently completed the most comprehensive history of the field written to date. When asked about arrival dates for [AI able to perform around 80% of jobs as well or better than humans perform], he offered the following opinion: 10% chance: 2030[;] 50% chance: 2050[;] 90% chance: 2100[.]
Judging from the published interview transcripts, Professor Nilsson’s probability distribution appears to be quite representative of many experts in the area—though again it must be emphasized that there is a wide spread of opinion: there are practitioners who are substantially more boosterish, confidently expecting HLMI in the 2020-40 range, and others who are confident either that it will never happen or that it is indefinitely far off. In addition, some interviewees feel that the notion of a “human level” of artificial intelligence is ill-defined or misleading, or are for other reasons reluctant to go on record with a quantitative prediction.
My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.
Luke has pretty much the same view as Bostrom. I don’t know as much about Eliezer’s views, but the last time I talked with him about this (in 2014), he didn’t expect AGI to be here in 20 years. I think a pretty widely accepted view at MIRI and FHI is Luke’s: “We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.”
Of course, there is a huge problem with expert surveys—at the meta-level they have a very poor predictive track record. There is the famous example that Stuart Russell likes to cite, where Rutherford said “anyone who looked for a source of power in the transformation of the atoms was talking moonshine”—the day before Leo Szilard conceived of the neutron-induced chain reaction. There is also the similar example of the Wright Brothers—some unknown guys without credentials claim to have cracked aviation when recognized experts like Langley have just failed in a major way and respected scientists such as Lord Kelvin claim the whole thing is impossible. The Wright Brothers then report their first successful manned flight and no newspaper will even publish it.
Maybe this is the community bias that you were talking about, the over-reliance on abstract thought rather than evidence, projected onto a hypothetical future AI.
You nailed it. (Your other points too.)
The claim [is] that a super-intelligent agent would be more efficient at understanding humans than humans would be at understanding it, giving the super-intelligent agent an edge over humans.
The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.
Anyway, as much as they exaggerate the magnitude and urgency of the issue, I think that the AI-risk advocates have a point when they claim that keeping a system much more intelligent than ourselves under control would be a non-trivial problem.
There is however a substantial difference between a non-trivial problem and an impossible problem. Non-trivial we can work with. I solve non-trivial problems for a living. You solve a non-trivial problem by hacking at it repeatedly until it breaks into components that are themselves well understood enough to be trivial problems. It takes a lot of work, and the solution is simply to do a lot of work.
But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility. You can’t solve an impossibility! What’s more, in that frame of mind any work done towards making AGI is risk-increasing. Thus people are actively persuaded NOT to work on artificial intelligence, and instead to work on fields of basic mathematics which are at this time too basic or speculative to say for certain whether they would have a part in making a safe or controllable AGI.
So smart people who could be contributing to an AGI project are now off fiddling with basic mathematics research on chalkboards instead. That is, in the view of someone who believes safe / controllable UFAI is a non-trivial but possible mechanism to accelerate the arrival of life-saving anti-aging technologies, a humanitarian disaster.
The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.
Agree.
I think that since many AI risk advocates have little or no experience in computer science and specifically AI research, they tend to anthropomorphize AI to some extent. They get that an AI could have goals different from human goals, but they seem to think that its intelligence would be more or less like human intelligence, only faster and with more memory. In particular they assume that an AI will easily develop a theory of mind and social intelligence from little human interaction.
But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility.
I think they used to claim that safe AGI was pretty much an impossibility unless they were the ones who built it, so gib monies plox! Anyway, it seems that in recent times they have taken a somewhat less heavy handed approach.