I don’t think anyone at MIRI arrived at worries like ‘AI might be able to deceive its programmers’ or ‘AI might be able to design powerful pathogens’ by staring at the equation for AIXI or AIXItl.
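For concreteness, the equation in question is Hutter’s one-line expectimax formula for AIXI (sketched here from memory, so treat the details as approximate):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \big[\, r_k + \cdots + r_m \,\big]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here the $a$, $o$, $r$ are actions, observations, and rewards, $m$ is the horizon, $U$ is a universal monotone Turing machine, and $\ell(q)$ is the length of program $q$. Nothing in it mentions programmers or pathogens; those worries came from elsewhere.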
In his quantum physics sequence, where he constantly talks (rants, actually) about Solomonoff Induction, Yudkowsky writes:
A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
Anna Salamon also mentions AIXI when discussing the feasibility of super-intelligence.
Mind you, I’m not saying that AIXI is not an interesting and possibly theoretically useful model; my objection is that MIRI people seem to have used it to set a reference class for their intuitions about super-intelligence.
Rates of cultural and technological progress in human societies are a better (though still very imperfect) source of data than AIXI is about how much improvement intelligence makes possible.
Extrapolation is always an epistemically questionable endeavor. Intelligence is intrinsically limited by how predictable the world is. For any computational task, there are limits on how efficient (in time, space, energy, and so on) an algorithm can be. Hardware resources also have physical limits.
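To make the algorithmic point concrete with a standard example: no comparison-based sorting algorithm, however ingeniously designed, can beat the information-theoretic lower bound on worst-case comparisons,

```latex
\#\text{comparisons} \;\ge\; \lceil \log_2(n!) \rceil \;=\; n \log_2 n - O(n) \;=\; \Omega(n \log n),
```

and no amount of intelligence gets around a counting argument like that.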
This doesn’t mean that, given our current understanding, we can claim that human-level intelligence is an upper bound; that would most likely be false. But there is no particular reason to assume that the physically attainable bound will be enormously higher than human level. The more extreme the scenario, the less probability we should assign to it, with our credence plausibly falling off like a light-tailed distribution.
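As a toy illustration of what a light-tailed prior implies here (the exponential and Pareto distributions below, and the ‘multiples of human level’ scale, are invented purely for illustration; there is no agreed scalar measure of intelligence):

```python
import math

# Let X be the physically attainable intelligence ceiling, in made-up
# units where human level = 1. Compare tail credence P(X > x) under a
# light-tailed prior vs. a heavy-tailed one.

def light_tail_survival(x, scale=1.0):
    """P(X > x) under an exponential (light-tailed) prior."""
    return math.exp(-x / scale)

def heavy_tail_survival(x, alpha=1.5):
    """P(X > x) under a Pareto (heavy-tailed) prior with x_min = 1."""
    return x ** -alpha if x >= 1 else 1.0

for ceiling in [2, 10, 100, 1000]:
    print(f"{ceiling:>5}x human:  light-tailed {light_tail_survival(ceiling):.2e}"
          f"   heavy-tailed {heavy_tail_survival(ceiling):.2e}")
```

Under the light-tailed prior, credence in a 1000x scenario is effectively zero; a heavy-tailed prior would keep it non-negligible, which is why the choice of tail matters so much here.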
This kind of extrapolation should be possible to some extent, especially when it comes to progress in mathematics.
Ok, but my point is that it has not been established that progress in mathematics will automatically grant an AI “superpowers” in the physical world. And I’d even say that superpowers achieved by raw cognitive power alone are questionable. Theorem proving can be sped up, but there is more to math than theorem proving.
I think Eliezer mostly just used “Bayesian superintelligence” as a synonym for “superintelligence.” The “Bayesian” is there to emphasize the fact that he has Bayes-optimality as a background idea in his model of what-makes-cognition-work and what-makes-some-cognition-work-better-than-other-kinds, but Eliezer thought AI could take over the world long before he knew about AIXI or thought Bayesian models of cognition were important.
I don’t know as much about Anna’s views. Maybe she does assign more weight to AIXI as a source of data; the example you cited supports that. Though since she immediately follows up her AIXI example with “AIXI is a theoretical toy. How plausible are smarter systems in the real world?” and proceeds to cite some of the examples I mentioned above, I’m going to guess she isn’t getting most of her intuitions about superintelligence from AIXI either.
there is no particular reason to assume that the physically attainable bound will be enormously higher than human level. The more extreme the scenario, the less probability we should assign to it, with our credence plausibly falling off like a light-tailed distribution.
I think our disagreement is about what counts as “extreme” or “extraordinary”, in the “extraordinary claims require extraordinary evidence” sense.
If I’m understanding your perspective, you think we should assume at the outset that humans are about halfway between ‘minimal intelligence’ and ‘maximal intelligence’—a very uninformed prior—and we should then update only very weakly in the direction of ‘humans are closer to the minimum than to the maximum’. Claiming that there’s plenty of room above us is an ‘extreme’ claim relative to that uninformed prior, so the epistemically modest thing to do is to stick pretty close to the assumption that humans are ‘average’, i.e. that the range of intelligence exhibited in humans with different cognitive abilities and disabilities represents a non-tiny portion of the range of physically possible intelligence.
My view is that we should already have updated strongly away from the ‘humans are average’ prior as soon as we acquired information about how humans arose—through evolution, a process that has computational resources and perseverance but no engineering ingenuity, constructing our current advantages out of chimpanzees’ over the course of just 250,000 generations. At that point we are no longer a randomly selected mind; our prior is swamped by all the rich information we have about ourselves as an evolved species, enhanced by culture but not by any deliberate genetic engineering or fine-grained neurosurgery. We have no more reason in the year 2015 to think that we represent 1/10 (or for that matter 1/10,000) of what is cognitively possible than a person in the year 1850 would have had reason to think that the fastest possible flying machine was only 10x as fast as the fastest bird, or that the most powerful possible bomb was only 10x as strong as the most powerful existing bomb.
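To put rough numbers on the flying-machine analogy (the speed figures below are approximate public ones, used only to illustrate the shape of the argument):

```python
# Approximate, publicly reported top speeds in km/h; illustrative only.
FASTEST_BIRD = 390  # peregrine falcon in a dive, roughly 389 km/h

machines = {
    "SR-71 Blackbird (1960s jet)": 3530,
    "X-15 rocket plane (1967)": 7270,
    "low Earth orbital velocity": 28000,
}

for name, kmh in machines.items():
    print(f"{name}: about {kmh / FASTEST_BIRD:.0f}x the fastest bird")
```

A ‘10x the fastest bird’ ceiling was left behind within a single lifetime of the Wright brothers, and orbital velocity overshoots it by nearly another order of magnitude.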
Our knowledge of physics and of present technological capabilities, though still extremely flawed as data to generalize from, is rich enough to strongly shift a flat prior (informed by nothing more than ‘this bird / bomb / brain exists and the rest of the universe is unknown to me’). So although we should not be confident of any specific prediction, ‘humans are near the practical intelligence maximum’ is the extreme view that requires a lot of evidence to move toward. Superintelligence is unusual relative to what we can directly observe, not ‘extreme’ in an evidential sense.