(In a perfect world The Singularity Is Far would’ve been a Robin Hanson post...)
When people talk about cognitive hierarchies and UTMs and how humans are the first example of an entity on the smarter side of the general intelligence Rubicon, it’s not obvious that someone like Eliezer would disagree. Or, to put it more plainly, I would not disagree, though with the caveat that the metaphysics of this abstraction called ‘computation’ still seem confusing unto me. Unfortunately, my weak impression is that the people who talk about such things assume these fringe Singularity people would disagree, because they hear pseudo-Vingean claims about unpredictability and the like. I’d imagine many of those pseudo-Vingean ‘predictions’ pattern-match to the idea everyone has when they’re eight years old of some kind of superconsciousness, the equivalent of universality on top of universality, a triumph over Goedel. They probably also pattern-match the claims to even sillier ideas.
Again, this is a guess, but it is non-negligibly likely that folk at least similar to Greg Egan have an implicit model of Singularitarian folk (whose diverse members and positions unfortunately get lumped together, with non-negligible bias) as having differing and very probably wrong intuitions about theoretical computer science, rather than differing but at least plausible beliefs about rates or bursts of technological progress, or about problem-solving ability as measured by biological subjective time. That said, if you asked Egan whether he explicitly thinks that Eliezer explicitly thinks that super-Turingness-like stuff is probable, I don’t think he’d say yes, so it seems probable that my model is at least somewhat wrong.
I think that Aaronson’s comments are correct, though the human-thought serial speed-up is perhaps a misleading example for this post, since the sort of singularity Eliezer et al. are interested in is not really about serial speed-ups themselves so much as using speed-ups to search for more speed-ups, among other things. Aaronson’s P versus NP example is an okay theoretical example of an insight that smart Singularitarians might describe as practically incompressible in hindsight, but still powerful enough to ‘change the nature of the optimization game’, as I think Eliezer put it.
That humans might in principle follow any argument a superintelligence could put forth, up to the Goedelian limit, is maybe true if you’re willing to really, really stretch the definition of human. But this implies next to nothing about whether hardware-enhanced humans or even ems can stay ahead of de novo Bayesian seed AIs, or even hacked-together but still non-biologically-originated AIs, whose architectures are wayyyyy more flexible and conducive to software improvements. I have a hard time coming up with a decent counterargument Greg Egan could make here, conditional on his accepting the practical possibility of engineering AGI. I’m probably missing nuances of his position. I wish he could have a structured discussion with Eliezer. (Bloggingheads is too much like a debate; no one goes in expecting to change their mind.)
I agree that the chimpanzee-to-human phase transition example is potentially misleading unless there’s something I’m missing. If you’ve seen it used in an SIAI paper in a way that doesn’t mention the possible narrow-to-general intelligence phase transition argument, please link to it.