(Reaction to the first sentence: “Is this going to be an argument that would imply that humans can’t improve their own intelligence?”)
Yeah, his first wrong statement in the argument is “a more intelligent program p2 necessarily has more complexity than a less intelligent p1”. I would use an example along the lines of “p1 has a hundred data points about the path of a ball thrown over the surface of the Moon, and uses linear interpolation; p2 describes that path using a parabola defined by the initial position and velocity of the projectile and the gravitational pull at the surface of the Moon”. Or “rigid projectiles A and B will collide in a vacuum, and the task is to predict their paths; p1 has data down to the atom about projectile A, and no data at all about projectile B; p2 has the mass, position, and velocity of both projectiles”. Or, for that matter, “p1 has several megabytes of incorrect data which it incorporates into its predictions”.
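The ball-on-the-Moon example can be sketched concretely. This is a toy illustration, not anything from the original argument: the gravity constant, initial velocity, and sample spacing are all made-up numbers. The "large" program p1 memorizes 100 data points and linearly interpolates, while the "small" program p2 is just the parabola; despite its far shorter description, p2 predicts better outside the sampled range.

```python
G_MOON = 1.62   # lunar surface gravity, m/s^2 (illustrative constant)
V0 = 20.0       # assumed initial vertical velocity, m/s

def true_height(t):
    """Ground truth: height of the projectile at time t."""
    return V0 * t - 0.5 * G_MOON * t**2

# p1: a big lookup table of 100 samples on t in [0, 4.95],
# plus linear interpolation -- lots of stored data, crude model.
STEP = 0.05
SAMPLES = [(i * STEP, true_height(i * STEP)) for i in range(100)]

def p1(t):
    # Interpolate between the nearest stored samples; beyond the
    # table it can only extend the last straight segment.
    if t <= SAMPLES[0][0]:
        lo, hi = SAMPLES[0], SAMPLES[1]
    elif t >= SAMPLES[-1][0]:
        lo, hi = SAMPLES[-2], SAMPLES[-1]
    else:
        idx = int(t / STEP)
        lo, hi = SAMPLES[idx], SAMPLES[idx + 1]
    slope = (hi[1] - lo[1]) / (hi[0] - lo[0])
    return lo[1] + slope * (t - lo[0])

# p2: the parabola itself -- a much shorter description.
def p2(t):
    return V0 * t - 0.5 * G_MOON * t**2

# Inside the sampled range both do fine; well outside it,
# the short program wins outright.
t_far = 15.0
err1 = abs(p1(t_far) - true_height(t_far))
err2 = abs(p2(t_far) - true_height(t_far))
print(f"p1 error at t={t_far}: {err1:.2f} m")  # tens of meters off
print(f"p2 error at t={t_far}: {err2:.2f} m")  # exact
```

So "predicts better" and "has a longer minimal description" come apart immediately: the better predictor here is the one with the smaller Kolmogorov complexity.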
It seems he may have confused himself into assuming that p1 is the most intelligent possible program of Kolmogorov complexity k1. (He later says “… then we have a contradiction since k1 was supposed to be the minimal expression of intelligence at that level”. Wrong; k1 was supposed to be the minimal expression of that particular intelligence p1, not the minimal expression of some set of possible intelligences of that level.) Only under that assumption would it follow that any more intelligent (i.e. better-predicting, by his definition) program must be more complex.