There is probably not enough computational power in the entire visible universe (assuming maximal theoretical efficiency) to power a reasonable AIXI-like algorithm. A few steps of combinatorial growth makes mere exponential growth look like standing very very still.
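To make that growth-rate claim concrete, here is a quick sketch (my own illustration, not anything from AIXI itself) comparing exponential growth 2^n with combinatorial growth n!:

```python
# Compare exponential growth (2**n) with combinatorial growth (n!).
# Even modest n makes the factorial dwarf the exponential.
from math import factorial

for n in (10, 20, 50):
    exponential = 2 ** n
    combinatorial = factorial(n)
    print(f"n={n}: 2^n = {exponential:.3e}, n! = {combinatorial:.3e}, "
          f"ratio = {combinatorial / exponential:.3e}")
```

At n = 50 the factorial already exceeds the exponential by a factor of roughly 10^49, which is the sense in which exponential growth "stands still" by comparison.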
Changing the topic slightly, I always interpreted the Gödel argument as saying there weren’t good reasons to expect faster algorithms—thus, no super-human AI.
As you implied, the argument that Gödelian issues prevent human-level intelligence is obviously disproved by the existence of actual humans.
Who would you re-interpret as making this argument?
It’s my own position—I’m not aware of anyone in the literature making this argument (I’m not exactly up on the literature).
Then why write “I...interpreted the Godel argument” when you were not interpreting others, and had in mind an argument that is unrelated to Gödel?