How is that such a weak argument? I’m all for smarter algorithms—as opposed to just increasing raw computing power—but given the algorithms already in existence (e.g. AIXItl, among others), we’d strongly—and based on theoretical results—expect there to exist some hardware threshold that, once crossed, would empower even the current algorithms sufficiently for an AGI-like phenomenon to emerge.
Since we know that exponential growth is, well, quite fast, it seems a sensible conclusion to say “(If) Moore’s Law, (then eventually) AGI”, without even mandating more efficient programming. The alternative would be to dispute the established machine learning algorithms and theoretical models themselves. While the software side is the bottleneck, it is one that scales with computing power and can thus be compensated for.
Of course smarter algorithms would greatly lower the aforementioned threshold, but if (admittedly a big if) Moore’s Law were to hold true for a few more iterations, that might not be as relevant as we assume it to be.
The number of steps for current algorithms/agents to converge on an acceptable model of their environment may still be very large, but compared to future approaches, we’d expect that to be a difference in degree, not in kind. Nothing that some computronium shouldn’t be able to compensate for.
This may be important because as long as there’s any kind of consistent hardware improvement—not even exponential—that argument would establish that AGI is just a matter of time, not some obscure eventuality.
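If I recall Hutter’s result correctly (worth double-checking the exact form), AIXItl’s computation time is of order t·2^l per interaction cycle, where l bounds the length of the candidate programs and t their allowed runtime. The relevant point is that this is a fixed, if astronomical, cost per cycle, which is what makes framing it as a hardware question coherent in the first place.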
Moore’s Law is not enough to make AIXI-style brute force work. A few more orders of magnitude won’t beat combinatorial explosion.
Assuming the worst case on the algorithmic side, a standstill, the computational cost—even that of a combinatorial explosion—remains constant. The gap can only narrow. That makes it a question of how many doubling cycles it would take to close it. We’re not necessarily talking desktop computers here (disregarding their goal predictions).
Exponential growth with such a short doubling time, aimed at some unknown but fixed threshold, is enough to make any provably optimal approach work eventually. If it continues.
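To put the “how many doubling cycles” framing into a minimal sketch (the numbers below are made up purely for illustration, not estimates of any actual requirement):

```python
from math import log2

def years_until_threshold(required_ops, current_ops, doubling_years=1.5):
    """Years until a fixed computational cost becomes affordable,
    assuming hardware capacity doubles every `doubling_years` years."""
    doublings_needed = log2(required_ops / current_ops)
    return doublings_needed * doubling_years

# A hypothetical gap of 20 orders of magnitude closes in ~66 doublings,
# i.e. roughly a century at an 18-month doubling time.
print(years_until_threshold(required_ops=1e38, current_ops=1e18))
```

As long as the cost stays fixed and the doublings keep coming, the answer is always finite.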
There is probably not enough computational power in the entire visible universe (assuming maximal theoretical efficiency) to power a reasonable AIXI-like algorithm. A few steps of combinatorial growth makes mere exponential growth look like standing very very still.
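To get a rough sense of the scale here (using Lloyd’s often-quoted ~10^120 upper bound on the operations the visible universe could have performed so far, and treating an AIXI-style search as enumerating all programs up to l bits, which is a simplification):

```python
from math import log2

UNIVERSE_OPS = 1e120  # Lloyd's oft-cited rough upper bound on ops performed by the visible universe

def max_program_length_bits(budget_ops):
    """Largest l such that 2**l candidate programs fit within the budget."""
    return int(log2(budget_ops))

print(max_program_length_bits(UNIVERSE_OPS))           # ~398 bits, i.e. about 50 bytes of program
print(max_program_length_bits(UNIVERSE_OPS * 2**100))  # 100 further doublings buy only ~100 more bits
```

Each hardware doubling buys one extra bit of searchable program length; the very next bit doubles the search space right back.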
Changing the topic slightly, I always interpreted the Gödel argument as saying there weren’t good reasons to expect faster algorithms—thus, no super-human AI.
As you implied, the argument that Gödelian issues prevent human-level intelligence is obviously disproved by the existence of actual humans.
Who would you re-interpret as making this argument?
It’s my own position—I’m not aware of anyone in the literature making this argument (I’m not exactly up on the literature).
Then why write “I...interpreted the Gödel argument” when you were not interpreting others, and had in mind an argument that is unrelated to Gödel?
And there you’ve given a better theory than most AI experts have. It’s not “Moore’s Law + reasonable explanation, hence AI” that’s weak, it’s just Moore’s Law on its own...