But it sure looks like tractable constant time token predictors already capture a bunch of what we often call intelligence, even when those same systems can’t divide!
This is crazy! I’m raising my eyebrows right now to emphasize it! Consider also doing so! This is weird enough to warrant it!
Why is this crazy? Humans can’t do integer division in one step either.
And no finite system could, for arbitrary integers. So why should we find this surprising at all?
Of course, naively, if you hadn’t really considered it, it might be surprising. But in hindsight shouldn’t we just be saying, “Oh, yeah, that makes sense”?
A constant time architecture failing to divide arbitrary integers in one step isn’t surprising at all. The surprising part is being able to do all the other things with the same architecture. Those other things are apparently computationally simple.
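To make the scaling point concrete, here's an illustrative sketch (mine, not from the original post): schoolbook long division does one pass per digit of the dividend, so the step count grows with input size, and no fixed step budget covers all integers.

```python
# Hypothetical illustration: long division takes one step per decimal
# digit of the dividend. The step count therefore scales with input
# length, so no constant-time procedure handles arbitrary integers.

def long_division(dividend: int, divisor: int) -> tuple[int, int, int]:
    """Return (quotient, remainder, steps), one step per digit."""
    quotient, remainder, steps = 0, 0, 0
    for digit in str(dividend):            # one iteration per digit
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder %= divisor
        steps += 1
    return quotient, remainder, steps

print(long_division(91, 7))                # (13, 0, 2): 2 digits, 2 steps
print(long_division(10**40 + 3, 7))        # 41 digits, 41 steps
```

The same input that a loop handles in 41 cheap steps is exactly what a fixed-depth, single-pass architecture has no room for.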
Even with the benefit of hindsight, I don’t look back at my 2015 self and think, “how silly I was being! Of course this was possible!”
2015-me couldn’t just look at humans and conclude that constant time algorithms would include a large chunk of human intuition or reasoning. It’s true that humans tend to suck at arbitrary arithmetic, but we can’t conclude much from that. Human brains aren’t constant time; they’re giant, messy, sometimes-cyclic graphs whose behavior over time is a critical feature of their computation. Even when the brain is working on a problem that could obviously be solved in constant time, the implementation the brain uses isn’t the one a maximally simple sequential constant time program would use (even if you could establish a mapping between the two).
And then there’s savants. Clearly, the brain’s architecture can express various forms of rapid non-constant time calculation. Most of us just don’t work that way by default, and most of the rest of us don’t practice it.
Even 2005-me thought intelligence was much easier to achieve than the people claiming “AI is impossible!” believed, but I don’t see how I could have strongly believed back then that it was going to be this easy.