It is not clear why humans are so bad at math. It is almost certainly unrelated to the fact that brains compute with neurons rather than transistors.
Why do you think that? Adding numbers is hard for RNNs and is a standard benchmark in recent papers investigating various kinds of differentiable memory and attention mechanisms, precisely because RNNs do so badly at it (like humans).
It’s a bit hard for RNNs to learn, but they can end up much better than humans. (Also, the reason it is being used as a challenge is because it is a bit tricky but not very tricky.)
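For context, the addition benchmark referenced here is usually posed as a character-level sequence task: the network reads a string like "123+45" one character at a time and must emit the digits of the sum. Below is a minimal, hypothetical sketch of just the data-generation and encoding side of that setup (operand width, padding scheme, and vocabulary are illustrative choices, not taken from any specific paper):

```python
import random

DIGITS = 3
CHARS = "0123456789+ "  # task vocabulary: digits, plus sign, and space padding
char_to_idx = {c: i for i, c in enumerate(CHARS)}

def make_example(max_digits=DIGITS):
    """Generate one addition problem as (padded query string, padded answer string)."""
    a = random.randint(0, 10 ** max_digits - 1)
    b = random.randint(0, 10 ** max_digits - 1)
    query = f"{a}+{b}".ljust(2 * max_digits + 1)  # fixed-length input sequence
    answer = str(a + b).ljust(max_digits + 1)     # the sum has at most one extra digit
    return query, answer

def encode(s):
    """One-hot encode a padded string: one vector per character/timestep."""
    return [[1 if j == char_to_idx[c] else 0 for j in range(len(CHARS))]
            for c in s]

query, answer = make_example()
x = encode(query)  # an RNN would consume these one-hot rows step by step
```

An RNN trained on examples like these has to learn digit alignment and carrying purely from the sequence, which is why the task is tricky at first but, as noted above, learnable to well beyond human speed and reliability.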
It is probably also easy to “teach” humans to be much better at math than we currently are (over evolutionary time), there’s just no pressure for math performance. That seems like the most likely difference between humans and computers.
It’s a bit hard for RNNs to learn, but they can end up much better than humans.
After some engineering effort. Researchers didn’t just throw a random RNN at the problem in 1990 and find that it worked as well as transistors at arithmetic… Plus, if you want to pick extremes (the best RNNs now), are the best RNNs better at adding or multiplying extremely large numbers than human savants?
This raises a really interesting point that I wanted to include in the top-level post, but couldn’t find a place for. It seems plausible, even likely, that human savants are implementing arithmetic using different, much more efficient algorithms than those used by neurotypical humans. This was actually one of the examples I considered in support of the argument that neurons can’t be the underlying reason humans struggle so much with math.
It has only been in recent generations that arithmetic involving numbers of more than 2 or 3 digits has mattered to people’s well-being and survival. I doubt our brains are terribly well wired up for large numbers.