My main issue with the source text is that it ignores what is possibly the greater bottleneck in processing speed: the time it takes to move information from one area to another. (If my model is right, one of the big advantages of a MoE architecture is that it reduces how much weight data gets thrashed across the bus to and from the GPU, which can be a major bottleneck.) However, on this front I think nerves are still clearly inferior to wires? Even myelinated neurons have a typical conduction speed of only about 100 m/s, while signals propagate through wires at >50% the speed of light.
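To put rough numbers on that latency gap, here is an illustrative back-of-the-envelope comparison for a 10 cm signal path; the path length and speeds are round assumptions taken from the figures above, not measurements:

```python
# Illustrative latency comparison for a 10 cm signal path.
# All values are round, assumed figures for the sake of the comparison.
distance_m = 0.1                  # assumed path length: 10 cm
neuron_speed = 100.0              # myelinated axon conduction, ~100 m/s
wire_speed = 0.5 * 3.0e8          # signal in a wire, ~50% of light speed

t_neuron = distance_m / neuron_speed    # ~1 millisecond
t_wire = distance_m / wire_speed        # ~0.67 nanoseconds

print(f"neuron: {t_neuron:.1e} s, wire: {t_wire:.1e} s, "
      f"ratio: {t_neuron / t_wire:.0f}x")
```

On these assumptions the wire wins by roughly six orders of magnitude, which is the sense in which nerves look clearly inferior to wires on latency.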
Good point actually, and yeah the ability to move information from one area to the other much faster than brains do is arguably why NNs make different tradeoffs than human brains.
I certainly agree that if we’re trying to evaluate power we need to consider throughput and total computation. Suppose that a synapse is not a simple numerical weight, and we instead needed to treat each dendritic neurotransmitter gate as a computational unit. That would force us to use many more FLOPs to model a synapse. But would it change the maximum speed? I agree that on a machine of a given size, if you have twice as many floating point operations to do, it will take twice as much time to get through them all. But in the limit where we are not forced to do parallelizable computations serially, I expect most of the arguments about computational richness are irrelevant?
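The parallelism point can be made concrete with a toy throughput model; the FLOP counts and lane counts below are arbitrary assumptions, purely for illustration:

```python
# Toy throughput model: wall-clock time for a fixed pile of FLOPs.
# All numbers are arbitrary assumptions, chosen only for illustration.

def wall_clock_seconds(total_flops, lanes, flops_per_lane_per_sec):
    """Time to finish if the work splits evenly across parallel lanes."""
    return total_flops / (lanes * flops_per_lane_per_sec)

base = wall_clock_seconds(1e12, lanes=1e4, flops_per_lane_per_sec=1e9)
# Doubling the work on the same machine doubles the time...
doubled_work = wall_clock_seconds(2e12, lanes=1e4, flops_per_lane_per_sec=1e9)
# ...but if the extra work is fully parallelizable and lanes scale to match,
# wall-clock time is unchanged: extra richness added no serial depth.
doubled_both = wall_clock_seconds(2e12, lanes=2e4, flops_per_lane_per_sec=1e9)

print(base, doubled_work, doubled_both)  # 0.1 0.2 0.1
```

So extra per-synapse computation only slows the maximum speed insofar as it adds serial depth, not merely total FLOPs.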
For what it’s worth, I wasn’t depending on the premise that a synapse is computationally more powerful than an artificial neuron.
More context: I do think that the human brain is way more powerful (and WAY more efficient) than any current AI system. The extremely crude BOTEC of comparing weights and neocortex synapses says there’s something like a 100x difference, and my guess is that the brain is doing significantly fancier things than a modern transformer, algorithmically.
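For what it’s worth, the crude BOTEC mentioned above looks something like this; both counts are rough, order-of-magnitude figures assumed for illustration, not precise estimates:

```python
# Crude scale comparison: neocortex synapses vs. model weights.
# Both numbers are rough order-of-magnitude assumptions.
neocortex_synapses = 1.5e14   # often-cited ballpark for the human neocortex
model_parameters = 1.5e12     # assumed parameter count for a large modern model

ratio = neocortex_synapses / model_parameters
print(f"synapse-to-weight ratio: ~{ratio:.0f}x")  # ~100x
```

This comparison treats one synapse as roughly comparable to one weight, which is exactly the crude part of the BOTEC.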
I actually agree with this take, but critically not in the domain of allowing AIs to think faster, which was my original objection.
@Alexander Gietelink Oldenziel and @S. Alex Bradt and @Max Harms: The thing I was talking about when I agreed with the claim that the brain is more powerful and doing fancier things is basically the fact that the brain always learns and thinks, called continual learning or continual thinking (there is no knowledge cutoff for brains the way there is for current LLMs), as well as better long-term memory/keeping things in context.
I do agree that in general, human brains aren’t too special algorithmically.
And of course, training/learning speed may be much more relevant than processing speed, and AFAIK humans are just wildly more data efficient.
Do we actually have a source for this, or is this just a commonly believed fact about AIs? I’m getting worried that this claim isn’t actually supported by much evidence and is instead a social belief around AIs due to our previous prediction errors.
I do think AIs can run quite a bit faster than humans, I’m just making the claim that the transistor argument is locally invalid.
Edit: @Max Harms I no longer endorse this objection. I now think my claim that it was utterly false that AI thinking speeds would increase drastically was itself incorrect, and Max H explains why.
Sweet. Thanks for the thoughtful reply! Seems like we mostly agree.
I don’t have a good source on data efficiency. It’s tagged in my brain as a combination of “a commonly believed thing” and “somewhat apparent in how many epochs of training on a statement it takes to internalize it, combined with how weak LLMs are at in-context learning for things like novel board games.” But neither of those is very solid, and I would not be that surprised to learn that humans are not more data efficient than large transformers that can do similar levels of transfer learning or something. idk.
So it sounds like your issue is not with any of the facts (transistor speeds, neuron speeds, AIs being faster than humans), but rather that you think comparing clock speeds with how many times a neuron can spike in a second is not a valid way to reason about whether AI will think faster than humans?
I’m curious what sort of argument you would make to a general audience to convey the idea that AIs will be able to think much faster than humans. Like, what do you think the valid version of the argument looks like?
I actually now think the direct argument given in IABIED was just directionally correct, and I was being confused in my objection, which Max H explains.
My response.
Here are some links supporting the continual learning and long-term memory points above:
lc on why the current lack of long-term memory creates problems, especially for benchmarking
Dwarkesh Patel and Gwern on continual learning/thinking.
It’s fine to use the argument now.