Sweet. Thanks for the thoughtful reply! Seems like we mostly agree.
I don’t have a good source on data efficiency. It’s tagged in my brain as a combination of “a commonly believed thing” and “somewhat apparent from how many epochs of training on a statement it takes to internalize it, combined with how weak LLMs are at in-context learning for things like novel board games.” But neither of those is very solid, and I would not be that surprised to learn that humans are not more data efficient than large transformers capable of similar levels of transfer learning, or something. idk.
So it sounds like your issue is not with any of the facts (transistor speeds, neuron speeds, AIs being faster than humans), but rather with the inference: you think that comparing clock speeds to how many times a neuron can spike in a second is not a valid way to reason about whether AI will think faster than humans?
I’m curious what sort of argument you would make to a general audience to convey the idea that AIs will be able to think much faster than humans. Like, what do you think the valid version of the argument looks like?
I actually now think the direct argument given in IABIED was directionally correct, and that I was confused in my objection, as Max H explains.
It’s fine to use the argument now.