This claim is different from the one the authors actually make, which is that the brain is doing 1e20 FLOP/s of useful computation.
Is it? I suppose they don’t say so explicitly, but it sounds like they’re using “2020-equivalent” FLOPs (or whatever it is Cotra and Carlsmith use), which have room for “algorithmic progress” baked in.
Perhaps you think the brain has massive architectural or algorithmic advantages over contemporary neural networks, but if so, that position has to be defended on very different grounds than “it would take X amount of FLOP/s to simulate one neuron at high physical fidelity”.
I may be reading the essay wrong, but I think this is the claim being made and defended. “Simulating” a neuron at high physical fidelity is difficult but beside the point. Indeed, in Beniaguev et al., running a DNN on a GPU that implements the computation a neuron is doing (four binary inputs, one output) is a ~2000x speedup over solving the PDEs directly (a combination of compression and hardware/software). They also find it difficult to make the neural network smaller or shorter-memory, suggesting it’s hard to implement the same computation more efficiently with current methods.
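To put rough numbers on that, here’s a minimal consistency check of what the essay’s estimate implies per neuron under this reading. The neuron count is the commonly cited ~86 billion; the 1 ms step size is my assumption, not a figure from the paper:

```python
# Consistency check: if each neuron's input/output behavior is replaced
# by a small surrogate DNN (the Beniaguev et al. setup), what per-neuron
# cost does a 1e20-1e21 FLOP/s whole-brain estimate imply?

NEURONS = 8.6e10           # approximate human neuron count
STEPS_PER_SEC = 1e3        # assumed surrogate resolution: one step per ms

for brain_total in (1e20, 1e21):               # the essay's estimated range
    per_neuron = brain_total / NEURONS         # FLOP/s per surrogate network
    per_step = per_neuron / STEPS_PER_SEC      # FLOP per simulated millisecond
    print(f"{brain_total:.0e} total -> {per_neuron:.1e} FLOP/s per neuron, "
          f"{per_step:.1e} FLOP per 1 ms step")
```

That works out to roughly 1e6–1e7 FLOP per neuron per simulated millisecond, the cost of a quite small network, which at least seems more consistent with a learned surrogate of the neuron’s input/output function than with detailed physical simulation.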
I think you’re just reading the essay wrong. In the “executive summary” section, they explicitly state that
Our best anchor for how much compute an AGI needs is the human brain, which we estimate to perform 1e20–1e21 FLOPS.
and
In addition, we estimate that today’s computer hardware is ~5 orders of magnitude less cost efficient and energy efficient than brains.
I don’t know how you read those claims and arrived at your interpretation, nor how the evidence they provide could support it. It would also be a strange omission for them not to mention the “effective” part of “effective FLOP” explicitly if that’s actually what they meant.
Thanks, I see. I agree that a lot of confusion could be avoided with clearer language, but at least I think they’re not making as simple an error as you describe in the root comment. Ted does say in the EA Forum thread that they don’t believe brains operate at the Landauer limit, but I’ll let him chime in here if he likes.
I think the “effective FLOP” concept is very muddy, but I’m even less sure what it would mean to describe what the brain is doing in “absolute” FLOPs instead. Meanwhile, the model they’re using gives a relatively well-defined equivalence between the logical function of the neuron and modern methods running on a modern GPU.
The statement about cost and energy efficiency, as they elaborate in the essay body, is about the cost of getting human-equivalent task performance relative to paying a human worker $25/hour, not a claim that the brain uses five orders of magnitude less energy per FLOP of any kind. Closing that five-order-of-magnitude gap could come either from doing less computation than the logical-equivalent neural network or from decreasing the cost of computation.
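To see how those two levers interact, here’s a back-of-envelope sketch. All hardware figures are my illustrative assumptions, not numbers from the essay:

```python
import math

# Sketch of the "~5 orders of magnitude" gap, read the way I read it:
# matching a $25/hour human worker, not energy per FLOP.

BRAIN_FLOPS = 1e20      # essay's lower brain-equivalent estimate, FLOP/s
GPU_FLOPS   = 1e15      # assumed per-GPU throughput (roughly H100-class, dense FP16)
GPU_RENT    = 2.0       # assumed rental price, $/GPU-hour
GPU_WATTS   = 700.0     # assumed per-GPU power draw
HUMAN_WAGE  = 25.0      # the essay's benchmark, $/hour
BRAIN_WATTS = 20.0      # commonly cited brain power budget

gpus = BRAIN_FLOPS / GPU_FLOPS                  # GPUs needed to match the estimate
cost_gap   = gpus * GPU_RENT  / HUMAN_WAGE      # hourly cost vs. the human wage
energy_gap = gpus * GPU_WATTS / BRAIN_WATTS     # power draw vs. the brain's budget

print(f"{gpus:.0e} GPUs: cost gap ~10^{math.log10(cost_gap):.1f}, "
      f"energy gap ~10^{math.log10(energy_gap):.1f}")
```

Under these assumptions the cost gap alone is about four orders of magnitude and the energy gap about six and a half, which brackets the essay’s ~5; either doing less computation than the logical-equivalent network or cheaper computation narrows both.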