I think the linked tweet is possibly just misinterpreting what the authors meant by “transistor operations”? My reading is that “1000” binds to “operations”; the actual number of transistors in each operation is unspecified. That’s how they get the 10,000x number: if a CPU runs at 1 GHz and neurons run at 100 Hz, then even if it takes 1000 clock cycles to do the work of a neuron, the CPU can still do it 10,000x faster.
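To spell out the arithmetic (using the 1 GHz / 100 Hz / 1000-cycle numbers above, which are the only inputs here):

```python
# Back-of-the-envelope arithmetic behind the 10,000x figure.
cpu_clock_hz = 1e9              # 1 GHz CPU
neuron_rate_hz = 100            # ~100 Hz neuron firing rate
cycles_per_neuron_equiv = 1000  # assume 1000 clock cycles to do one neuron's worth of work

neuron_equivs_per_sec = cpu_clock_hz / cycles_per_neuron_equiv  # 1e6 per second
speedup = neuron_equivs_per_sec / neuron_rate_hz
print(speedup)  # 10000.0, i.e. the 10,000x serial-speed claim
```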
(IDK what the rationale was in the editorial process for using “transistor operations” instead of a more standard term like “clock cycles”, but a priori it seems defensible. Speculating, “transistors” was already introduced in the sentence immediately prior, so maybe the thinking was that the meaning and binding of “transistor operations” would be self-evident in context. Whereas if you use “clock cycles” you have to spend a sentence explaining what that means. So using “transistor operations” reduces the total number of new jargon-y / technical terms in the paragraph by one, and also saves a sentence of explanation.)
Anyway, depending on the architecture, precision, etc., a single floating-point multiplication can take around 8 clock cycles. So even if a single neuron spike is doing something complicated that requires several high-precision multiply + accumulate operations in serial to replicate, that can easily fit into 1000 clock cycles on a normal CPU, and far fewer if you use specialized hardware.
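As a quick sanity check on that (treating the ~8 cycles per multiply-accumulate as an assumption rather than a measured figure):

```python
# Rough sketch: how many *serial* multiply-accumulates fit in a 1000-cycle budget?
cycles_per_mac = 8   # assumed latency per multiply + accumulate
cycle_budget = 1000  # the "1000 transistor operations" budget

serial_macs = cycle_budget // cycles_per_mac
print(serial_macs)  # 125 -- plenty of headroom for "several" serial MACs
```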
As for the actual number of transistors needed to do the work of a neuron spike, it again depends on exactly what the neuron spike is doing and how much precision etc. you need to capture the actual work, but “billions” seems too high by a few OOM at least. Some reference points: a single NAND gate is 4 transistors, and a general-purpose 16-bit floating-point multiplier unit is ~5k NAND gates.
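Putting those reference points together (the “100 multipliers per spike” allowance below is my own arbitrary, generous assumption, just to show the order of magnitude):

```python
# Reference-point arithmetic (NAND and multiplier figures from above;
# the 100-multipliers-per-spike allowance is an assumption for illustration).
transistors_per_nand = 4
nand_gates_per_fp16_mul = 5_000

transistors_per_fp16_mul = transistors_per_nand * nand_gates_per_fp16_mul
print(transistors_per_fp16_mul)        # 20,000 transistors per 16-bit FP multiplier

print(100 * transistors_per_fp16_mul)  # 2,000,000 -- still a few OOM short of "billions"
```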
Hmm, I see it. I thought it was making a distinct argument from the one Ege was responding to here, but if you’re right it’s the same one.
Then the claim is that an AI run on some (potentially large) cluster of GPUs can think far faster than any human in serial speed. You do lose the rough equivalency between transistors and neurons: a GPU, which is roughly equal to a person in resource costs, happens to have about the same number of transistors as a human brain has neurons. It’s potentially a big deal that AI has a much faster maximum serial speed than humans, but it’s far from clear that such an AI can outwit human society.
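For what it’s worth, the rough numbers behind that equivalency look something like this (ballpark figures of my own, not anything from the original comment):

```python
# Ballpark comparison behind the "transistors ~ neurons" equivalency:
# a recent datacenter GPU has on the order of 8e10 transistors, and a human
# brain has on the order of 8.6e10 neurons (both rough, assumed figures).
gpu_transistors = 8.0e10
brain_neurons = 8.6e10
print(gpu_transistors / brain_neurons)  # ~0.93 -- same order of magnitude
```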