It took about 35,000 processor cores to render Avatar. If we assume that a Six-Core Opteron 2400 (2009, the same year as Avatar) has roughly 10^9 transistors, then we have (35,000/6)*10^9 ≈ 5.83*10^12 transistors.
The primary visual cortex has about 280 million neurons, and a typical neuron has 1,000 to 10,000 synapses. If we assume 10,000 synapses per neuron, that makes 2.8*10^8 * 10^4 = 2.8*10^12 synapses.
By this calculation it takes 5.83*10^12 transistors to render Avatar and 2.8*10^12 synapses to simulate something similar on the fly, which is roughly the same amount.
Since the clock rate of a processor is about 10^9 Hz and the firing rate of a neuron is about 200 Hz, does this mean that the algorithms our brain uses are very roughly (10^9)/200 = 5*10^6 times more efficient?
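The back-of-envelope arithmetic above can be written out explicitly. Every figure here is the rough estimate quoted in the question, not a measured value:

```python
# All numbers are the rough estimates from the question, not measurements.
cores = 35_000                # render-farm cores used for Avatar
cores_per_chip = 6            # Six-Core Opteron 2400 (2009)
transistors_per_chip = 1e9    # rough transistor count for that chip

transistors = cores / cores_per_chip * transistors_per_chip
print(f"transistors: {transistors:.2e}")   # ~5.83e12

neurons_v1 = 2.8e8            # neurons in primary visual cortex
synapses_per_neuron = 1e4     # upper end of the 1,000-10,000 range
synapses = neurons_v1 * synapses_per_neuron
print(f"synapses:    {synapses:.1e}")      # 2.8e12

clock_ratio = 1e9 / 200       # processor clock vs. neuron firing rate
print(f"clock ratio: {clock_ratio:.0e}")   # 5e6
```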
I don’t think this is a valid comparison; we have no idea whether rendering Avatar is similar to processing visual information.
Also, without mentioning the rate at which those processors rendered Avatar, the number of processors means much less. You could probably do it with a single core running 35,000 times slower.
Some questions we would need to answer, then:
1) What is the effective level of visual precision computed by those processors for Avatar, versus the level of detail processed in the human visual cortex?
2) Is the synapse the equivalent of a transistor if we are to estimate the respective computing power of a brain and a computer chip? (That is: is there more hidden computation going on at other levels? Since synapses use different neurotransmitters, does that add computational capability? Are there processes within neurons that do computational work too? Are other cells, such as glial cells, performing computationally relevant operations as well?)
There are several issues here.
First, just because ~5*10^12 transistors were used to render Avatar (slower than real-time, by the way) does not mean that it minimally requires ~5*10^12 transistors to render Avatar.
For example, I have done some prototyping for fast, high-quality real-time volumetric rendering, and I’m pretty confident that the Avatar scenes (after appropriate database conversion) could be rendered in real time on a single modern GPU using fast voxel cone tracing algorithms. That entails only ~5*10^9 transistors, but we should also mention storage, because these techniques would require many gigabytes of off-chip storage for the scene data (stored on a flash drive, for example).
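As a sanity check on "many gigabytes", here is a hypothetical storage estimate for a sparse voxel scene representation. The voxel count and per-voxel size are illustrative assumptions of mine, not figures from the actual Avatar assets:

```python
# Hypothetical numbers, chosen only to illustrate the scale of
# off-chip storage such a voxel representation might need.
occupied_voxels = 4e9     # assumed occupied voxels in a large film scene
bytes_per_voxel = 8       # assumed: compressed color + normal + opacity

storage_gb = occupied_voxels * bytes_per_voxel / 1e9
print(f"~{storage_gb:.0f} GB of off-chip storage")   # ~32 GB
```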
Second, rendering and visual recognition are probably of roughly similar complexity, but it would be more accurate to do an apples-to-apples comparison of human V1 vs. a fast algorithmic equivalent of V1.
Current published GPU neuron simulation techniques can handle a few million neurons per GPU, which would require about 100 GPUs to simulate the ~2.8*10^8 neurons of human V1.
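That scaling estimate works out as follows, taking "a few million" as an assumed ~3 million neurons per GPU:

```python
neurons_v1 = 2.8e8        # primary visual cortex neuron count, from above
neurons_per_gpu = 3e6     # assumed midpoint of "a few million"

gpus = neurons_v1 / neurons_per_gpu
print(f"GPUs needed: ~{round(gpus)}")   # ~93, i.e. on the order of 100
```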
Once again, I don’t think current techniques are near the lower bound, and I have some notions of how V1-equivalent work could be done on around one modern GPU, but this is more speculative.