There are several issues here.
First, just because ~5x10^12 transistors were used to render Avatar (slower than real-time, btw) does not mean that rendering Avatar minimally requires ~5x10^12 transistors.
For example, I have done some prototyping for fast, high-quality real-time volumetric rendering, and I'm pretty confident that the Avatar scenes (after appropriate database conversion) could be rendered in real-time on a single modern GPU using fast voxel cone tracing algorithms. That entails only ~5x10^9 transistors, though storage is worth mentioning too: these techniques would require many gigabytes of off-chip storage for the scene data (on a flash drive, for example).
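To make the core idea concrete, here is a minimal sketch of voxel cone tracing: prefilter the scene into a mipmapped density volume, then march a cone through it, sampling coarser mip levels as the cone footprint widens and compositing front-to-back. Everything here (grid sizes, step sizes, function names) is illustrative, not from any actual Avatar pipeline:

```python
import numpy as np

def build_mips(density, levels):
    """Prefilter a cubic density volume into a mip chain via 2x average-pooling."""
    mips = [density]
    for _ in range(levels - 1):
        d = mips[-1]
        n = d.shape[0] // 2
        mips.append(d.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return mips

def sample(mips, p, level):
    """Nearest-neighbor sample of the mip chain at point p in [0,1)^3."""
    d = mips[min(int(level), len(mips) - 1)]
    i = np.clip((p * d.shape[0]).astype(int), 0, d.shape[0] - 1)
    return d[tuple(i)]

def cone_trace(mips, origin, direction, half_angle, step=0.01, max_t=1.0):
    """March one cone: widen the footprint with distance, sample the matching
    (coarser) mip level, and composite opacity front-to-back until saturated."""
    direction = direction / np.linalg.norm(direction)
    base = mips[0].shape[0]
    occlusion, t = 0.0, step
    while t < max_t and occlusion < 0.99:
        radius = t * np.tan(half_angle)           # cone footprint at distance t
        level = np.log2(max(radius * base, 1.0))  # wider cone -> coarser mip
        a = sample(mips, origin + t * direction, level) * step
        occlusion += (1.0 - occlusion) * a        # front-to-back compositing
        t += step
    return occlusion

rng = np.random.default_rng(0)
mips = build_mips(rng.random((64, 64, 64)) * 0.5, levels=6)
print(cone_trace(mips, np.array([0.5, 0.5, 0.05]),
                 np.array([0.0, 0.0, 1.0]), half_angle=0.1))
```

The key cost-saving trick is in the `level` calculation: a handful of progressively coarser samples replaces the thousands of rays ordinary ray tracing would need to integrate over the same cone of directions.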
Second, rendering and visual recognition are probably of roughly similar complexity, but it would be more accurate to do an apples-to-apples comparison of human V1 vs. a fast algorithmic equivalent of V1.
Current published GPU neuron simulation techniques can handle a few million neurons per GPU; since human V1 contains on the order of a few hundred million neurons, that works out to about 100 GPUs to simulate it.
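The arithmetic behind that estimate, using round illustrative numbers (the V1 neuron count is an order-of-magnitude assumption, not a measured figure from the cited techniques):

```python
# Back-of-envelope GPU count for simulating human V1.
# Both inputs are round, illustrative assumptions.
v1_neurons = 3e8         # human V1: on the order of a few hundred million neurons
neurons_per_gpu = 3e6    # "a few million" neurons per GPU in published simulations
print(f"~{v1_neurons / neurons_per_gpu:.0f} GPUs")  # ~100 GPUs
```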
Once again I don't think current techniques are near the lower bound, and I have notions of how V1-equivalent work could be done on around one modern GPU, but this is more speculative.