A single 3090 can ideally do 250 TF with tensor cores, and an A100 is roughly 2x that, so 4 GPUs is > 10^15 FLOP/s theoretical. I’d also argue the brain is closer to 10^14, but this comparison is all kinda mucky because the two are so different. And as of today the GPU only hits those numbers on big dense matrix codes, whereas the brain is fully sparse, so that’s probably another 2 to 4 OOM advantage for the brain.
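The arithmetic above can be restated as a quick back-of-envelope check; all the figures here are the comment's own estimates (ideal tensor-core throughput, a 10^14 FLOP/s brain), not measured numbers:

```python
import math

# Comment's estimates, not benchmarks.
gpu_3090_flops = 250e12                 # ~250 TFLOP/s ideal tensor-core throughput
gpu_cluster_flops = 4 * gpu_3090_flops  # four such GPUs
brain_flops = 1e14                      # comment's estimate for the brain

print(f"4-GPU cluster: {gpu_cluster_flops:.0e} FLOP/s")  # 1e+15

# Raw gap, in orders of magnitude, before accounting for sparsity.
raw_gap_oom = math.log10(gpu_cluster_flops / brain_flops)
print(f"raw gap: {raw_gap_oom:.0f} OOM in the GPUs' favor")  # 1 OOM

# GPUs only hit peak on dense matmuls; the brain is fully sparse,
# which the comment credits as another 2-4 OOM for the brain --
# enough to flip the raw 1 OOM gap the other way.
```

So under these assumptions the cluster's nominal 10x edge is smaller than the claimed sparsity advantage, which is the comment's point.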
Yes, thinking hard and fast in small simulated subspaces is an AGI/SIM superpower (related old post). But it’s still technically quantitative?