Jacob, something really bothers me about your analysis.
Are you accounting for the brain's high error rate? Efficiently getting the wrong answer a high percentage of the time isn't useful; it slashes the effective number of bits of precision on every calculation and limits system performance.
If every synapse has only an effective 4 bits of precision, the lower-order bits being random noise, it would limit throughput through the system and impair human judgement, possibly on matters where the delta is smaller than 1/16. It would explain humans ignoring risks smaller than a few percent, or having trouble deciding between close alternatives.
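A toy simulation of this (my own sketch, not a model from neuroscience): quantize two alternatives to 4 bits, add noise on the order of one quantization step, and see how often a comparison picks the larger one. A delta under 1/16 is near chance; a clear delta is reliable:

```python
import random

def quantize(x, bits=4):
    """Round x (in [0, 1)) onto a grid of 2**bits levels."""
    levels = 2 ** bits
    return round(x * levels) / levels

def correct_rate(a, b, bits=4, trials=10_000):
    """Fraction of trials in which a comparator with `bits` of
    precision and ~1-LSB input noise correctly judges a > b."""
    noise = 1 / 2 ** bits  # noise on the order of one quantization step
    hits = 0
    for _ in range(trials):
        qa = quantize(a + random.uniform(-noise, noise) / 2, bits)
        qb = quantize(b + random.uniform(-noise, noise) / 2, bits)
        hits += qa > qb
    return hits / trials

# A delta below 1/16 is decided near chance; a large delta is reliable:
close = correct_rate(0.52, 0.50)   # delta = 0.02 < 1/16
clear = correct_rate(0.70, 0.50)   # delta = 0.20
```

The exact numbers depend on the noise model chosen, but the qualitative gap between the two cases does not.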
(And this is true for any analog precision level, obviously.)
It would mean a digital system with a few more bits of precision and fewer redundant synapses could significantly outperform a human brain at the same power level.
Note I also have a ton of skill points in this area: I have worked on analog data acquisition, control systems, and filters for several years, and work on inference accelerators now. (And a master's in CS / bachelor's in CE.)
Due to those skill points I also disagree with Yudkowsky on foom, but for a different set of reasons, also tied to the real world. Like you, I have noticed a shortage of inference compute: if an ASI existed today, there aren't enough of the right kind of accelerators for it to outthink the bulk of humans. (I have some numbers on this I can edit into this post if you show interest.)
Remember, Wikipedia says Yudkowsky didn't even go to high school, and I can find no reference to him building anything in the world of engineering in his life, just writing sci-fi and the Sequences. So it may be a case where he's blind to certain domains and doesn't know what he doesn't know.
There is extensive work in DL on bit-precision reduction: the industry started at 32b, moved to 16b, is moving to 8b, and will probably end up at 4b or so, similar to the brain.
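To illustrate that ladder, here is a generic symmetric-quantization sketch (my own toy, not any particular framework's API): quantizing the same weights to 8b and then 4b shows the reconstruction error growing as bits drop:

```python
import random

def quantize(ws, bits):
    """Symmetric uniform quantization: map floats onto a signed
    integer grid with one shared scale, then map back to floats."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8b, 7 for 4b
    scale = max(abs(w) for w in ws) / qmax
    return [round(w / scale) * scale for w in ws]

def mean_abs_err(ws, bits):
    qs = quantize(ws, bits)
    return sum(abs(w - q) for w, q in zip(ws, qs)) / len(ws)

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(4096)]

err8 = mean_abs_err(weights, 8)   # small quantization error
err4 = mean_abs_err(weights, 4)   # much larger: the grid is far coarser
```

Real quantization schemes (per-channel scales, asymmetric ranges, quantization-aware training) are more sophisticated, but the basic error-vs-bits trade-off is the same.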
For my noob understanding: what is bit precision, exactly?
Just the number of bits used to represent a quantity. The cost of multiplying numbers is nonlinear in bit width, so 32b multipliers are much more expensive than 4b multipliers. Analog multipliers are more efficient in various respects at low signal-to-noise ratios, equivalent to low bit precision, but their cost blows up quickly (exponentially), with a crossover near 8 bits or so last I looked.
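The "nonlinear in bit width" point can be made concrete with a back-of-the-envelope gate-count model (a sketch, not a real synthesis result): a schoolbook array multiplier for n-bit operands needs about n*n partial-product cells, so cost grows quadratically with precision:

```python
def array_multiplier_cells(bits):
    """Rough cost model: an n-bit schoolbook array multiplier uses
    about n*n partial-product cells (an AND gate plus adder each)."""
    return bits * bits

cost_32b = array_multiplier_cells(32)   # 1024 cells
cost_4b = array_multiplier_cells(4)     # 16 cells
ratio = cost_32b / cost_4b              # a 32b multiplier costs ~64x a 4b one
```

Real digital multipliers (Booth encoding, Wallace trees) shave constants off this, but the superlinear scaling with bit width is why dropping from 32b to 4b buys so much.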