$250 prize for checking Jake Cannell’s Brain Efficiency

This is to announce a $250 prize for spot-checking or otherwise reviewing in depth Jacob Cannell’s technical claims concerning thermodynamic & physical limits on computation, and his claim that the biological brain is close to those efficiency limits, in his post Brain Efficiency: Much More Than You Wanted To Know.

I’ve been quite impressed by Jake’s analysis ever since it came out, and puzzled that there has been so little discussion of it, since if correct it seems quite important. That said, I have to admit I personally cannot assess whether the analysis is correct. This is why I am announcing this prize.

Whether Jake’s claims concerning DOOM & FOOM really follow from his analysis is up for debate. Regardless, the analysis seems to me to have large implications for how the future might go and what future AI will look like.

  1. I will personally judge whether I think an entry warrants a prize.[1]

  2. If you are also interested in seeing this situation resolved, I encourage you to increase the prize pool!

EDIT: some clarifications
- You are welcome to discuss DOOM & FOOM and the relevance (or lack thereof) of Jake’s analysis, but note that I will only consider (spot-)checking of Jacob Cannell’s technical claims.
- In case of multiple serious entries, I will do my best to split the prize money fairly.
- Note that I will not be judging who is right. Instead, I will judge whether an entry has seriously engaged with Jacob Cannell’s technical claims in a way that moves the debate forward. That is, I will award points for ‘pushing the depth of the debate tree’ beyond where it currently stands.

- By ‘technical claims’ I mean all technical claims made in the Brain Efficiency post, broadly construed, as well as claims made by Jacob Cannell in other posts/comments.
These especially include: limits to energy efficiency, interconnect losses, the Landauer limit, convection vs. blackbody radiation, claims concerning the effective working memory of the human brain versus that of computers, the end of Moore’s law, CPU vs. GPU vs. neuromorphic chips, etc.
Here’s Jacob Cannell’s own summary of his claims:

1.) Computers are built out of components which are also just simpler computers, which bottoms out at the limits of miniaturization in minimal molecular-sized (few nm) computational elements (cellular automata/tiles). Further shrinkage is believed impossible in practice due to various constraints (overcoming these constraints, if even possible, would require very exotic far-future tech).

2.) At this scale the Landauer bound represents the ambient-temperature-dependent noise (which can also manifest as a noise voltage). Reliable computation at speed is only possible using non-trivial multiples of this base energy, for the simple reasons described by Landauer and elaborated on in the other refs in my article.

3.) Components can be classified as computing tiles or interconnect tiles, but the latter is simply a computer which computes the identity function while moving the input to an output in some spatial direction. Interconnect tiles can be irreversible or reversible, but the latter has enormous tradeoffs in size (i.e. optical) and/or speed or other variables and is thus not used by brains or GPUs/CPUs.

4.) Fully reversible computers are possible in theory but have enormous negative tradeoffs in size/speed due to 1.) the need to avoid erasing bits throughout intermediate computations, 2.) the lack of immediate error correction (achieved automatically in dissipative interconnect by erasing at each cycle) leading to error build-up which must be corrected/erased (costing energy), and 3.) high sensitivity to noise/disturbance due to 2.).

And the brain vs computer claims:

5.) The brain is near the Pareto frontier for practical 10W computers, and makes reasonably good tradeoffs between size, speed, heat, and energy as a computational platform for intelligence.

6.) Computers are approaching the same Pareto frontier (although currently in a different region of design space); shrinkage is nearing its end.
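
To give a rough sense of the scale behind claims 2, 5, and 6, here is a minimal back-of-envelope sketch (my own illustration, not part of Jacob’s summary; the 300 K temperature and 10 W budget are assumptions chosen for round numbers) of the Landauer bound, the associated thermal noise voltage, and the resulting theoretical ceiling on irreversible bit erasures at a brain-like power budget:

```python
import math

# Physical constants (exact SI values)
k_B = 1.380649e-23    # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

T = 300.0  # assumed ambient temperature in K, roughly room temperature

# Landauer bound: minimum energy to erase one bit at temperature T
E_landauer = k_B * T * math.log(2)   # ~2.9e-21 J per bit erasure

# Thermal voltage kT/q, the "noise voltage" scale mentioned in claim 2
V_thermal = k_B * T / q              # ~26 mV

# Theoretical ceiling on irreversible bit erasures for an assumed 10 W
# budget; claim 2 argues reliable computation at speed needs a
# non-trivial multiple of E_landauer per bit, so practical throughput
# would sit well below this ceiling.
P = 10.0  # W
max_erasures_per_s = P / E_landauer  # ~3.5e21 erasures per second

print(f"Landauer bound at {T:.0f} K: {E_landauer:.2e} J per bit erasure")
print(f"Thermal voltage kT/q:       {V_thermal * 1e3:.1f} mV")
print(f"Ceiling at {P:.0f} W budget:     {max_erasures_per_s:.2e} erasures/s")
```

The raw numbers here are uncontroversial; what a review would need to check is whether the multiples of this bound that Jacob argues are required for reliable, fast, irreversible computation (and for interconnect in particular) are physically justified.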

  1. ^

    As an example, DaemonicSigil’s recent post is in the right direction.
However, after reading Jacob Cannell’s response I did not feel the post seriously engaged with the technical material; it retreated to the much weaker claim that maybe exotic reversible computation could break the limits that Jacob posits, which I found unconvincing. The original post is quite clear that the limits apply only to non-exotic computing architectures.