I made a similar point (but without specific numbers—great to have them!) in a comment https://www.lesswrong.com/posts/Lwy7XKsDEEkjskZ77/?commentId=nQYirfRzhpgdfF775 on a post that posited human brain energy efficiency over AIs as a core anti-doom argument, and I also think that the energy efficiency comparisons are not particularly relevant either way:
Humanity is generating and consuming enormous amounts of power, so why is the power budget even relevant? And even if it were, the energy for running brains ultimately comes from the Sun: if you include the agriculture energy chain and “grade” the energy efficiency of brains by the amount of solar energy it ultimately takes to power a brain, AI definitely has the potential to be more efficient. And even if a single human brain is fairly efficient, the human civilization clearly is not: with AI, you can quickly scale up the amount of compute you use, whereas with humans, scaling beyond a single brain is very inefficient.
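To make the “grade brains by solar input” framing concrete, here is a rough back-of-envelope sketch in Python. All the constants are my own illustrative order-of-magnitude assumptions (a ~20 W brain, a ~2000 kcal/day diet, ~1% field photosynthetic efficiency, ~20% solar-panel efficiency), not measured values, and the calculation ignores farm machinery, transport, cooking, and the rest of the body’s overhead:

```python
# Back-of-envelope: solar energy needed to keep one brain running via agriculture,
# vs. electricity obtainable from the same sunlight via photovoltaics.
# Every constant below is an illustrative assumption, not a measurement.

BRAIN_POWER_W = 20.0                   # rough figure for the human brain
DIET_POWER_W = 2000 * 4184 / 86400     # ~2000 kcal/day of food expressed as watts (~97 W)
CROP_PHOTOSYNTHETIC_EFF = 0.01         # ~1% of sunlight ends up as edible calories (optimistic)
SOLAR_PANEL_EFF = 0.20                 # ~20% of sunlight converted to electricity by PV

# Sunlight needed to grow the food that keeps one brain (and the rest of the body) running
sunlight_for_brain_W = DIET_POWER_W / CROP_PHOTOSYNTHETIC_EFF

# Electricity the same amount of sunlight would yield through solar panels
electricity_from_same_sunlight_W = sunlight_for_brain_W * SOLAR_PANEL_EFF

print(f"Sunlight to feed one person:  ~{sunlight_for_brain_W / 1000:.1f} kW")
print(f"Same sunlight through PV:     ~{electricity_from_same_sunlight_W / 1000:.1f} kW of electricity")
print(f"Brain power itself:           ~{BRAIN_POWER_W:.0f} W")
```

Under these (hedged) assumptions, the sunlight that feeds one person would yield on the order of a couple of kilowatts of electricity through panels, versus the ~20 W the brain itself draws, which is the sense in which AI powered end-to-end by solar has the potential to come out ahead.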
The values come into the picture well before it’s an AGI. First, a random neural network is initialized, and its “values” are a completely arbitrary function chosen at random. Over time, the NN is trained towards an AGI and its “values” take shape. By the time AGI emerges, it does not “take on values for the first time”; the values emerge from an extremely long sequence of tiny mutations, each creating something very similar to what already existed, becoming more complex and coherent over time.
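A toy sketch of what I mean, with a made-up two-layer network and a made-up training signal standing in for “values” (numpy only, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 0: a randomly initialized network. Its "values" (which inputs it scores
# highly) are a completely arbitrary function of the random weights.
W1 = rng.normal(scale=0.5, size=(8, 2))
W2 = rng.normal(scale=0.5, size=(1, 8))

def scores(X, W1, W2):
    """The network's 'preferences' over inputs: arbitrary at init, shaped by training."""
    return np.tanh(X @ W1.T) @ W2.T

# Stand-in training signal: prefer inputs whose coordinates sum to a large value.
X = rng.normal(size=(256, 2))
T = X.sum(axis=1, keepdims=True)

lr = 0.05
prev = scores(X, W1, W2)
for step in range(1, 2001):
    # Plain gradient descent on squared error: each update is a tiny nudge
    # to the function that already existed at the previous step.
    H = np.tanh(X @ W1.T)
    Y = H @ W2.T
    dY = 2 * (Y - T) / len(X)
    dW2 = dY.T @ H
    dH = dY @ W2
    dW1 = (dH * (1 - H**2)).T @ X
    W1 -= lr * dW1
    W2 -= lr * dW2

    cur = scores(X, W1, W2)
    if step % 500 == 0:
        per_step_change = np.abs(cur - prev).mean()
        loss = float(((cur - T) ** 2).mean())
        print(f"step {step}: loss {loss:.3f}, mean per-step change in scores {per_step_change:.5f}")
    prev = cur
```

The per-step change in the network’s scores stays tiny compared to the total drift from the arbitrary initial function to the trained one, which is the sense in which values “emerge” gradually rather than appearing all at once when the system becomes capable.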