If “the wavefunction is real, but it is a function over potential configurations, only one of which is real,” then you have the real configuration interacting with potential configurations. I don’t see how something that isn’t real (if only one configuration is real, then the others aren’t) can interact with something that is. If the “potential” parts of the wavefunction can interact with the other parts, then they’re clearly real in every sense in which the word “real” means anything at all.
I know they’re just cartoons and I get the gist, but the graphs labelled “naive scenario” and “actual performance” are a little confusing.
The X axis seems to be measuring performance, with benchmarks like “high schooler” and “college student”, but in that case, what’s the Y axis? Is it the number of tasks that the model performs at that particular level? Something like that?
I think it would be helpful to label the Y axis, even with just a vague label.
Re: the dark matter analogy. I think the analogy works well, but I’d like to point out one wrinkle. Even in theories where dark matter doesn’t interact via the weak force, but does interact via some other force analogous to electromagnetism (so that it could bind together to form an Earth-like planet), it still interacts with gravity. If an Earth-sized dark matter planet really did overlap with ours, we’d feel its gravity, and the Earth would seem to be twice as massive as it is. Or, to state it slightly differently, the actual Earth would be half as massive as we measure it to be. But that would be inconsistent with what we know of its composition and density: we know the mass of rocks, and the measurement of the mass of a rock of a particular size wouldn’t be subject to this error, so we can rule out a dark matter Earth coincident with ours.
This isn’t in any way a criticism of what I found to be a brilliant piece. And I’m not even sure that it’s reason enough not to use that particular analogy, which otherwise works great.
Related to this topic, with a similar outlook but also more discussion of specific approaches going forward, is Vitalik’s recent post on techno-optimism:
There is a lot at the link, but just to give a sense of the message here’s a quote:
“To me, the moral of the story is this. Often, it really is the case that version N of our civilization’s technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires intentional human effort. The ozone layer is recovering because, through international agreements like the Montreal Protocol, we made it recover. Air pollution is improving because we made it improve. And similarly, solar panels have not gotten massively better because it was a preordained part of the energy tech tree; solar panels have gotten massively better because decades of awareness of the importance of solving climate change have motivated both engineers to work on the problem, and companies and governments to fund their research. It is intentional action, coordinated through public discourse and culture shaping the perspectives of governments, scientists, philanthropists and businesses, and not an inexorable “techno-capital machine”, that had solved these problems.”
I’ve no real insight to add, but would just like to comment that this generally lines up with the picture Steven Pinker paints in books like “Better Angels of Our Nature” and “Enlightenment Now”.
Thanks for a good comment. My oversimplified thought process was that a 10x increase in the brain’s energy usage would equate to roughly a 3x increase in total energy usage (the brain accounts for roughly 20% of resting metabolism, so 10x brain power takes the whole-body total to about 2.8x baseline). Since we’re able to sustain that kind of energy use during exercise, and elite athletes can maintain it for many hours a day, it seems reasonable that the heart and other organs could sustain this kind of output.
However, the issues you bring up are all valid: actually getting that much blood to the brain, evacuating waste products, doing the necessary metabolism there, and dealing with that much heat localized in the small volume of the brain. While the rest of the body doesn’t seem constrained by this level of energy use, a 10x power output in the brain itself probably would be a problem.
It would be worth a more detailed analysis of exactly where the maximum power output constraint on the brain, absent any major changes, actually lies.
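For what it’s worth, the back-of-envelope arithmetic can be sketched directly. The ~100 W resting metabolic rate and ~20% brain share below are standard ballpark figures I’m assuming, not numbers from the post:

```python
# Rough sanity check on the energy arithmetic above.
# Assumed ballpark figures (not from the post):
#   resting whole-body metabolic rate ~ 100 W
#   brain's share of that             ~ 20 W (about 20%)

RESTING_TOTAL_W = 100.0  # assumed
BRAIN_W = 20.0           # assumed

def total_power_with_brain_multiplier(k: float) -> float:
    """Whole-body power if brain power scales by k and everything else is unchanged."""
    return (RESTING_TOTAL_W - BRAIN_W) + k * BRAIN_W

scaled = total_power_with_brain_multiplier(10)
print(scaled)                    # 280.0 (watts)
print(scaled / RESTING_TOTAL_W)  # 2.8 (x resting metabolic rate)
```

So a 10x brain takes the whole body to roughly 2.8x resting metabolism, which is well within what humans sustain during moderate exercise.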
“Extrapolating the historic 10x fall in $/FLOP every 7.7 years for 372 years yields a 10^48x increase in the amount of compute that can be purchased for that much money (we recognize that this extrapolation goes past physical limits).”
If you are aware that this extrapolation goes past physical limits, why are you using it in your models? Why not use a model where compute plateaus after it reaches those physical limits? That seems more useful than a model that knowingly breaks the laws of physics.
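To make that concrete, a plateau model is essentially a one-line change to the extrapolation. Here is a minimal sketch using the post’s figures; the cap value is an arbitrary placeholder, not an actual estimate of where the physical limits lie:

```python
YEARS_PER_10X = 7.7     # post's figure: 10x fall in $/FLOP every 7.7 years
YEARS = 372             # post's extrapolation horizon
LIMIT_MULTIPLIER = 1e30 # hypothetical physical cap, placeholder only

# Naive model: the historical trend continues indefinitely.
naive = 10 ** (YEARS / YEARS_PER_10X)  # ~10^48, matching the quoted figure

# Plateau model: the same trend, but capped once it hits the physical limit.
capped = min(naive, LIMIT_MULTIPLIER)

print(f"naive:  {naive:.1e}")
print(f"capped: {capped:.1e}")
```

The crossover point, and everything downstream of it, depends entirely on where the cap is placed, which is exactly why making the limit explicit seems more useful than extrapolating past it.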