Correlation does imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance will make your unknowns shrink.
Replications prove there’s something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by something already begun?
Or was it their coincidence that caused it to be done?
I’m going to expand on this.
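As a minimal sketch of what "well beyond the brink of .05 significance" means in practice, here is a permutation test for a correlation. Everything in it is invented for illustration: the data is synthetic, and the effect size (a slope of 0.8) is arbitrary.

```python
# Permutation test for the significance of a correlation.
# All data here is synthetic and purely illustrative.
import random
import statistics

random.seed(0)

# Hypothetical paired observations where x partly drives y.
x = [random.gauss(0, 1) for _ in range(40)]
y = [0.8 * xi + random.gauss(0, 1) for xi in x]

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

observed = corr(x, y)

# Shuffle y many times; how often does chance alone match |observed|?
n_perm = 2000
hits = 0
for _ in range(n_perm):
    y_shuf = y[:]
    random.shuffle(y_shuf)
    if abs(corr(x, y_shuf)) >= abs(observed):
        hits += 1
p_value = hits / n_perm

print(f"r = {observed:.2f}, p = {p_value:.3f}")
```

A small p-value here says only that the association is unlikely to be chance; as the verse notes, it says nothing by itself about which way the causal arrow points.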
Jacob’s conclusion to the speed section of his post on brain efficiency is this:
Let’s accept all Jacob’s analysis about the tradeoffs of clock speed, memory capacity and bandwidth.
The force of his conclusion depends on the superintelligence "running on equivalent hardware." Core to Eliezer's superintelligence argument, and to habryka's comment here, is the point that the hardware underpinning AI can be made large and expanded upon in a way that is not possible for human brains.
Jacob knows this, and addresses it in the comments in response to Vaniver, who points out that birds may be more efficient than jet planes in terms of calories per mile flown, but that when the relevant metric is top speed or number of human passengers carried, the jet wins. Jacob responds:
So the crux here appears to be about the practicality of replacing human brains with factory-sized artificial ones, in terms of physical resource limitations.
Daniel Kokotajlo disagrees that this is important:
Jacob doubles down that it is:
So Jacob here admits that energy is not a 'taut constraint' for early AGI, and that at the same time it will be a larger fraction of the cost. In other words, it's not a bottleneck for AGI, and no other resource is either.
This is where Jacob’s discussion ended.
So I think Jacob has at least two jobs to do to convince me. I would be very pleased and appreciative if he achieved just one of them.
First, he needs to explain why any efficiency constraints can’t be overcome by just throwing a lot of material and energy resources into building and powering inefficient or as-efficient-as-human-brains GPUs. If energy is not a taut constraint for AGI, and it’s also expected to be an increasing fraction of costs over time, then that sounds like an argument that we can overcome any efficiency limits with increasing expenditures to achieve superhuman performance.
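A back-of-envelope sketch of this "just spend more" point. The figures are illustrative round numbers chosen by me, not claims from Jacob's post: a human brain runs on roughly 20 W, and a large datacenter might draw on the order of 1 GW.

```python
# Illustrative arithmetic: even at mere brain-level energy efficiency,
# aggregate capability scales with how much hardware you power.
# Both constants below are rough round numbers, not sourced figures.
BRAIN_WATTS = 20            # approximate power draw of one human brain
DATACENTER_WATTS = 1e9      # hypothetical 1 GW facility

# If AGI only matches brain efficiency, efficiency parity still
# permits an enormous multiple in total brain-equivalents run:
brain_equivalents = DATACENTER_WATTS / BRAIN_WATTS
print(f"{brain_equivalents:.0e} brain-equivalents")  # 5e+07
```

The point of the sketch is that per-unit efficiency limits do not bound aggregate capability when resources can be scaled.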
Second, he needs to explain why things like energy, size, or ops/sec efficiency are the most important efficiency metrics as opposed to things like “physical tasks/second,” or “brain-size intelligences produced per year,” or “speed at which information can be taken in and processed via sensors positioned around the globe.” There are so very many efficiency (“useful output/resource input”) metrics that we can construct, and on many of them, the human brain and body are demonstrably nowhere near the physical limit.
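The bird/jet comparison from earlier makes this concrete: which system is "more efficient" depends entirely on which metric you construct. The numbers below are rough illustrative figures I chose only to show that the ranking flips between metrics.

```python
# Illustrative values: (energy per passenger-mile in kcal, top speed in mph).
# These are stand-in numbers, not measurements.
systems = {
    "bird": (5, 50),        # cheap per mile, slow
    "jet":  (500, 500),     # costly per mile, fast
}

def rank(metric):
    """Return system names ordered best-first under the given metric."""
    return sorted(systems, key=lambda name: metric(systems[name]), reverse=True)

# Metric A: miles per kcal -- the bird wins.
miles_per_kcal = lambda s: 1 / s[0]
# Metric B: top speed -- the jet wins.
top_speed = lambda s: s[1]

print(rank(miles_per_kcal))  # ['bird', 'jet']
print(rank(top_speed))       # ['jet', 'bird']
```

Pick a metric on which humans are near the physical limit and brains look unbeatable; pick "tasks per second across the globe" and they plainly are not.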
Right now, doubling down on physics-based efficiency arguments, as he's doing here, doesn't feel like a winning strategy to me.