Why, then, have we survived until now? Probably because our local environment has been stable, at least on the timescales relevant to biological evolution and the development of civilization, and because of anthropic selection.
Now apply a combination of two principles. The anthropic principle tells us that we necessarily find ourselves in conditions compatible with our existence, so we should not be surprised that our local environment is friendly. The Copernican principle tells us that we are not in a special or privileged location in the space of possibilities; we should expect to be typical among observers.
Combining these yields a conclusion: we should expect to live in a minimally friendly universe, not a maximally or even an average-friendly one. The anthropic principle guarantees that our universe clears the bar for producing observers. The Copernican principle says we are typical among observer-containing universes. Since there are vastly more ways for a universe to barely clear the bar than to be friendly for humans on all levels and in all parts of the configuration space, typical means “barely clearing the bar.” We should expect our universe to be friendly enough to produce us and not much more than that.
Like the first thing the Lethal Reality hypothesis has to explain is “why are we alive at all”. And the argument given is anthropics.
I think the reason why "we figured out how to survive in tribes" doesn't hold is that climbing the tech tree distributes actuators to individual agents who are complex enough to have much longer feedback loops. Acting uncooperatively in a tribe is trivially observable as bad through short feedback loops. Releasing a bioagent to hurt a rival nation or a larger interest group is not trivially traceable, and there are fewer historical analogues to extrapolate from.
Your point on an omnipresent force that selects for survival preparedness holds for things that have analogues in near-miss scenarios, e.g. pandemic preparedness. But I think the author's central thesis is that near misses are the only thing driving survival preparedness. In fact, if we zoom in on AI, I think the best chance we have civilisationally is that we increase AI ability just enough for a near-miss disaster of sufficient scale to happen quickly, incentivising strong survival preparedness well ahead of ASI. This is what I see as the natural consequence of the slowdown: buy yourself time to experience a near miss at the social-dynamics feedback-loop level before you make the next ability jump. But the game is not being played with that feedback loop in mind.
The first one did seem pretty central to me.