(3) the agents who do spend energy fighting it are systematically outcompeted by those who do not, which means the system’s ability to fight it degrades over time even if some agents start out fighting it.
Civilization consists of some mixture of local myopic competition and global organized agency.
Maximally local competition is bacteria: each individual cell fights for resources, and no cell can do anything else, or it will be outcompeted.
Maximally global organization is a singleton: an ASI or world government that can easily squash any subsystem that gets out of line.
The world is at neither extreme, and fragments of both patterns can be found.
Look at the global moratorium on CFCs. That’s one of those large-scale, long-term global problems with a not-that-expensive solution. And it was met with a large-scale coordinated response.
Combining these yields a conclusion: we should expect to live in a minimally friendly universe, not a maximally or even an average-friendly one. The anthropic principle guarantees that our universe clears the bar for producing observers. The Copernican principle says we are typical among observer-containing universes. Since there are vastly more ways for a universe to barely clear the bar than to be friendly for humans on all levels and in all parts of the configuration space, typical means “barely clearing the bar.”
Except: industrial civilization. The set of universes in which an industrial high-tech civilization can propagate itself seems to be larger than the set of universes in which hominids can evolve. The set of universes in which self-replicating nanotech can spread through the lightcone is FAR larger than the set of universes where hunter-gatherers have access to the right kind of flint and tasty large herbivores.
Also, the idea that there are “vastly more ways to barely clear the bar than to be friendly” sounds like a complicated and nontrivial assumption. It might be true, but there is no obvious-to-me reason why it MUST be true.
fundamental physical constants (but this is an illustration; I do not want to create the impression that the argument is only about fine-tuning of fundamental physics parameters)
I am not convinced by fine-tuning. I don’t think we know enough physics to know whether or not nuclear-physics life exists on the surface of neutron stars in our universe, let alone in some other universe. I expect that most of the universes with different constants have a complexity similar to that of this universe.
in a multi-dimensional parameter space where the viable region is a tiny sliver, most of the volume of that sliver is near its boundaries, not deep in its interior.
As a property of high-dimensional spaces, yes. However, being at the edge of that sliver doesn’t automatically mean we are “barely surviving” and risk extinction. We could be in a situation where, if the strong force were just marginally weaker, stars couldn’t exist (and suppose for a moment that life without stars is impossible). That puts us near the boundary of the survivable region of parameter space, but it doesn’t mean we are risking extinction.
Also, I’m not actually convinced that the viable region is a tiny sliver.
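For what it’s worth, the boundary-concentration property itself is easy to check numerically. A minimal Monte Carlo sketch, assuming purely for illustration that the viable region is a unit ball in d dimensions:

```python
import random

def frac_near_boundary(d, eps=0.05, n=100_000):
    """Estimate the fraction of a d-dimensional unit ball's volume
    that lies within eps of the boundary (i.e., radius > 1 - eps)."""
    # The radius R of a uniform point in the unit d-ball has CDF r^d,
    # so it can be sampled by inverse transform: R = U^(1/d).
    near = sum(1 for _ in range(n) if random.random() ** (1 / d) > 1 - eps)
    return near / n

for d in (1, 3, 10, 100):
    exact = 1 - (1 - 0.05) ** d  # analytic value: 1 - (1 - eps)^d
    print(f"d={d:>3}  simulated={frac_near_boundary(d):.3f}  exact={exact:.3f}")
```

At d = 100, about 99.4% of the volume sits within 5% of the boundary, so I grant the geometry; my objection is to the “tiny sliver” premise, not to this.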
Making everything go right is hard. Making something go wrong is easy. This is also, at root, an observation about the relative sizes of state spaces: the states in which a complex system continues to function are vastly outnumbered by the states in which it doesn’t, in the same way that the configurations of a watch that tell time are vastly outnumbered by the configurations that don’t.
The thing is, mosquitoes are also a complex system. Applying this logic, it should be really, really easy to wipe them all out.
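To put the mosquito point slightly more formally: a population of self-replicators goes extinct only if every lineage dies. In a toy branching-process model (offspring distribution invented for illustration), the extinction probability q is the smallest root of q = G(q), where G is the offspring generating function, and q < 1 whenever the mean number of offspring exceeds one:

```latex
q = G(q), \qquad \mathbb{E}[X] > 1 \;\Rightarrow\; q < 1.
\qquad \text{E.g. } X \sim \mathrm{Poisson}(2): \quad q = e^{2(q-1)} \;\Rightarrow\; q \approx 0.203.
```

Individual fragility (any one mosquito is easy to kill) does not translate into population fragility.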
The question for civilizational survival is therefore: is there a specific, powerful mechanism that keeps civilization within the narrow band of survival-compatible states?
The ability of humans to self-replicate and rebuild. Active adaptations, both evolved and intelligent. Decisions and actions, both individual and coordinated.
The problem is that for existential threats, the feedback loops are, generally, not tight at all.
Consider every human starving to death because we all just randomly decide we don’t want to eat, despite having food available. This is, in a sense, an existential threat. Unless a large fraction of humanity performs some fairly specific actions of unwrapping, cooking, and eating food, humanity goes extinct. But this isn’t on your list of x-risks, because this is an example where the feedback loop is tight.
make survival-oriented behavior a winning strategy in the competitive landscape.
Again, you seem to be assuming a world of perfect competition and zero foresight and planning. Also, the groups that prepare against pandemics suffer fewer pandemics.
Think of how organisms evolved immune systems: pathogens are frequent, individual infections are survivable, and organisms with better defenses reliably outcompete those without them. The feedback loop is tight, the disaster distribution is right, and survival-competence gets selected for.
But notice how specific the required conditions are:
So long as there is at least one source of disasters like that, this selects against anything too myopically competitive.
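A toy simulation of that selection effect, with all parameters invented for illustration (a 10% vs. 8% growth rate, and shocks that halve the undefended population):

```python
import random

def simulate(generations=200, shock_rate=0.3, seed=0):
    """Toy model: 'growth' agents reproduce faster; 'defense' agents
    pay a growth tax but are unharmed by shocks. Populations compete
    for a fixed carrying capacity (local competition)."""
    random.seed(seed)
    pop = {"growth": 500.0, "defense": 500.0}
    for _ in range(generations):
        pop["growth"] *= 1.10   # undefended agents grow faster...
        pop["defense"] *= 1.08  # ...defenders pay a small growth tax
        total = pop["growth"] + pop["defense"]
        for k in pop:            # renormalize to the carrying capacity
            pop[k] *= 1000.0 / total
        if random.random() < shock_rate:
            pop["growth"] *= 0.5  # survivable disaster hits the undefended
    return pop

print("frequent shocks:", simulate(shock_rate=0.3))
print("rare shocks:   ", simulate(shock_rate=0.001))
```

Under frequent survivable shocks the defenders take over despite their permanent growth handicap; make the shocks rare and the selection pressure disappears. One disaster source of the right shape is enough.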
You have one path through time, and extinction is an absorbing state: once you enter it, you do not leave.
Granted. A high-competence singleton is also somewhat of an absorbing state.
And a civilization that has maxed out the tech tree will probably have a low ongoing risk of extinction.
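To make “absorbing but survivable” precise (a standard calculation, not anything specific to the post): with per-era extinction hazards ε_i,

```latex
P(\text{survive } n \text{ eras}) = \prod_{i=1}^{n} (1 - \varepsilon_i),
\qquad \prod_{i=1}^{\infty} (1 - \varepsilon_i) > 0 \iff \sum_{i=1}^{\infty} \varepsilon_i < \infty.
```

A constant hazard ε > 0 drives survival probability to zero, so extinction being absorbing does force eventual extinction in that regime; but a civilization that drives the hazard down fast enough (say, by maxing out the tech tree) keeps a positive probability of surviving forever.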
A world with smarter humans is not a world where smart survival-oriented humans dominate, but a world where smart survival-oriented humans compete against equally smart growth-oriented, power-oriented, and profit-oriented humans, and lose, for exactly the same structural reasons they lose now, just at a higher cognitive level.
I don’t think that’s true. If x-risk reduction gets a constant fraction of resources, a richer civilization has more resources to throw at the problem.
You are taking a situation where two utility functions are mostly uncorrelated and using “resources” to claim that the game is zero-sum. Uncorrelated != zero-sum. Two agents with uncorrelated utility functions might find a way to achieve near the maximum of both functions.
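A worked toy example of the non-zero-sum point, with invented saturating utilities over a genuinely shared budget:

```latex
r_A + r_B = 1, \qquad u_A = 1 - e^{-10 r_A}, \qquad u_B = 1 - e^{-10 r_B},
\qquad r_A = r_B = \tfrac{1}{2} \;\Rightarrow\; u_A = u_B = 1 - e^{-5} \approx 0.993.
```

Both agents end up within 1% of their individual maximum despite competing for the same resources; “both draw on resources” does not make the game zero-sum.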
Generally, you keep assuming near-perfect competition, but also that everyone has an end-the-universe button. This is quite an odd combination to assume. A world of perfect, unrestricted military competition is one where the various sides routinely throw nukes and bioweapons at each other. In that world, everyone has nuclear bunkers and bioweapon defenses.
It’s possible there is something that can destroy the whole world but can’t be targeted to destroy only your enemies, but that’s a rather specific kind of thing.
It is quite possible that the tech necessary to deal with the threat could end up having substantial spinoff applications, thus paying for itself quite comfortably.
Otherwise the society in question would have to be, among other things, a post-scarcity one where capitalism has been transcended and money no longer exists.
Thanks! That is a really sound, optimistic take, and I think there is real hope that things are more like you described than how they are described in the original post. So almost everything you wrote falls, for me, into the category “can easily be correct, in the sense that this is how things actually play out in our Universe”.
A couple of arguments fall outside this category and constitute actual disagreements:
Consider every human starving to death because we all just randomly decide we don’t want to eat, despite having food available. This is, in a sense, an existential threat. Unless a large fraction of humanity performs some fairly specific actions of unwrapping, cooking, and eating food, humanity goes extinct. But this isn’t on your list of x-risks, because this is an example where the feedback loop is tight.
As I wrote, “the feedback loops are, generally, not tight at all”. The important word is “generally”: in general, existential threats don’t have this property. You gave an example of a threat which does, but my point is that extinction doesn’t require all threats to lack tight loops; a few such threats are enough.
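In case it helps, the arithmetic behind “a few are enough”, treating threats as roughly independent (which is of course a simplification):

```latex
P(\text{survive}) = \prod_{j=1}^{k} (1 - p_j) \le 1 - \max_j p_j,
```

so a single threat whose feedback loop never tightens puts a ceiling on total survival probability, no matter how well the other k - 1 threats are handled.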
Generally, you keep assuming near-perfect competition, but also that everyone has an end-the-universe button.
To be clear, I don’t think I am assuming the second thing, but now that you have said it explicitly, it does indeed look highly likely that, as civilizations grow technologically, more and more agents acquire an end-the-universe button.
You are taking a situation where two utility functions are mostly uncorrelated and using “resources” to claim that the game is zero-sum. Uncorrelated != zero-sum. Two agents with uncorrelated utility functions might find a way to achieve near the maximum of both functions.
I may be wrong here, but “two agents with uncorrelated utility functions might find a way to achieve near the maximum of both functions” sounds very unlikely to me. I mean, even if they might find a way, why would they?
If x-risk reduction gets a constant fraction of resources, a richer civilization has more resources to throw at the problem.
Well, the entire point of the post is that “x-risk reduction gets a constant fraction of resources” is unlikely. Now, I think you argued successfully elsewhere in your reply that this may not be the case, but here, if we accept the premise, then this particular argument should be correct.