Thanks! That is a really sound, optimistic take, and I think there is real hope that things are more like you described than how they are described in the original post. So almost all of what you wrote goes, for me, into the category “can easily be correct, in the sense that this is how things actually play out in our Universe”.
A couple of arguments fall outside this category and constitute actual disagreements:
> Every human starving to death because we just randomly decide we don’t want to eat, despite having food available. This is, in a sense, an existential threat. Unless a large fraction of humanity does some fairly specific actions of unwrapping, cooking and eating food, humanity goes extinct. But this isn’t on your list of x-risks, because this is an example where the feedback loop is tight.
As I wrote, “the feedback loops are, generally, not tight at all”. So the important word is generally: generally, existential threats don’t have this property. You gave an example of a threat which does have it, but my point is that we don’t need all threats to lack tight feedback loops for extinction to happen; a few without them are enough.
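To make “a few are enough” concrete, here is a toy formalization (the constant-hazard model and all symbols are mine, purely for illustration): suppose each existential threat $i$ carries some per-period extinction probability $p_i$, and a tight feedback loop is what drives a hazard toward zero. If even one threat keeps $p_j \ge \varepsilon > 0$ because its loop never tightens, survival over $T$ periods is bounded:

```latex
% Toy constant-hazard model (my notation, illustrative only):
% p_i = per-period extinction probability from threat i,
% assumed independent across threats and periods.
P(\text{survive } T \text{ periods})
  = \prod_{t=1}^{T} \prod_{i=1}^{n} (1 - p_i)
  \le (1 - \varepsilon)^{T} \to 0 \quad \text{as } T \to \infty
```

So under this (admittedly crude) model, one uncorrected threat already suffices; “a few” is more than enough.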
> Generally you keep assuming near-perfect competition, but also that everyone has an end-the-universe button.
To be clear, I don’t think I am assuming the second thing, but now that you have said it explicitly, it does indeed look highly likely that, as civilizations grow technologically, more and more agents acquire an end-the-universe button.
> You are taking a situation where 2 utility functions are mostly uncorrelated, and using “resources” to claim that the game is zero-sum. Uncorrelated != zero-sum. 2 agents with uncorrelated utility functions might find a way to achieve near maximum on both functions.
I may be wrong here, but “2 agents with uncorrelated utility functions might find a way to achieve near maximum on both functions” sounds very unlikely to me. I mean, even if they might find a way, why would they?
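That said, to see why “uncorrelated != zero-sum” at least leaves room for such outcomes, here is a minimal sketch (the toy setup and all numbers are mine, purely for illustration): two agents with independently drawn utilities over the same outcome space. With enough outcomes, some outcome is near-maximal for both, even though the utilities are uncorrelated. Whether the agents would actually search for and settle on that outcome is exactly what I am questioning.

```python
import random

random.seed(0)  # reproducible illustration

N_OUTCOMES = 10_000  # size of the shared outcome space (made-up number)

# Two agents with independently drawn (hence uncorrelated) utilities
# over the same outcomes, each utility scaled to [0, 1].
u1 = [random.random() for _ in range(N_OUTCOMES)]
u2 = [random.random() for _ in range(N_OUTCOMES)]

# The outcome that maximizes the worse-off agent's utility.
best = max(range(N_OUTCOMES), key=lambda i: min(u1[i], u2[i]))

print(f"u1 = {u1[best]:.3f}, u2 = {u2[best]:.3f}")
# With 10,000 outcomes both values come out around 0.99: uncorrelated
# preferences still leave outcomes that are near-optimal for both agents.
```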
> If x-risk reduction gets a constant fraction of resources, a richer civilization has more resources to throw at the problem.
Well, the entire point of the post is that “x-risk reduction gets a constant fraction of resources” is unlikely. Now, I think you argued successfully elsewhere in your reply that this may not be the case, but here, if we accept the premise, then this particular argument should indeed be correct.