After looking at the pattern of upvotes and downvotes on my replies, re-reading these comments, and thinking about this exchange, I’ve concluded that I made some mistakes and would like to apologize.
I didn’t acknowledge some important truths in this comment. Surely, the reason people worry more about human extinction than about other trajectory changes is that we can expect most possible flaws in civilization to be detected and repaired by the people alive at the time, provided those people have the right values and are roughly on the right track. And very plausibly, the very specific aspects of most standards will wash out in the long run. Partly this was me getting defensive and failing to check my “combat reflexes,” to use a phrase I learned from Julia Galef.
I also didn’t really answer the question, though it was a reasonable one. I agree with amcknight that changes in values are a plausible example that meets all of the conditions. I think that anything involving a utility loss that is significant in comparison with extinction would be an existential catastrophe by definition. I’ll give some other examples of ways in which the future could be flawed but not repairable, or not repairable at reasonable cost. I am still thinking through these issues, but here are some tentative possibilities:
Resource distribution: You have some type of market setup, and during the early stages of a post-human civilization, shares of available resources are parceled out in a way that depends on individual wealth. The people who want to spend the resources on good stuff hold a smaller share of the wealth, so fewer resources ever get spent on good stuff, and the future is some fraction worse than it could have been. (This involves a values failure, but maybe you could change the outcome without changing values.)
Legal system: You have a constitutional legal system that requires a large supermajority to alter. The current rules inefficiently favor certain classes of people over others, the support necessary for reform never arrives, and the future is some fraction worse than it could have been. Perhaps similar things happen with social norms, though less formally.
Standards that would have been better in the first place, but aren’t worth switching to: Maybe a certain type of hardware or software was developed first but wasn’t optimal. After it comes into general use, there are switching costs; you could pay them, but perhaps it wouldn’t be worth it. (Perhaps the US is in a situation like this with the imperial system vs. the metric system.) As far as I can tell, any one case like this could only cost an extremely small fraction of the possible value, but perhaps there could be many things like this in technology, government, and social norms.
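To make the recurring “some fraction worse” idea slightly more concrete, here is a toy formalization; the symbols ($V$, $f$, $C$, $g_t$, $\delta$) are my own illustrative assumptions rather than anything established above. If $V$ is the value the future could have realized and $f$ is the fraction that a locked-in arrangement actually realizes, the loss is

$$\text{loss} = (1 - f)\,V, \qquad 0 \le f \le 1.$$

In the standards case, the lock-in persists whenever the one-time switching cost $C$ exceeds the discounted stream of gains $g_t$ from switching,

$$C > \sum_{t \ge 0} \delta^{t} g_{t},$$

so everyone can act reasonably at every step and the gap $(1 - f)\,V$ still never gets closed.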
I also think I should have done more to acknowledge the tentative nature of my claims. I certainly do not have quantitative arguments, and I don’t see a clear way to develop them. I have some intuition that further thought on these topics could be fruitful, but I wouldn’t want to say that I’ve settled major issues by making this post. What I want to suggest is that further thought about smaller trajectory changes could lead us to see broad attempts to shape the far future more favorably. It is something I will think harder about and try to defend better in the future.