This isn’t a simple marshmallow challenge scenario. If you have a society with needs and limited resources, it’s not inherently “smart” to sacrifice those significantly for the sake of a long-term project that might, e.g., not benefit anyone currently living. It’s a difference in values at that point; being smart enough doesn’t oblige you to consider the sacrifice right.
For example, suppose in 1860 everyone had known and accepted global warming as a risk. Should they, or would they, have stopped using coal and natural gas to spare us this problem? Even if it meant lower living standards for themselves, and possibly more deaths?
it’s not inherently “smart” to sacrifice those significantly for the sake of a long term project
Your argument was that this hopeless trap might happen after a catastrophe and be so terrible that it’s maybe as bad as, or worse than, everyone dying quickly. If it’s that terrible in any decision-relevant sense, then it’s also smart to plot towards projects that dig humanity out of the trap.
No, sorry, I may have conveyed that wrong and mixed up two arguments. I don’t think stasis is straight-up worse than extinction. For better or worse, people lived in the Middle Ages too. My point was more that if your guiding principle is “can we recover”, then there are more things than extinction to worry about. If you aspire to some kind of future in which humanity grows exponentially, you won’t get it if we’re knocked back to preindustrial levels and can’t recover.
I don’t personally think that’s a great metric or goal to adopt, just following the logic to its endpoint. And I also expect that many smart people in the stasis wouldn’t plot with only that sort of long term benefit in mind. They’d seek relatively short term returns.
I see. Referring back to your argument was more an existence proof for this motivation. If a society forms around the motivation at any one time in the billion years, and selects for intelligence to enable nontrivial long-term institution design, that seems sufficient to escape stasis.