I am very skeptical about SIA, but I’ve always respected the doomsday argument, and lately I wonder whether Bill Joy-style luddism is the right response.
If there’s a great filter ahead, it is far more likely to involve the advanced technologies that are meant to make galactic civilization possible in the first place than some unanticipated tripwire in the natural world. So if we interpret the doomsday argument as information about the danger of these advanced technologies—that is, if we develop them, we are overwhelmingly likely to die—then isn’t the logical action just to fight them at every opportunity, rather than trying to get lucky by being ultra-smart about how we develop and deploy them? Yes, if we don’t go there we forgo a future of cosmic expansion, but if such a future is overwhelmingly unlikely, then the rational thing to do may be precisely to stay within our own little bubble here in this solar system.
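For anyone who wants the numbers, the standard self-sampling form of the argument (roughly Gott's version) treats your birth rank $n$ as a uniform draw from the $N$ humans who will ever live, so that

$$ P\left(n > 0.05\,N\right) = 0.95 \quad\Longrightarrow\quad N < 20\,n \quad \text{(95\% confidence)}. $$

With $n$ on the order of $10^{11}$, that caps $N$ at roughly $2 \times 10^{12}$, nowhere near the numbers a galaxy-colonizing civilization would produce.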
ETA: One other observation: Those hoping for a really long future lifespan may feel aggrieved by a civilizational strategy which seems to eschew the technologies you would need for radical life extension. In this regard I have noticed one thing. Suppose you had a civilization whose members stopped reproducing but which all lived for a million years. At the very beginning of those million years they might discover the doomsday argument and conclude that no-one would get to live so long. But if you are going to live for a million years, you first have to live for ten years, fifty years, a hundred years, and so on. So it is inevitable that such erroneous ideas would arise early. However, if you not only live for a million years, but plan on expanding into the universe and having lots of descendants who also live that long, then this argument is no longer valid, because the majority of observer-moments should still be in the distant future rather than back here on the planet of origin. Therefore, I see some hope that you can have very long lifespans without risking doom, if your society explicitly stops creating new observers. Though I have to think that the technologies for radical life extension are intrinsically threatening anyway; it would require remarkable discipline to have rejuvenating biotechnology or a solid-state platform for consciousness, and not to develop dangerous forms of nanotechnology and artificial intelligence.
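To put rough numbers on the observer-moment bookkeeping (a back-of-the-envelope sketch, assuming self-sampling over observer-moments): for a fixed population of $P$ people who stop reproducing and each live $T = 10^6$ years, the fraction of all observer-moments falling in the first $t$ years is

$$ \frac{P\,t}{P\,T} = \frac{t}{T} \approx 10^{-4} \quad \text{for the first century,} $$

unlikely-looking, but guaranteed to be lived through by every member on the way to the rest. If instead the civilization expands and its descendants generate $k \gg 1$ observer-moments for every one spent on the origin world, the origin-world share drops to roughly $1/(1+k)$, and finding yourself back at the origin then counts against the expansion scenario by a factor of roughly $k$.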
So if we interpret the doomsday argument as information about the danger of these advanced technologies—that is, if we develop them, we are overwhelmingly likely to die—then isn’t the logical action just to fight them at every opportunity, rather than trying to get lucky by being ultra-smart about how we develop and deploy them?
This would make a lot of sense if there were any way to enforce it. As it stands, defecting would be way too easy and the incentives to do so way too high. Further, the people most likely to defect would be those we least want deciding how new technology is deployed.
Rightly so, since the SIA is false.
The Doomsday argument is correct as far as it goes, though in my view the most likely filter is environmental degradation combined with AI running into problems.