There are some points I dislike about this introduction. The first is the implicit speciesism that results from focusing on the extinction of Homo sapiens as a species. It would have made more sense to use Bostrom’s definition of existential risk, which focuses on earth-originating intelligent life instead: the replacement of humans by posthumans is not an existential risk. Transhumanism usually advocates the well-being of all sentience, not just humans, which can include both non-human animals (e.g. in natural ecosystems) and posthumans spreading into space.
Maybe more seriously, the introduction assumes without further justification that preventing existential risk is an ethical good, on the grounds that colonizing the galaxy would create positive value structures on a great scale. This is incomplete without also considering that colonization could create negative value structures on a great scale. Currently, the galaxy is not filled with involuntarily existing, suffering entities, except, as far as we know, for planet Earth. In the future that may change, and it may partially be Stanislav Petrov’s fault.
We’d better get this right, because it really is important, and leaving out half of the equation in an introductory article like this doesn’t further that goal.
It would have made sense to use Bostrom’s definition of existential risk, which focuses on earth-originating intelligent life instead. Replacement of humans by posthumans is not existential risk.
Some people hereabouts are concerned about some types of posthuman and “earth-originating intelligent life”.