Thanks for the links. My thought was that we might assign higher negative utility to those x-risks which could also become a-risks, namely the LHC and AI.
If you know the Russian science fiction of the Strugatsky brothers, there is an idea in it of "Progressors"—people who are embedded in other civilizations to help them develop more quickly. In the end, the main character concluded that such actions violate the right of any civilization to determine its own path, and he returned to Earth to find and stop possible alien Progressors here.
Iain Banks has similar themes in his books—e.g. Inversions. And generally speaking, in the Culture universe, the Special Circumstances are a meddlesome bunch.
Oh, in those cases, the considerations I mentioned don’t apply. But I still thought they were worth mentioning.
In Star Trek, the Federation has a “Prime Directive” against interfering with the development of alien civilizations.
The main role of which is to figure in this recurring dialogue:
-- Captain, but the Prime Directive!
-- Screw it, we’re going in.