“Risks are so severe that no level of benefit justifies them”? Nah, I like my VNM continuity axiom, thank you very much; no ontologically incommensurate outcomes for me. I do think they’re severe enough that benefits on the order of “guaranteed worldwide paradise for a million years for every living human” don’t justify increasing them by 10%, though!
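(To make the trade I’m gesturing at concrete, here’s a back-of-the-envelope expected-utility reading, where U, P (paradise), Q (status quo), and D (doom) are my own illustrative labels, the “10%” is read as 10 added percentage points, and baseline risk is ignored: taking the deal is worth it only if

\[ 0.9\,U(P) + 0.1\,U(D) \;>\; U(Q) \]

and I’m claiming the left side comes out smaller, even though continuity guarantees that a large enough U(P) or a small enough added risk would flip the inequality.)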
What about… a hundred million years? What does your risk/benefit mapping actually look like?