Because any impact, even an impact on non-people, ought to have a prior for visibility commensurate with its magnitude.
I don’t think that works. Consider a modification of the laws of physics so that alternate universes exist, incompatible with advanced AI, containing people and paperclips, each paired to a positron in our world. Or whatever would be the simplest modification that ties them to something clippy can affect. It is conceivable that some such modification could come in at 1 in a million probability.
There are sane situations involving low probabilities, by the way. For example, if NASA calculates that an asteroid, based on measurement uncertainties, has a 1 in a million chance of hitting the Earth, we’d be willing to spend quite a bit of money on a “refine measurements; if it’s still a threat, launch rockets” strategy. But we don’t want to start spending money every time someone who can’t get a normal job gets clever about crying 3^^^3 wolves, and even less so for speculative, untestable laws of physics under a description-length-based prior.