The argument is that, since you’re 3^^^3 times more likely to be one of the other people if there are indeed 3^^^3 other people, that’s powerful evidence that what he says is false.
Enh, that’s Hanson’s original argument, but what I attempted to do is generalize it so that we don’t actually need to rely on the concept of a “person”, nor to count points of view. I would want the generalized argument to work even for a Clippy threatened with the bending of 3^^^3 paperclips, even though paperclips don’t have a point of view. Because any impact, even impact on non-people, ought to have a prior for visibility commensurate with its magnitude.
That’s not a generalization. That’s an entirely different argument. The original was about anthropic evidence. Yours is about prior probability. You can accept or reject them independently of each other. If you accept both, they stack.
Because any impact, even impact on non-people, ought to have a prior for visibility commensurate with its magnitude.
I don’t think that works. Consider a modification of the laws of physics so that alternate universes exist, incompatible with advanced AI, containing people and paperclips, each paired to a positron in our world. Or whatever would be the simplest modification that ties them to something Clippy can affect. It is conceivable that some such modification has a prior probability on the order of 1 in a million.
There are sane situations with low probability, by the way. For example, if NASA calculates that an asteroid, given the measurement uncertainties, has a 1 in a million chance of hitting the Earth, we’d be willing to spend quite a bit of money on a “refine the measurements; if it’s still a threat, launch rockets” strategy. But we don’t want to start spending money any time someone who can’t get a normal job gets clever about crying 3^^^3 wolves, and even less so for speculative, untestable laws of physics under a description-length-based prior.
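The asteroid case can be sanity-checked with back-of-envelope expected-value arithmetic. A minimal sketch: the 1-in-a-million impact probability is from the comment above, while the damage and survey-cost figures are made-up round numbers for illustration only.

```python
# Back-of-envelope expected-value check for the asteroid example.
# impact_probability is from the discussion; the dollar figures are
# hypothetical round numbers chosen for illustration.

impact_probability = 1 / 1_000_000
damage_if_hit = 10_000_000_000_000      # assumed: $10 trillion in damages
survey_cost = 5_000_000                 # assumed: $5 million to refine measurements

expected_loss = impact_probability * damage_if_hit
print(f"expected loss: ${expected_loss:,.0f}")   # expected loss: $10,000,000

# Spending $5M on better measurements against a $10M expected loss is sane:
# the stakes, though large, are finite and independently checkable, and the
# cheap first step (remeasure) can resolve most of the uncertainty before
# any expensive action is taken.
if survey_cost < expected_loss:
    print("refine measurements first")
```

The contrast with the mugging is that here every term in the product is bounded and testable, whereas a 3^^^3-sized claim is designed so that no description-length penalty can keep the product finite.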