It seems to me that winning the leverage lottery (by being at the dawn of an intergalactic civilization) is not like flipping a few hundred coins and getting a random bitstring that happens not to have been generated, in that fashion, anywhere else in our Hubble volume. It is like flipping a few hundred coins and getting nothing but heads. The individual random bitstring is improbable, but it is not special, and getting some not-special bitstring through the coin-flipping process is the expected outcome.
Therefore I think the analogy fails, and the proper conclusion is that models implying a “cosmic manifest destiny” for present-day Earthlings are wrong. How this relates to the whole Mugging/Muggle dialectic I do not know; I haven’t had time to see what’s really going on there. I am presently more interested in the practical consequences of this conclusion for our model of the universe than I am in the epistemology.
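As a minimal sketch of that intuition (my own illustration, not from the comment; the million-string bound on “special” outcomes is an arbitrary assumption):

```python
from fractions import Fraction

N_FLIPS = 300  # "a few hundred coins"

# Under a fair coin, every specific 300-bit outcome is equally improbable.
p_any_specific_string = Fraction(1, 2) ** N_FLIPS  # 2^-300, roughly 10^-90

# "Special" outcomes (all heads, all tails, simple patterns, ...) form a tiny
# pre-specified set; generously allow a million of them.
N_SPECIAL = 10 ** 6
p_some_special_outcome = N_SPECIAL * p_any_specific_string

# So the expected result of honest flipping is some not-special string:
p_not_special = 1 - p_some_special_outcome
print(float(p_any_specific_string))  # ~4.9e-91: improbable, like every string
print(float(p_not_special))          # ~1.0: a not-special string is the norm
```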
It seems to me that winning the leverage lottery (by being at the dawn of an intergalactic civilization) is not like flipping a few hundred coins and getting a random bitstring that happens not to have been generated, in that fashion, anywhere else in our Hubble volume. It is like flipping a few hundred coins and getting nothing but heads.
Yeah, exactly. The issue is not so much the 10^-80 prior itself as the 10^-80 prior on obtaining it randomly versus the much, much larger prior on obtaining it because, say, you can’t visually discriminate between the coin sides.
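A hedged sketch of that comparison: compute the posterior odds of “fair coin” versus a mundane alternative (a double-headed coin) after observing all heads. The 10^-6 prior on the trick coin below is an illustrative number of my choosing:

```python
import math

N_FLIPS = 300

# Log-likelihood of 300 heads under each hypothesis.
loglik_fair = N_FLIPS * math.log(0.5)  # about -208 nats
loglik_trick = 0.0                     # a double-headed coin yields heads for sure

# Illustrative priors: trick coins are rare, but nowhere near 10^-90 rare.
logprior_fair = math.log(1 - 1e-6)
logprior_trick = math.log(1e-6)

# Posterior log-odds (trick vs. fair) after observing all heads:
log_odds = (logprior_trick + loglik_trick) - (logprior_fair + loglik_fair)
print(log_odds / math.log(10))  # ~84: the trick-coin explanation wins massively
```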
My own position regarding this is that we haven’t yet even started properly thinking about how to use anthropic evidence. E.g. you’re seemingly treating every single individual consciousness in the history of the universe as equally probable to have been ‘you’, but that by itself assumes there exists a well-defined thing called ‘individual consciousness’ rather than a confusing combination of different processes in your brain… That they must each be given equal weight is an additional step that I don’t think can be properly supported (e.g. if MWI is correct and my consciousness splits into a trillion different people every second, some of which merge back together, what is the anthropic weight assigned to my past self vs. my future self?)
Another possibility would be that, for some reason, anthropic evidence is heavily tilted to favour the early universe: that it’s more likely to ‘be’ someone in the early universe, the earlier the better (e.g. the early universe is easier to simulate than the late universe, hence more universe-simulators do the former than the latter).
Or anthropic evidence could be tilted to favour simple intelligences (e.g. simple intelligences are easier to simulate than complex ones).
(The above is not meant to imply that I support the simulation hypothesis; I’m just using it as a way of demonstrating how some anthropic calculations may be off.)
You could think of the “utilities” in your utilitarianism. Why would one unit of global utility that you can sacrifice be able to produce 10^80-ish units of utility gain? It’s unlikely that you’d come across a unit of utility that you can sacrifice so profitably (if utility is bounded and doesn’t just stack up exponentially in influence ad infinitum). This removes the anthropic considerations from the leverage problem.
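A sketch of the arithmetic behind a leverage penalty of this kind (the numbers are mine, purely illustrative): if a claim of N-sized leverage gets a prior of roughly 1/N, the expected value of the sacrifice stays bounded no matter how large N is claimed to be.

```python
from fractions import Fraction

N = 10 ** 80                       # claimed utility gain from sacrificing one unit
prior_claim_true = Fraction(1, N)  # leverage penalty: prior shrinks as 1/N

expected_gain = prior_claim_true * N  # exactly 1, independent of N
cost_of_sacrifice = 1                 # the one unit of utility given up

print(expected_gain <= cost_of_sacrifice)  # True for every N
```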
Why would one unit of global utility that you can sacrifice be able to produce 10^80-ish units of utility gain?
Since utility isn’t an inherent concept in the physical laws of the universe but just a calculation inside our minds, I don’t see your meaning here: you don’t “come across” a unit of utility to sacrifice, you seek it out. An architect who seeks to design a skyscraper is more likely to succeed at designing a skyscraper than a random monkey doodling.
To estimate the architect’s chances of success, I see no point in starting out by asking “how likely is a monkey to randomly design a skyscraper?”.
It seems to me that there’s considerably less search in “not buying a Porsche” than in “building a skyscraper”.
Let’s suppose you value paperclips. Someone takes 10 paperclips from you, unbends them, but later makes 10^90 paperclips thanks to their use of those 10 paperclips. In this hypothetical universe, those 10 paperclips are very special, and if someone gives you the coordinates of a paperclip and claims it’s one of those legendary 10 paperclips (that are going to be turned into 10^90 paperclips), you’d be wise to be quite skeptical: you need evidence that the paperclip you’re looking at really is so oddly located within the totality of paperclips. edit: or if someone gives you papers with paperclip marks left on them and says those are the papers that were held together by said legendary paperclips.
edit2: albeit I do agree: if we actually seek something out, we may be able to overcome very large priors against it. In this case, though, the issue is that we have a claim that our existing intrinsic values stand in a necessarily very unusual relation to the vast majority of what’s intrinsically valuable.
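To put numbers on the required skepticism (a sketch under the stated hypothetical): the prior that any pointed-at paperclip is one of the legendary 10 out of 10^90 is about 10^-89, so the claimant’s evidence must carry a likelihood ratio of around 10^89 just to reach even odds.

```python
import math

TOTAL_PAPERCLIPS = 10 ** 90  # the eventual paperclip population
LEGENDARY = 10               # the special ones

# Prior odds that a pointed-at paperclip is one of the legendary ten:
prior_odds = LEGENDARY / (TOTAL_PAPERCLIPS - LEGENDARY)  # about 10^-89

# Likelihood ratio the claimant's evidence must carry to reach even odds:
required_ratio = 1 / prior_odds
print(round(math.log10(required_ratio)))  # 89 (orders of magnitude)
```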