Ah, got it. Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).
A “do not resuscitate” kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.
In the worst futures, presumably those resuscitating you wouldn’t care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.
This is awesome! Thank you. :) I’d be glad to copy it into my piece if I have your permission. For now I’ve just linked to it.
Cool. Another interesting question would be how the views of a single person change over time. This would help tease out whether it’s a generational trend or a general trend that comes with getting older.
In my own case, I only switched to finding a soft takeoff pretty likely within the last year. The change happened as I read more sources outside LessWrong that made some compelling points. (Note that I still agree that work on AI risks may have somewhat more impact in hard-takeoff scenarios, so hard takeoffs deserve a larger share of attention than their probability alone would suggest.)
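To illustrate that parenthetical with made-up numbers (purely illustrative, not estimates from the piece): attention should roughly track probability times marginal impact rather than probability alone. If hard takeoff had probability 0.3 but work on it were three times as impactful per unit of effort, then

\[
\frac{0.3 \times 3}{0.3 \times 3 + 0.7 \times 1} \approx 0.56,
\]

i.e., hard-takeoff scenarios would merit roughly 56% of the attention despite being the less likely outcome.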
Good question. :) I don’t want to look up exact ages for everyone, but I would guess that this graph would look more like a teepee, since Yudkowsky, Musk, Bostrom, etc. would be shifted to the right somewhat but are still younger than the long-time software veterans.
Good points. However, keep in mind that humans can also use software to do boring jobs that require less-than-human intelligence. If we were near human-level AI, there might by then be narrow-AI programs that help with the items you describe.
Thanks for the comment. There is some “multiple hypothesis testing” effect at play in the sense that I constructed the graph because of a hunch that I’d see a correlation of this type, based on a few salient examples that I knew about. I wouldn’t have made a graph of some other comparison where I didn’t expect much insight.
However, when it came to adding people, I did so purely based on whether I could clearly identify their views on the hard/soft question and years worked in industry. I’m happy to add anyone else to the graph if I can figure out the requisite data points. For instance, I wanted to add Vinge but couldn’t clearly tell what x-axis value to use for him. For Kurzweil, I didn’t really know what y-axis value to use.
This is a good point, and I added it to the penultimate paragraph of the “Caveats” section of the piece.
Thanks for the correction! I changed “endorsed” to “discussed” in the OP. What I meant to convey was that these authors endorsed the logic of the argument given the premises (ignoring sim scenarios), rather than that they agreed with the argument all things considered.
I don’t think the question pits SSA against SIA; rather, it concerns what SIA itself implies. But I think my argument was wrong, and I’ve edited the top-level post to explain why.
Not sure of the relevance of eternal inflation. However, I think I’ve realized where my argument went astray and have updated the post accordingly. Let me know if we still disagree.
Thanks! What you explain in your second paragraph was what I was missing. The distinction isn’t between hypotheses where there’s one copy of me versus several (those don’t work) but rather between hypotheses where there’s one copy of me versus none, and an early filter falsely predicts lots of “none”s.
Yes, but the Fermi paradox and Great Filter operate within a given branch of the MWI multiverse.
Thanks! I think this is basically a restatement of Katja’s argument. The problem seems to be that comparing the number of brains like ours isn’t the right question. The question is how many minds are exactly ours, and this number has to be the same (ignoring simulations) between (B) and (C): namely, there is one civilization exactly like ours in either case.
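To make the counting explicit (my notation, not Katja’s): write E for our exact experiences and \(n_H(E)\) for the number of non-simulated copies of E that hypothesis H predicts in our local region. Then SIA gives

\[
\frac{P(B \mid E)}{P(C \mid E)} \;=\; \frac{P(B)\, n_B(E)}{P(C)\, n_C(E)} \;=\; \frac{P(B)}{P(C)},
\qquad \text{since } n_B(E) = n_C(E) = 1,
\]

so the evidence of our own existence leaves the odds between (B) and (C) unchanged.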
Sorry, I’m not seeing it. Could you spell out how?
I agree that allowing simulation arguments changes the ball game. For instance, sim args favor universes with lots of simulated copies of you. This requires that at least one alien civilization develops AI within a given local region of the universe, which in turn requires that the filters can’t be too strong. But this is different from Katja’s argument.
That’s what I originally thought, but the problem is that the probabilities of each life-form having your experiences are not independent. Once we know that one (non-simulated) life-form has your experiences in our region of the universe, this precludes other life-forms having those exact experiences, because the other life-forms exist somewhere else, on different-looking planets, and so can’t observe exactly what you do.
Given our set of experiences, we filter down the set of possible hypotheses to those that are consistent with our experiences. Of the (non-simulation) hypotheses that remain, they all contain only one copy of us in our local region of the universe.
Sorry, that sentence was confusing. :/ It wasn’t really meant to say anything at all. The “filter” that we’re focusing on is a statistical property of planets in general, and it’s this property of planets in general that we’re trying to evaluate. What happened on Earth has no bearing on that question.
That sentence was also confusing because it made it sound like a filter would happen on Earth, which is not necessarily the case. I edited to say “We know the filter on Earth (if any)”, adding the “if any” part.
Thanks, Peter. :) I agree about appearing normal when the issue is trivial. I’m not convinced about minimizing weirdness on important topics. Some counter-considerations:
People like Nick Bostrom seem to acquire prestige by taking on many controversial ideas at once. If Bostrom’s only schtick were anthropic bias, he probably wouldn’t have reached FP’s top 100 thinkers.
Focusing on only one controversial issue may make you appear single-minded, like “Oh, that guy only cares about X and can’t see that Y and Z are also important topics.”
If you advocate many things, people can choose the one they agree with most or find easiest to do.
Thanks, Wes_W. :)
When using SIA (which is actually an abbreviation of SSA+SIA), there are no reference classes. SIA favors hypotheses in proportion to how many copies of your subjective experiences they contain. Shulman and Bostrom explain why on p. 9 of this paper, in the paragraph beginning with “In the SSA+SIA combination”.
We know the filter on Earth (if any) can’t be early or middle because we’re here, though we don’t know what the filter looks like on planets in general. If the filter is late, there are many more boxes at our general stage. But SIA doesn’t care how many are at our general stage; it only cares how many are indistinguishable from us (including having the label “Earth” on the box). So no update.
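A toy version of this counting, with invented numbers: suppose a late-filter hypothesis predicts 1,000 boxes at our general stage while an early-filter hypothesis predicts 10, but each predicts exactly one box indistinguishable from ours (the one labeled “Earth”). Weighting each hypothesis only by that indistinguishable count,

\[
\frac{P(\text{late} \mid E)}{P(\text{early} \mid E)} \;=\; \frac{P(\text{late}) \times 1}{P(\text{early}) \times 1} \;=\; \frac{P(\text{late})}{P(\text{early})},
\]

and the 1,000-vs.-10 difference at our general stage never enters.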
Nice point. :)
That said, your example suggests a different difficulty: people who happen to have special numbers n get higher weight for apparently no reason. Maybe one way to address this is to note that which number n someone gets depends on (1) how the list is enumerated and (2) which universal Turing machine is used for KC in the first place, and maybe averaging over these arbitrary details would blur the specialness of, say, the 1-billionth observer under any particular coding scheme. Still, I doubt the KCs of different people would be exactly equal even after such adjustments.