I can’t help thinking that such far-fetched arguments can easily fail because they make hidden assumptions about what’s possible in reality and what isn’t, and there’s no easy way to notice these assumptions except by making genuine conceptual progress. For example, if civilization-level quantum suicide works, it makes your final statement false (and also defuses the Fermi paradox).
How do you think it makes Bostrom’s conclusion false? Because a post-QM-suicide singleton makes lots of ancestor sims, but those sims have little ‘measure’? I wouldn’t really count that. (Aside from the fact that quantum suicide seems highly unlikely to be optimal, it’s the same as the “only care about the copy of you that wins the lottery” confusion.)
I agree that the scenarios Bostrom talks about aren’t interesting. I just figured I’d critique them since a lot of people do.