I tend to resolve these issues with measure-problem hand-waving. Basically, since any possible universe exists (between quantum branching, the inflationary multiverse, and simulated/purely mathematical existence), any collection of particles (such as me sitting here) exists with a practically uncountable set of futures and pasts, many of which make no sense (Boltzmann brains). The measure problem is: why is that “many” not actually “most”? The simplest answer is the anthropic one: because that kind of existence simply “doesn’t count”. So there is some set of qualities of the universe as we know it that make it “count”; let’s call that set “consciousness”. And, personally, I think this set includes not only the existence of optimizing agents (ourselves), but also the fact that these agents are fundamentally limited in ways similar to the ways we are. In other words, the very existence of some FAI which can keep all of your bad decisions (for any given definition of “bad”) from having consequences means that “consciousness” as we know it has ended. Whatever exists on the other side of that barrier is simply incommensurable with my values here on this side. It’s “game over”. I can have perfect faith that my “me” will never see it completed, by definition, since by then I’d no longer be a conscious “me” under my definition.
That means I am much more motivated to look for (weakly) “incremental” solutions to the problems I see with the world than for truly revolutionary ones like FAI or cryonics. (I regard the evolution of humanity itself as the last “revolutionary” change, so “incremental” in this sense encompasses all human revolutions to date. The end of death would not be incremental in this sense.)
Sure, I can see how this is more of a justification for acting like a normal person than a rational exploration of fully coherent value space. Yet I can also argue that being meta-rational means justifying, rather than re-questioning, certain axioms of behavior.
Shorter me: “solving the whole world” leaves me cold, despite fun theory and all. So does ending death, or avoiding it personally. So my not signing up for cryo is perfectly rational.
While I acknowledge that this might not be the most complete and coherent possible set of values, I see no evidence that it’s specifically incoherent, and it is complete enough for me. The Singularity Institute’s set of values may be more complete and just as non-incoherent, but I suspect that mine are operationally superior, or at least less likely to be falsified, since they attain similar coherence with less divergence from evolved human behaviour.