If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying.
This only makes sense given large fixed costs of cryonics (but you can just not make it publicly known that you’ve signed up for a policy, and the hassle of setting one up is small compared to other health and fitness activities) and extreme (dubious) confidence in quick technological advance, given that we’re talking about insurance policies.
To put it another way, if you correctly take into account structural uncertainty about the future of the world, you can’t be that confident that the singularity will happen in your lifetime.
Note that I did not make any arguments against the technological feasibility of cryonics, because they all suck. Likewise, and I’m going to be blunt here, all arguments against the feasibility of a singularity that I’ve seen also suck. Taking into account structural uncertainty around nebulous concepts like identity, subjective experience, measure, et cetera, does not lead to any different predictions around whether or not a singularity will occur (but it probably does have strong implications on what type of singularity will occur!). I mean, yes, I’m probably in a Fun Theory universe and the world is full of decision theoretic zombies, but this doesn’t change whether or not an AGI in such a universe looking at its source code can go FOOM.
Will, the singularity argument above relies not just on the likely long-term feasibility of a singularity, but on the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.
With reasonable fixed costs, that means something like assigning 95%+ probability to a singularity in less than five years. Unless one has incredible private info (e.g. working on a secret government project with a functional human-level AI), that would require an insane prior.
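To make the shape of that concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (death rate, cryonics success probability, value placed on revival, fixed cost) is an illustrative assumption, not anyone’s actual estimate:

```python
# Rough expected-value sketch of the "singularity too soon for cryonics to matter" claim.
# Every number here is an illustrative assumption, not anyone's actual estimate.

annual_death_rate = 0.001      # assumed yearly death risk for a healthy young adult
p_cryonics_works  = 0.05       # assumed chance that preservation plus revival succeeds
value_of_revival  = 1_000_000  # assumed dollar-equivalent value placed on being revived
fixed_cost        = 500        # assumed one-off hassle cost of signing up (dollar-equivalent)

def expected_benefit(p_singularity_soon, horizon_years=5, residual_years=50):
    """Expected value of the cryonics 'insurance', ignoring ongoing premiums.

    With probability p_singularity_soon the singularity arrives within horizon_years,
    so you only face ordinary death risk over that short window; otherwise you face
    it over residual_years.
    """
    p_die_if_soon = 1 - (1 - annual_death_rate) ** horizon_years
    p_die_if_late = 1 - (1 - annual_death_rate) ** residual_years
    p_die_first = (p_singularity_soon * p_die_if_soon
                   + (1 - p_singularity_soon) * p_die_if_late)
    return p_die_first * p_cryonics_works * value_of_revival

for p in (0.5, 0.9, 0.95, 0.99):
    print(f"P(singularity within 5y) = {p:.2f}: benefit ~${expected_benefit(p):,.0f} "
          f"vs fixed cost ${fixed_cost}")
```

With these particular made-up inputs the expected benefit only drops below the assumed fixed cost somewhere around a 90% five-year probability; the exact threshold obviously depends entirely on the assumptions, but the shape matches the claim above.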
I never argued that this objection alone is enough to tip the scales in favor of not signing up. It is mostly this argument combined with the idea that loss of measure on the order of 5-50% really isn’t all that important when you’re talking about multiverse-affecting technologies; no, really, I’m not sure 5% of my measure is worth having to give up half a Hershey’s bar every day, when we’re talking crazy post-singularity decision theoretic scenarios from one of Escher’s worst nightmares. This is even more salient if those Hershey bars (or airport parking tickets or shoes or whatever) end up helping me increase the chance of getting access to infinite computational power.
Wut. Is this a quantum immortality thing?
No, unfortunately it’s much more complicated and much fuzzier; it’s a Pascalian thing. Basically, if post-singularity (or pre-singularity if I got insanely lucky for some reason—in which case this point becomes a lot more feasible) I get access to infinite computing power, it doesn’t matter how much of my measure gets through, because I’ll be able to take over any ‘branches’ I could have been able to reach with my measure otherwise. This relies on some horribly twisted ideas in cosmology / game theory / decision theory that will, once again, not fit in the margin. Outside view, it’s over a 99% chance these ideas are totally wrong, or ‘not even wrong’.
My understanding was that in policies like the one Roko was describing you’re not paying year by year, you’re paying for a lifetime thing where in the early years you’re mostly paying for the rate not to go up in later years. Is this inaccurate? If it’s year by year, $1/day seems expensive on a per-life basis given that the population-wide rate of death is something like 1 in 1000 for young people, probably much less for LWers and much less still if you only count the ones leaving preservable brains.
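For what it’s worth, here is a quick sanity check on the year-by-year arithmetic; the payout figure is a made-up but cryonics-sized face value, not a real quote:

```python
# Back-of-the-envelope check on whether "$1/day" looks like year-by-year term pricing.
# The payout figure is a made-up but plausible cryonics-sized face value.

payout           = 100_000  # assumed life-insurance face value covering cryonics fees
p_death_per_year = 0.001    # ~1 in 1000 yearly death risk for young people (from the comment)
premium_per_year = 365      # the "$1/day" figure

actuarially_fair = p_death_per_year * payout  # expected payout per year of pure term coverage
print(f"actuarially fair yearly premium ~${actuarially_fair:.0f} vs quoted ${premium_per_year}")
# ~$100 vs $365: pure year-by-year term coverage should be cheaper than the quoted price,
# which fits the reading that $1/day is a level premium pre-paying for riskier later years.
```

Which is at least consistent with reading the $1/day as a level, whole-life-style premium rather than pure term pricing.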
How serious 0-10, and what’s a decision theoretic zombie?
A being that has so little decision theoretic measure across the multiverse as to be nearly non-existent due to a proportionally infinitesimal amount of observer-moment-like-things. However, the being may have very high information theoretic measure to compensate. (I currently have an idea, which Steve thinks is incorrect, arguing that information theoretic measure correlates roughly with the reciprocal of decision theoretic measure, which itself is very well-correlated with Eliezer’s idea of optimization power. This is all probably stupid and wrong but it’s interesting to play with the implications (like literally intelligent rocks, me [Will] being ontologically fundamental, et cetera).)
I’m going to say that I’m an 8 on that 0-10 seriousness scale: I think things will probably turn out to really not add up to ‘normality’, whatever your average rationalist thinks ‘normality’ is. Some of the implications of decision theory really are legitimately weird.
What do you mean by decision theoretic and information theoretic measure? You don’t come across as ontologically fundamental IRL.
Hm, I was hoping to magically get at the same concepts you had cached but it seems like I failed. (Agent) computations that have lower Kolmogorov complexity have greater information theoretic measure in my twisted model of multiverse existence. Decision theoretic measure is something like the notion of significance you told me to talk to Steve Rayhawk about: the idea that one shouldn’t care about events one has no control over, combined with the (my own?) idea that having oneself cared about by a lot of agent-computations, and thus made more salient to more decisions, is another completely viable way of increasing one’s measure. Throw in a judicious mix of anthropic reasoning, optimization power, ontology of agency, infinite computing power in finite time, ‘probability as preference’, and a bunch of other mumbo jumbo, and you start getting some interesting ideas in decision theory. Is this not enough to hint at the conceptspace I’m trying to convey?
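In case a concrete toy helps, here is a minimal sketch of the kind of weighting that ‘lower Kolmogorov complexity means greater information theoretic measure’ seems to gesture at: a Solomonoff-style 2^-length prior. Using zlib-compressed length as a stand-in for the (uncomputable) true complexity, and the two example strings, are my own illustrative choices:

```python
import zlib

# Toy illustration of "lower Kolmogorov complexity -> greater information theoretic measure",
# using zlib-compressed length as a crude, computable stand-in for K (an assumption; true
# Kolmogorov complexity is uncomputable). Each description gets a Solomonoff-style 2^-length
# weight, and the weights are then normalized so they sum to one.

descriptions = {
    "regular":   b"abab" * 64,       # highly regular string, compresses well
    "irregular": bytes(range(256)),  # much less regular, compresses poorly
}

weights = {name: 2.0 ** -len(zlib.compress(data)) for name, data in descriptions.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: normalized weight {w / total:.6g}")
# The more compressible (lower-complexity) description ends up with essentially all of the
# normalized "measure"; the irregular one gets a vanishingly small share.
```

This only toys with the information theoretic half; the decision theoretic half (being made salient to more agent-computations’ decisions) doesn’t have an obvious toy computation like this.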
“You don’t come across as ontologically fundamental IRL.” Ha, I was kind of trolling there, but something along the lines of ‘I find myself as me because I am part of the computation that has the greatest proportional measure across the multiverse’. It’s one of many possible explanations I toy with as to why I exist. Decision theory really does give one the tools to blow one’s philosophical foot off. I don’t take any of my ideas too seriously, but collectively, I feel like they’re representative of a confusion that not only I have.
If you were really the only non-zombie in a Fun Theory universe then you would be the AGI going FOOM. What could be funner than that?
Yeah, that seems like a necessary plot point, but I think it’d be more fun to have a challenge first. I feel like the main character(s) should experience the human condition or whatever before they get a taste of true power, or else they’d be corrupted. First they gotta find something to protect. A classic story of humble beginnings.
Agreed. Funnest scenario is experiencing the human condition, then being the first upload to go FOOM. The psychological mind games of a transcending human. Understanding fully the triviality of human emotions that once defined you, while at the same moment modifying your own soul in an attempt to grasp onto your lingering sanity, knowing full well that the fate of the universe and billions of lives hangs in the balance. Sounds like a hell of a rollercoaster.
Not necessarily. Someone may, for example, put a very high confidence in an upcoming technological singularity but a very low confidence in some other technology. To use one obvious example, it is easy to see how someone would estimate the chance of a singularity in the near future to be much higher than the chance that we will have room temperature superconductors. And you could easily have high confidence in your estimate for one technology and not in your estimate for another (a solid state physicist, for example, might be much more confident in their estimate for the superconductors). I’m not sure what estimates one would use to reach this class of conclusion with cryonics and the singularity, but at first glance this is a consistent approach.
Logical consistency, whilst admirably defensible, is way too weak a condition for a belief to satisfy before I call it rational.
It is logically consistent to assign probability 1-10^-10 to the singularity happening next year.
Right, but if it only satisfies minimal logical consistency, that means there’s some thinking that needs to go on. And having slept on this I can now give other plausible scenarios for someone to hold this sort of position. For example, someone might put a high probability on a coming singularity but a low probability on effective nanotech ever being good enough to restore brain function. Or if you believe that the vitrification procedure damages neurons in a fashion that is likely to permanently erase memory, then this sort of attitude would make sense.