I think there’s an implicit assumption of near-zero discount rates here, which the majority of the human population probably doesn’t share. If your utility function is such that you care very little about what happens after you die, and/or you mostly care about people in your immediate surroundings, your P(DOOM) needs to be substantially higher before you start caring significantly.
That’s before getting into Pascal’s-mugging-type arguments, on which you shouldn’t let an unconvincing probability of some very large outcome drive significant life choices.
This is not to say that I’m against x-risk research – my P(DOOM) is about 60% or so. This is more just to say that I’m not sure people with a non-EA worldview should necessarily be convinced by your arguments.
Discount rates are a cheap stand-in for three effects, none of which apply to P(DOOM):
a) difficulty of predicting the future. That extinction is forever is not a difficult prediction. (In other news, Generalissimo Francisco Franco is still dead.)
b) someone closer to the time (possibly even me) may handle that. But not if everybody is dead.
c) GDP growth rates. Which are zero if everybody is dead.
(Or to quote a bald ASI, even three million years into the future it remains true that: Everybody is dead, Dave.)
But yes, I should have pointed out that in this particular case, the normal assumption that you can safely ignore the far future and it will take care of itself does not apply.
Hmm, perhaps. My intuition behind discount rates is different, but I’m not sure it’s a crux here. I agree that extinction leads to zero utility for everyone everywhere, but the point I was making was more that with a low discount rate the massive potential of humanity carries significant weight, while a high discount rate sends it to near zero.
In this worldview, near-extinction is no longer significantly better than extinction.
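To put toy numbers on that intuition, here’s a minimal sketch (my own illustrative model, nothing from the post): it compares how much of a constant utility stream’s present value lies beyond the next century under a near-zero versus a ~5% annual discount rate.

```python
# Toy model: present value of a constant stream of 1 util/year, discounted
# at annual rate r, summed out to a given horizon. Numbers are illustrative
# assumptions only, not anyone's actual utilities.

def present_value(r: float, horizon_years: int) -> float:
    """Sum of (1/(1+r))**t for t = 0..horizon_years-1 (closed-form geometric series)."""
    d = 1.0 / (1.0 + r)                      # per-year discount multiplier
    return (1.0 - d ** horizon_years) / (1.0 - d)

for r in (0.001, 0.05):                      # "near-zero" vs "typical" discount rate
    next_century = present_value(r, 100)
    next_million = present_value(r, 1_000_000)
    far_future_share = 1.0 - next_century / next_million
    print(f"r={r:<6} next 100y = {next_century:8.1f}   "
          f"next 1My = {next_million:8.1f}   "
          f"far-future share = {far_future_share:.1%}")
```

With r = 0.001 roughly 90% of the value lies beyond the first century, so losing the far future (extinction rather than near-extinction) dominates; with r = 0.05 it’s under 1%, and the two scenarios look almost identical once the immediate deaths are counted.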
That aside, I think the stronger point is that if you only care about people near to you, spatially and temporally (as I think most people implicitly do), the thing you end up caring about is the death of maybe 10–1,000 people (discounted by your familiarity with them, so probably at most equivalent to ~100 deaths of nearby family) rather than 8 billion.
Some napkin maths as to how much someone with that sort of worldview should care: a 0.01% chance of doom in the next ~20 years then gives ~1% of an equivalent expected death over those 20 years. 20 years is ~175,000 hours, which would make it about 7.5x less worrisome than driving according to this infographic.
Again, very napkin maths, but I think my basic point is that a 0.01% P(Doom) coupled with a non-longtermist, non-cosmopolitan view seems very consistent with “who gives a shit”.
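For concreteness, here’s that napkin arithmetic spelled out as a quick sketch. The 0.01% and the ~100-equivalent-deaths figure are the ones from above; the driving comparison itself depends on whatever per-hour baseline the infographic uses, so the sketch only gets as far as a micromort rate.

```python
# Napkin version of the calculation above.
p_doom_20y        = 1e-4             # 0.01% chance of doom in the next ~20 years
equivalent_deaths = 100              # "nearby-family-equivalent" deaths a non-cosmopolitan cares about
hours_in_20y      = 20 * 365 * 24    # ~175,000 hours

expected_equiv_deaths = p_doom_20y * equivalent_deaths        # ~0.01 of an equivalent death
micromorts_total      = expected_equiv_deaths * 1_000_000     # 1 micromort = a 1-in-a-million chance of death
micromorts_per_hour   = micromorts_total / hours_in_20y

print(f"hours in 20 years:          {hours_in_20y:,}")
print(f"expected equivalent deaths: {expected_equiv_deaths:.3f}")
print(f"risk rate:                  ~{micromorts_per_hour:.2f} micromorts/hour")
```

That comes out to roughly 0.06 micromorts per hour, spread evenly over the 20 years.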
Such a person is very badly miscalculating their evolutionary fitness — but then, what else is new?
The number of relations grows exponentially with distance while genetic relatedness roughly halves with each step out, so if you assume e.g. 1 sibling, 2 cousins, 4 second cousins, etc., each layer contributes about the same amount of fitness and the total grows with the log of the population. log2(8 billion) ≈ 33. So a Fermi estimate of ~100 seems around right?
If anything, I get the impression this is overestimating how much people actually care, because there’s probably an upper bound somewhere before this point.
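Spelling that Fermi estimate out as a quick sketch, under the toy model where each step outward doubles the number of relatives and halves the relatedness (real coefficients of relatedness fall off faster than that, so treat this as a cartoon):

```python
import math

population = 8_000_000_000
layers = math.ceil(math.log2(population))    # ~33 "rings" of relatives out from you

# Toy model: layer k holds 2**(k-1) relatives, each with relatedness 2**-k,
# so every layer contributes the same half-a-sibling's-worth of shared genes.
total_contribution   = sum(2 ** (k - 1) * 2 ** -k for k in range(1, layers + 1))
sibling_contribution = 0.5                   # one sibling at relatedness 1/2

print(f"layers:              {layers}")                                      # 33
print(f"total contribution:  {total_contribution}")                          # 16.5
print(f"sibling equivalents: {total_contribution / sibling_contribution}")   # 33.0
```

Each layer contributes half a sibling’s worth of shared genes, ~33 layers cover ~8 billion people, and the whole population comes out at ~33 sibling-equivalents, which is where the ~33x figure below comes from.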
If your species goes extinct, your genetic fitness just went to 0, along with everyone else’s. Species-level evolution is also a thing.
Is the implication here that you should also care about genetic fitness as carried into the future? My basic calculation was that, in purely genetic terms, you should care about the entire Earth’s population ~33x as much as a sibling (modulo family trees being a lot messier at this scale, so you probably care about it somewhat more than that).
I feel like at this scale the fundamental thing is that we are just straight up misaligned with evolution (which I think we agree on).
Indeed. I’m enough of a sociobiologist to sometimes put some intellectual effort into trying to be aligned with evolution, but I attempt not to overdo it.
Far more likely, they’re not calculating their evolutionary fitness at all. Our having emotions and values that are downstream of evolution doesn’t imply that we have a deeper goal of maximising fitness.