What about all the future people that would no longer get a chance to exist—do they count? Do you value continued existence and prosperity of human civilization above and beyond the individual people? For me, it’s a strong yes to both questions, and that does change the calculus significantly!
Also, is the calculator setting non-doom post-AGI mortality to zero by capping the horizon at AGI and counting only pre-AGI deaths?
For example: suppose time to AGI is 10 years with no pause, and a pause adds another 10 years (so 20 years to AGI). Then the calculator arrives at 60M × 10 deaths for no-pause vs. 60M × 20 for pause. But if post-AGI mortality only halves rather than dropping to zero, the fair no-pause comparison over the same 20-year horizon should be 60M × 10 + 0.5 × 60M × 10.
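Spelling that comparison out as a minimal sketch (the 60M deaths/year, the 10-year timelines, and the 50% post-AGI mortality factor are just the illustrative assumptions above, not numbers from the calculator itself):

```python
# Illustrative assumptions from the comment above.
DEATHS_PER_YEAR = 60_000_000
YEARS_TO_AGI_NO_PAUSE = 10
PAUSE_LENGTH = 10
HORIZON = YEARS_TO_AGI_NO_PAUSE + PAUSE_LENGTH  # 20-year comparison window

# What the calculator appears to do: cap the horizon at AGI and count only pre-AGI deaths.
deaths_no_pause_capped = DEATHS_PER_YEAR * YEARS_TO_AGI_NO_PAUSE                    # 600M
deaths_pause_capped = DEATHS_PER_YEAR * (YEARS_TO_AGI_NO_PAUSE + PAUSE_LENGTH)      # 1.2B

# Fairer no-pause comparison if post-AGI mortality only halves instead of going to zero.
POST_AGI_MORTALITY_FACTOR = 0.5
post_agi_years = HORIZON - YEARS_TO_AGI_NO_PAUSE                                    # 10 years
deaths_no_pause_fair = (DEATHS_PER_YEAR * YEARS_TO_AGI_NO_PAUSE
                        + POST_AGI_MORTALITY_FACTOR * DEATHS_PER_YEAR * post_agi_years)  # 900M

print(deaths_no_pause_capped, deaths_pause_capped, deaths_no_pause_fair)
```

With those numbers the capped comparison is 600M vs. 1.2B, while the halved-mortality comparison over the same 20 years is 900M vs. 1.2B.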
I made the assumption that mortality is cured by superintelligent AGI if the AGI is aligned, yes.
Yes, it's quite weird to place zero value on people after AGI.
If you expect, e.g., p(extinction|no pause) − p(extinction|pause) = 0.1% and 1 trillion people after AGI, then pausing saves a billion people after AGI in expectation.
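A quick sanity check of that expected-value arithmetic (the 0.1% and 1 trillion figures are just the assumptions above):

```python
delta_p_extinction = 0.001          # p(extinction | no pause) - p(extinction | pause)
future_people = 1_000_000_000_000   # assumed 1 trillion people after AGI
print(f"{delta_p_extinction * future_people:,.0f}")  # 1,000,000,000 people saved in expectation
```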
I think it is definitely not a classical utilitarian view, but that doesn't trouble me. If you are a classical utilitarian, you can always set the value of S very high.
To explain briefly why I don't care about trying to create more people: I am motivated by empathy. I want the people who exist to be doing well, but I don't care much about maximizing the number of people doing well. Utilitarianism seems to imply we should tile the universe with flourishing humans, but that doesn't seem valuable to me. I don't see every wasted sperm cell as some kind of tragedy, a future person who could have existed. I don't think the empty universe before humanity came along was good or bad. I don't think in terms of good or bad: things just are, and I like them or I don't. I don't like it when people suffer or die. That's it.
I was most confused about ‘S’, and likely understood it quite differently than intended.
I understood S as roughly "Humanity stays in control after AGI, but slowly (over decades or centuries) becomes fewer and less relevant". In many of these cases I'd expect something morally valuable to replace humans, so I put S lower than R.
Could it make sense to introduce "population after AGI that you care about" as an explicit term? I think that would be clearer.
I meant for this to be factored into the scenario by S. I almost don’t care at all about future people getting a chance to exist, so I put S very low. If you disagree, you’ll set S higher.
There was a thought experiment in the post intended to help you think about how much you care about humanity vs. the individual humans who exist.