I don’t exactly disagree with anything you wrote but would add:
First, things like “voting for the better candidate in a national election” (assuming you know who that is) have a very small probability (e.g. 1 in a million) of having a big positive counterfactual impact (if the election gets decided by that one vote). Or suppose you donate $1 to a criminal justice reform advocacy charity; what are the odds that the law gets changed because of that extra $1? The original quote was “small probabilities of helping out extremely large numbers” but then you snuck in sign uncertainty in your later discussion (“0.051% chance of doing good and a 0.049% chance of doing harm”). Without the sign uncertainty I think the story would feel quite different.
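To make that contrast concrete, here is a back-of-the-envelope sketch in Python. The payoff magnitude and the 0.1% figure in the first case are illustrative numbers I’ve picked (chosen so the total probability of a large effect matches the second case); only the 0.051% / 0.049% split comes from the discussion quoted above.

```python
# Back-of-the-envelope comparison: "small probability of a large benefit"
# versus near-symmetric sign uncertainty. The payoff size and the 0.1%
# figure in case 1 are arbitrary placeholders; the 0.051% / 0.049% split
# is the one quoted above.

payoff = 1e9  # magnitude of the "extremely large" impact (arbitrary units)

# Case 1: small probability of helping, no chance of harming.
p_good_only = 0.001  # 0.1%
ev_without_sign_uncertainty = p_good_only * payoff

# Case 2: nearly offsetting chances of helping and harming.
p_good, p_harm = 0.00051, 0.00049  # 0.051% and 0.049%
ev_with_sign_uncertainty = (p_good - p_harm) * payoff

print(f"EV, no sign uncertainty:   {ev_without_sign_uncertainty:,.0f}")
print(f"EV, with sign uncertainty: {ev_with_sign_uncertainty:,.0f}")
# The chance of *some* large effect is ~0.1% in both cases, but the net
# expected value drops by a factor of ~50 once the harm term is subtracted.
```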
Second, if you look at the list of interventions that self-described longtermists are actually funding and pursuing right now, I think the vast majority (weighted by $) would be not only absolutely well worth doing, but even in the running for the best possible philanthropic thing to do for the common good, even if you only care about people alive today (including children) having good lives. (E.g. the top two longtermist things are, I think, pandemic prevention and AGI-apocalypse prevention.) I know people make weird-sounding philosophical cases for these things, but I think that’s just because EA is full of philosophers who find it fun to talk about that kind of stuff; it’s not decision-relevant on the current margin whether the number of future humans could be 1e58 vs. merely 1e11 or whatever.
I agree that longtermist priorities tend also to be beneficial in the near term, and that sign uncertainty is perhaps a more central consideration than the initial post lets on.
However, I do want to push back on the voting example. I think the point about small probabilities mattering in an election holds if, as you say, we assume we know who the better candidate is. But it seems unlikely to me that we can ever have that kind of sign certainty over a longtermist time horizon.
To illustrate this, I’d like to reconsider the voting example in the context of a long time horizon. Can we ever know which candidate is best for the long-term future? Even if we imagine a highly incompetent or malicious leader, the flow-through effects of that person’s tenure in office are highly unpredictable over the long term. For any bad leader you identify from the past, a case could be made that the counterfactual where they weren’t in power would have been worse. And that’s only over years, decades, or centuries. If humanity has a very long future, the long-term impacts are much, much more uncertain than that.
I think we can say some things with reasonable certainty about the long-term future. Two examples:
First, if humans go extinct in the next couple decades, they will probably remain extinct ever after.
Second, it’s at least possible for a powerful AGI to become a singleton, wipe out or disempower other intelligent life, and remain stably in control of the future for the next bajillion years, including colonizing the galaxy or whatever. After all, AGIs can make perfect copies of themselves, AGIs don’t age like humans do, etc. And this hypothetical future singleton AGI is something that might potentially be programmed by humans who are already alive today, as far as anyone knows.
(My point in the second case is not “making a singleton AGI is something we should be trying to do, as a way to influence the long-term future”. Instead, my point is “making a singleton AGI is something that people might do, whether we want them to or not … and moreover those people might do it really crappily, like without knowing how to control the motivations of the AGI they’re making. And if that happens, that could be an extremely negative influence on the very long-term future. So one way to have an extremely positive influence on the very long-term future is to prevent that bad thing from happening.”)
I agree with this, but I think I’m making a somewhat different point.
An extinction event tomorrow would create significant certainty, in the sense that it determines the future outcome. But its value is still highly uncertain, because the sign of the curtailed future is unknown. A bajillion years is a long time, and I don’t see any reason to presume that a bajillion years of increasing technological power and divergence from the 21st-century human experience will be positive on net. I hope it is, but I don’t think my hope resolves the sign uncertainty.