Like, I think a persuasive or reasonable paper would have put its central load-bearing assumptions up front.
More up front than in the title?
“…it obviously shouldn’t be the basis of societal decision-making, and luckily also isn’t.”
Societal decision-making typically uses a far narrower basis than the general person-affecting stance that this paper analyzes. For example, not only do voters and governments usually not place much weight on not-yet-conceived people that might come into existence in future millennia, but they care relatively little about what happens to currently existing persons in other countries.
It’s not in the title, which is “Optimal Timing for Superintelligence: Mundane Considerations for Existing People”. My guess is you were maybe hoping that people would interpret “considerations for existing people” as equivalent to “person-affecting views”, but that IMO doesn’t make any sense. A person-affecting assumption is not anywhere close to equivalent to “considerations for existing people”.
Existing people care about the future, and the future of humanity! If existing people (including me) didn’t care about future people, then the person-affecting view would indeed be correct, but people do!
For example, not only do voters and governments usually not place much weight on not-yet-conceived people that might come into existence in future millennia, but they care relatively little about what happens to currently existing persons in other countries.
Voters and governments put enormous weight on not-yet-conceived people! The average planning horizon for climate change regulation extends many decades into the future. Nuclear waste management policies are expected to contain waste for hundreds of years. If anything, I think current governance tends to put far too much weight on the future relative to its actual ability to predict the future (indeed, I expect neither nuclear waste nor climate change to still be relevant by the time they are forecasted to have large impacts).
It’s true that governments care less about what happens to people outside of their country, but that just seems like an orthogonal moral issue. They do very much care about their own country, and routinely make plans that extend beyond the median life expectancy of the people within it. (Usually this is a bad idea, because they aren’t actually able to predict the future well enough to make plans that far out, but extinction risk is one of the cases where you can predict what will happen that far out: you know that you don’t have a country anymore if everyone in it is dead.)
Caring about future generations seems common, if not practically universal, in policymaking. All the variance in why policymaking tends to focus on short-term effects is explained by the fact that the future is hard to predict, not by a lack of caring by governance institutions about the future of their countries or humanity at large. And that unpredictability simply doesn’t apply to extinction risks. Maybe you have some other evidence that convinced you that countries and policy-makers operate on person-affecting views?
I am very confident that if you talk to practically any elected politician and ask them “how bad would it be for everyone in the world to become infertile, but otherwise lead happy lives until their deaths?”, their reaction would be “that would be extremely catastrophic and bad, humanity would soon be extinct, that would be extremely terrible” (inasmuch as you can get them to engage with the hypothetical seriously, which is of course often difficult).
The average planning horizon for climate change regulation extends many decades into the future. Nuclear waste management policies are expected to contain waste for hundreds of years. … Maybe you have some other evidence that convinced you that countries and policy-makers operate on person-affecting views?
Of course they don’t consistently operate on any specific moral view. But I would claim that they are less badly approximated by ‘benefit currently existing citizens’ than by ‘neutrally benefit all possible future people (or citizens) that might be brought into existence over future eons’. Much less is spent on things like nuclear waste management and preventing climate change than on providing amenities for the current population. In fact, they may be spending a net negative amount of resources on trying to benefit future generations, since they are often saddling those generations with vast debt burdens in order to fund current consumption. (FHI—particularly Toby Ord—was involved in some efforts to infuse a little more consideration of future generations into UK policymaking, but I think only very limited inroads were made on that front.)
Yep, I am definitely not saying that current governance cares about future people as much as it does about current people! (My guess is I don’t either, but I don’t know; morality is tricky and I don’t have super strong stances on population ethics.)
But “not caring equally strongly about future people” and “being indifferent to human extinction as long as everyone alive gets to spend the rest of their days happy” are of course drastically different. You are making the second assumption in the paper, which even setting aside whether it’s a reasonable assumption on moral grounds, is extremely divorced from how humanity makes governance decisions (and even more divorced from how people would want humanity to make policy decisions, which would IMO be the standard to aspire to for a policy analysis like this).
In other papers (e.g. Existential Risks (2001), Astronomical Waste (2003), and Existential Risk Prevention as a Global Priority (2013)) I focus mostly on what follows from a mundane impersonal perspective. Since that perspective is even further out of step with how humanity makes governance decisions, is it your opinion that those papers should likewise be castigated? (Some people who hate longtermism have done so, quite vehemently.) But my view is that there can be value in working out what follows from various possible theoretical positions, especially ones that have a distinguished pedigree and are taken seriously in the intellectual tradition. Certainly this is a very standard thing to do in academic philosophy, and I think it’s usually a healthy practice.
Since that perspective is even further out of step with how humanity makes governance decisions, is it your opinion that those papers should likewise be castigated?
I am not fully sure what you are referring to by “mundane impersonal perspective”, but I like all of those papers. I think they are both substantially closer to capturing actual decision-making and closer to what seems to me like good decision-making. They aren’t perfect (I could critique them as well), but my relationship to the perspective brought up in those papers is not the same as my relationship to the sociopathic example I mention upthread, and I don’t think they have many obvious reductio-ad-absurdum cases that violate common-sense morality (nor do I remember these papers advocating for such things, though it’s been a while since I read them).
But my view is that there can be value in working out what follows from various possible theoretical positions, especially ones that have a distinguished pedigree and are taken seriously in the intellectual tradition. Certainly this is a very standard thing to do in academic philosophy, and I think it’s usually a healthy practice.
Absolutely agree there is value in mapping out these kinds of things! But again, your paper quite unambiguously (to me) does not maintain the usual “if X then Y” structure. It repeatedly falls back into making statements from an all-things-considered viewpoint, using the person-affecting view as a load-bearing argument in those statements (I could provide more quotes of it doing so).
And then separately, the person-affecting view just really doesn’t seem very interesting to me as a thing to extrapolate. I don’t know why you find it interesting. It seems to me like an exceptionally weak starting point, with obvious giant holes in its ethical validity, which makes exploring its conclusions much less interesting than the vast majority of other ethical frameworks (I would be substantially more interested in a deontological analysis of AI takeoff, or a virtue-ethical analysis of AI risk, or a pragmatist analysis, all of which strike me as more interesting and more potentially valid starting points than person-affecting welfare-utilitarianism).
And then beyond that, even if one were to chase out these implications, it seems like a huge improvement to include an analysis of the premises of the perspective you are chasing out, and how robust or likely to be true they are. It has been a while since I read the papers you linked, but a substantial part of at least some of them argues for and evaluates the validity of the ethical assumptions behind caring about the cosmic endowment. Your most recent paper seems much weaker on this dimension (though my memory might be betraying me, and it’s plausible I would have the same criticism if I were to reread your past work; but even then, arguing from an approximately correct premise, even one left unevaluated, is clearly better than arguing from an IMO obviously incorrect premise without evaluating it as such).
And then separately, the person-affecting view just really doesn’t seem very interesting to me as a thing to extrapolate.
[didn’t read original, just responding locally] My impression is that often people do justify AGI research using this sort of view. (Do you disagree?) That would make it an interesting view to extrapolate, no?