The average planning horizon for climate change regulation extends many decades into the future. Nuclear waste management policies are expected to contain waste for hundreds of years. … Maybe you have some other evidence that convinced you that countries and policy-makers operate on person-affecting views?
Of course they don’t consistently operate on any specific moral view. But I would claim that they are less badly approximated by ‘benefit currently existing citizens’ than by ‘neutrally benefit all possible future people (or citizens) that might be brought into existence over future eons’. Much less is spent on things like nuclear waste management and preventing climate change than on providing amenities for the current population. In fact, they may be spending a net negative amount of resources on trying to benefit future generations, since they are often saddling future generations with vast debt burdens in order to fund current consumption. (FHI, particularly Toby Ord, was involved in some efforts to try to infuse a little bit more consideration of future generations into UK policymaking, but I think only very limited inroads were made on that front.)
Yep, I am definitely not saying that current governance cares about future people as much as it does about current people! (My guess is I don’t either, but I don’t know; morality is tricky and I don’t have super strong stances on population ethics.)
But “not caring equally strongly about future people” and “being indifferent to human extinction as long as everyone alive gets to spend the rest of their days happy” are of course drastically different. You are making the second assumption in the paper, which, even setting aside whether it’s a reasonable assumption on moral grounds, is extremely divorced from how humanity makes governance decisions (and even more divorced from how people would want humanity to make policy decisions, which would IMO be the standard to aspire to for a policy analysis like this).
In other papers (e.g. Existential Risks (2001), Astronomical Waste (2003), and Existential Risk Prevention as a Global Priority (2013)) I focus mostly on what follows from a mundane impersonal perspective. Since that perspective is even further out of step with how humanity makes governance decisions, is it your opinion that those papers should likewise be castigated? (Some people who hate longtermism have done so, quite vehemently.) But my view is that there can be value in working out what follows from various possible theoretical positions, especially ones that have a distinguished pedigree and are taken seriously in the intellectual tradition. Certainly this is a very standard thing to do in academic philosophy, and I think it’s usually a healthy practice.
Since that perspective is even further out of step with how humanity makes governance decisions, is it your opinion that those papers should likewise be castigated?
I am not fully sure what you are referring to by “mundane impersonal perspective”, but I like all of those papers. I think they are both substantially closer to capturing actual decision-making and closer to what seems to me like good decision-making. They aren’t perfect (I could critique them as well), but my relationship to the perspective brought up in those papers is not the same as my relationship to the sociopathic example I mention upthread, and I don’t think they have many obvious reductio-ad-absurdum cases that violate common-sense morality (and I do not remember these papers advocating for such things, but it’s been a while since I read them).
But my view is that there can be value in working out what follows from various possible theoretical positions, especially ones that have a distinguished pedigree and are taken seriously in the intellectual tradition. Certainly this is a very standard thing to do in academic philosophy, and I think it’s usually a healthy practice.
Absolutely agree there is value in mapping out these kinds of things! But again, to me your paper really unambiguously does not maintain the usual “if X then Y” structure. It repeatedly falls back into making statements from an all-things-considered viewpoint, using the person-affecting view as a load-bearing argument in those statements (I could provide more quotes of it doing so).
And then separately, the person-affecting view just really doesn’t seem very interesting to me as a thing to extrapolate. I don’t know why you find it interesting. It seems to me like an exceptionally weak starting point with obvious giant holes in its ethical validity, which make exploring its conclusions much less interesting than the vast majority of other ethical frameworks (like, I would be substantially more interested in a deontological analysis of AI takeoff, or a virtue-ethical analysis of AI risk, or a pragmatist analysis, all of which strike me as more interesting and more potentially valid starting points than person-affecting welfare-utilitarianism).
And then beyond that, even if one were to chase out these implications, it seems like a huge improvement to include an analysis of the premises of the perspective you are chasing out, and how robust or likely to be true they are. It has been a while since I read the papers you linked, but much of at least some of them is devoted to arguing for and evaluating the validity of the ethical assumptions behind caring about the cosmic endowment. Your most recent paper seems much weaker on this dimension (though my memory might be betraying me, and it’s plausible I would have the same criticism if I were to reread your past work; but even then, arguing from an approximately correct premise, even if the premise is left unevaluated, is clearly better than arguing from an IMO obviously incorrect premise without evaluating it as such).
And then separately, the person-affecting view just really doesn’t seem very interesting to me as a thing to extrapolate.
[didn’t read original, just responding locally] My impression is that people often do justify AGI research using this sort of view. (Do you disagree?) That would make it an interesting view to extrapolate, no?