Since that perspective is even further out of step with how humanity makes governance decisions, is it your opinion that those papers should likewise be castigated?
I am not fully sure what you are referring to by “mundane impersonal perspective”, but I like all of those papers. I think they are both substantially closer to capturing actual decision-making and closer to what seems to me like good decision-making. They aren’t perfect (I could critique them as well), but my relationship to the perspective brought up in those papers is not the same as the one I have to the sociopathic example I mention upthread, and I don’t think they have many obvious reductio-ad-absurdum cases that violate common-sense morality (and I do not remember these papers advocating for those things, but it’s been a while since I read them).
But my view is that there can be value in working out what follows from various possible theoretical positions, especially ones that have a distinguished pedigree and are taken seriously in the intellectual tradition. Certainly this is a very standard thing to do in academic philosophy, and I think it’s usually a healthy practice.
Absolutely agree there is value in mapping out these kinds of things! But again, to me your paper really unambiguously does not maintain the usual “if X then Y” structure. It repeatedly falls back into making statements from an all-things-considered viewpoint, using the person-affecting view as a load-bearing argument in those statements (I could provide more quotes of it doing so).
And then separately, the person-affecting view just really doesn’t seem very interesting to me as a thing to extrapolate. I don’t know why you find it interesting. It seems to me like an exceptionally weak starting point, with obvious giant holes in its ethical validity that make exploring its conclusions much less interesting than the vast majority of other ethical frameworks (like, I would be substantially more interested in a deontological analysis of AI takeoff, or a virtue-ethical analysis of AI risk, or a pragmatist analysis, all of which strike me as more interesting and more potentially valid starting points than person-affecting welfare-utilitarianism).
And then beyond that, even if one were to chase out these implications, it seems like a huge improvement to include an analysis of how likely the premises of the perspective being chased out are to be true, and how robust they are. It has been a while since I read the papers you linked, but much of at least some of them is devoted to arguing for and evaluating the validity of the ethical assumptions behind caring about the cosmic endowment. Your most recent paper seems much weaker on this dimension (though my memory might be betraying me, and it’s plausible I would have the same criticism if I were to reread your past work; though even then, arguing from an approximately correct premise, even if the premise is left unevaluated, is clearly better than arguing from an IMO obviously incorrect premise without evaluating it as such).
And then separately, the person-affecting view just really doesn’t seem very interesting to me as a thing to extrapolate.
[didn’t read original, just responding locally] My impression is that people often do justify AGI research using this sort of view. (Do you disagree?) That would make it an interesting view to extrapolate, no?