Do you really not see how this is normative proscription? That’s the obnoxious part—just own it.
“IF you do X, THEN everyone will die” is not a normative prescription (in philosophical terminology). It’s not a statement about what people should (in the ethical sense) or ought to do, and it’s not advocating a specific set of ethical beliefs. For it to become a normative prescription, I would need to add “and everyone dying is wrong, so doing X is wrong. QED”. I very carefully didn’t add that bit; I left it as an exercise for the reader.

Now, I happen to believe that everyone dying is wrong: that is part of my personal choice of ethical system. I very strongly suspect that you, and everyone else reading this post, have also chosen personal ethical systems in which everyone dying is wrong. But because there are philosophers on this site, I am very carefully not advocating any specific normative viewpoint on anything — not even something like this that O(99.9%) of people agree on (yes, even the sociopaths agree on this one). Instead I am saying “IF you do X, THEN everyone will die” [a factual, truth-apt statement, which thus may or may not be correct: I claim it is], “Therefore, IF you don’t want everyone to die, THEN don’t do X.” That’s now advice, but still not a normative statement. Your ethics may vary (though I really hope they don’t). If someone who believed that everyone dying was a good thing read my post, they could treat it as advice that doing X was also a good thing.

I very carefully jumped through significant rhetorical hoops to avoid the normative bits, because when I write about AI ethics, anything normative tends to make the comments degenerate into a philosophical pie-fight. So I left the normative parts out, along with footnotes and asides for the philosophers pointing out that I had done so. So far, no pie fight. For the rest of my readers who are not philosophers, I’m sorry, but some of my readership are sensitive about this stuff, and I’m attempting to get it right for them.
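To make the distinction explicit, here is a rough schematic (my own notation, purely for illustration, not anything from the original post), writing D for “everyone dies”:

\[
\begin{aligned}
\text{Factual claim: } & X \rightarrow D \quad \text{(truth-apt; could be wrong)}\\
\text{Conditional advice: } & \neg \mathrm{Want}(D) \Rightarrow \text{don't do } X\\
\text{Normative prescription (omitted): } & \mathrm{Wrong}(D) \wedge (X \rightarrow D) \Rightarrow \mathrm{Wrong}(X)
\end{aligned}
\]

Only the last line imports an ethical premise; the first two do not.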
Now, was I expecting O(99.9%) of my readers to mentally add “and everyone dying is wrong, so doing X is wrong. QED” — yes, I absolutely was. But my saying, at the end of my aside addressed to any philosophers reading the post:
I will at one point below make an argument of the form “evolutionary theory tells us this behavior is maladaptive for humans: if you’re human then I recommend not doing it” — but that is practical, instrumental advice, not a normative prescription.]
was pointing out to the philosophers that I had carefully left this part as a (very easy) exercise for the reader. Glancing through your writings, my first impression is that you may not be a philosopher — if that is in fact the case, and that aside bothered you, then I’m sorry: it was addressed to philosophers and carefully written to use philosophical technical terminology correctly.
So you do have normative intent, but try to hide it to avoid criticism. Got it.
To be more accurate, I am not, in philosophical terms, a moral realist. I do not personally believe that, in The Grand Scheme of Things, there are any absolute, objective, universal rights or wrongs independent of the physical universe. I do not believe that there is an omnipotent and omniscient monotheist G.O.D. who knows everything we have done and has an opinion on what we should or should not do. Nor do I believe that, if such a being existed, human moral intuitions would be any kind of privileged guide to what Its opinions might be. We have a good scientific understanding of where human moral intuitions came from, and it’s not “because G.O.D. said so”: they evolved, and they are whatever evolution has so far been able to locate as adaptive for humans and cram into our genome. IMO the universe as a whole does not care whether or not all humans die — it will continue to exist regardless.
However, on this particular issue of all of us dying, we humans, or at the very least O(99.9%) of us, all agree that it would be a very bad thing — unsurprisingly so, since there are obvious evolutionary moral psychology reasons why O(99.9%) of us have evolved moral intuitions that agree on that. Given that fact, I’m being a pragmatist — I am giving advice. So I actually do mean “IF you think, as for obvious reasons O(99.9%) of people do, that everyone dying is very bad, THEN doing X is a very bad idea”. I’m avoiding the normative part not only to avoid upsetting the philosophers, but also because my personal viewpoint on ethics is based in what a philosopher would call Philosophical Realism, and specifically on Evolutionary Moral Psychology: that is, there are no absolute rights and wrongs, but there are some things that (for evolutionary reasons) almost all humans (past, present, and future) can agree are right or wrong. However, I’m aware that many of my readers may not share my philosophical viewpoint, and I’m not asking them to: I’m carefully confining myself to practical advice based on factual predictions from scientific hypotheses. So yes, it’s a rhetorical hoop, but it also actually reflects my personal philosophical position — which is that of a scientist and engineer who regards Moral Realism as thinly disguised religion (and is carefully avoiding it with a 10′ pole).
Fundamentally, I’m trying to base alignment on practical arguments that O(99.9%) of us can agree on.