To be more accurate, I am not, in philosophical terms, a moral realist. I do not personally believe that, in The Grand Scheme of Things, there are any absolute, objective, universal rights or wrongs independent of the physical universe. I do not believe that there is an omnipotent and omniscient monotheistic G.O.D. who knows everything we have done and has an opinion on what we should or should not do. Nor do I believe that, if such a being existed, human moral intuitions would be any kind of privileged guide to what Its opinions might be. We have a good scientific understanding of where human moral intuitions came from, and it’s not “because G.O.D. said so”: they evolved, and they’re whatever is adaptive for humans that evolution has so far been able to locate and cram into our genome. IMO the universe, as a whole, does not care whether all humans die or not — it will continue to exist regardless.
However, on this particular issue of all of us dying, we humans, or at the very least O(99.9%) of us, all agree that it would be a very bad thing — unsurprisingly so, since there are obvious evolutionary moral psychology reasons why O(99.9%) of us have evolved moral intuitions that agree on that. Given that fact, I’m being a pragmatist — I am giving advice. So I actually do mean “IF you think, as for obvious reasons O(99.9%) of people do, that everyone dying is very bad, THEN doing X is a very bad idea”. I’m avoiding the normative part not only to avoid upsetting the philosophers, but also because my personal viewpoint on ethics is based in what a philosopher would call Philosophical Realism, and specifically on Evolutionary Moral Psychology. That is: there are no absolute rights and wrongs, but there are some things that (for evolutionary reasons) almost all humans (past, present, and future) can agree are right or wrong. However, I’m aware that many of my readers may not share my philosophical viewpoint, and I’m not asking them to: I’m carefully confining myself to practical advice based on factual predictions from scientific hypotheses. So yes, it’s a rhetorical hoop, but it also genuinely reflects my personal philosophical position — which is that of a scientist and engineer who regards Moral Realism as thinly disguised religion (and is carefully avoiding it with a 10′ pole).
Fundamentally, I’m trying to base alignment on practical arguments that O(99.9%) of us can agree on.