Six Plausible Meta-Ethical Alternatives

In this post, I list six metaethical possibilities that I think are plausible, along with some arguments or plausible stories about how/why they might be true, where that’s not obvious. A lot of people seem fairly certain in their metaethical views, but I’m not, and I want to convey my uncertainty as well as some of the reasons for it.

  1. Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.

  2. Facts about what everyone should value exist, and most intelligent beings have a part of their mind that can discover moral facts and find them motivating, but those parts don’t have full control over their actions. These beings eventually build or become rational agents with values that represent compromises between different parts of their minds, so most intelligent beings end up having shared moral values along with idiosyncratic values.

  3. There aren’t facts about what everyone should value, but there are facts about how to translate non-preferences (e.g., emotions, drives, fuzzy moral intuitions, circular preferences, non-consequentialist values, etc.) into preferences. These facts may include, for example, facts about the right way to deal with ontological crises. The existence of such facts seems plausible because if there were facts about what is rational (which seems likely) but no facts about how to become rational, that would seem like a strange state of affairs.

  4. None of the above facts exist, so the only way to become or build a rational agent is to just think about what preferences you want your future self or your agent to hold, until you make up your mind in some way that depends on your psychology. But at least this process of reflection is convergent at the individual level, so each person can reasonably call the preferences that they endorse after reaching reflective equilibrium their morality or real values.

  5. None of the above facts exist, and reflecting on what one wants turns out to be a divergent process (e.g., it’s highly sensitive to initial conditions, like whether or not you drank a cup of coffee before you started, or to the order in which you happen to encounter philosophical arguments). There are still facts about rationality, so at least agents that are already rational can call their utility functions (or the equivalent of utility functions in whatever decision theory ends up being the right one) their real values.

  6. There aren’t any normative facts at all, including facts about what is rational. For example, it turns out there is no one decision theory that does better than every other decision theory in every situation, and there is no obvious or widely agreed-upon way to determine which one “wins” overall.

(Note that for the purposes of this post, I’m concentrating on morality in the axiological sense (what one should value) rather than in the sense of cooperation and compromise. So alternative 1, for example, is not intended to include the possibility that most intelligent beings end up merging their preferences through some kind of grand acausal bargain.)

It may be useful to classify these possibilities using labels from academic philosophy. Here’s my attempt: 1. realist + internalist; 2. realist + externalist; 3. relativist; 4. subjectivist; 5. moral anti-realist; 6. normative anti-realist. (A lot of debates in metaethics concern the meaning of ordinary moral language, for example whether moral statements refer to facts or merely express attitudes. I mostly ignore such debates in the above list, because it’s not clear what implications they have for the questions that I care about.)

One question LWers may have is, where does Eliezer’s metaethics fall into this schema? Eliezer says that there are moral facts about what values every intelligence in the multiverse should have, but only humans are likely to discover these facts and be motivated by them. To me, Eliezer’s use of language is counterintuitive, and since it seems plausible that there are facts about what everyone should value (or about how each person should translate their non-preferences into preferences) that most intelligent beings can discover and be at least somewhat motivated by, I’m reserving the phrase “moral facts” for those. In my language, I think 3 or maybe 4 is probably closest to Eliezer’s position.