Setting Up Metaethics

Followup to: Is Morality Given?, Is Morality Preference?, Moral Complexities, Could Anything Be Right?, The Bedrock of Fairness, …

Intuitions about morality seem to split up into two broad camps: morality-as-given and morality-as-preference.

Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs. This view’s great advantage is that it seems more normal at the level of everyday moral conversations: it is the intuition underlying our everyday notions of “moral error”, “moral progress”, “moral argument”, or “just because you want to murder someone doesn’t make it right”.

Others choose to describe morality as a preference—as a desire in some particular person; nowhere else is it written. This view’s great advantage is that it has an easier time living with reductionism—fitting the notion of “morality” into a universe of mere physics. It has an easier time at the meta level, answering questions like “What is morality?” and “Where does morality come from?”

Both intuitions must contend with seemingly impossible questions. For example, Moore’s Open Question: Even if you come up with some simple answer that fits on a T-shirt, like “Happiness is the sum total of goodness!”, you would need to argue the identity. It isn’t instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with. What was that second concept, then, originally?

Or if “Morality is mere preference!” then why care about human preferences? How is it possible to establish any “ought” at all, in a universe seemingly of mere “is”?

So what we should want, ideally, is a metaethic that:

  1. Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;

  2. Fits naturally into a non-mysterious universe, postulating no exception to reductionism;

  3. Does not oversimplify humanity’s complicated moral arguments and many terminal values;

  4. Answers all the impossible questions.

I’ll present that view tomorrow.

Today’s post is devoted to setting up the question.

Consider “free will”, already dealt with in these posts. On one level of organization, we have mere physics, particles that make no choices. On another level of organization, we have human minds that extrapolate possible futures and choose between them. How can we control anything, even our own choices, when the universe is deterministic?

To dissolve the puzzle of free will, you have to simultaneously imagine two levels of organization while keeping them conceptually distinct. To get it on a gut level, you have to see the level transition—the way in which free will is how the human decision algorithm feels from inside. (Being told flatly “one level emerges from the other” just relates them by a magical transition rule, “emergence”.)

For free will, the key is to understand how your brain computes whether you “could” do something—the algorithm that labels reachable states. Once you understand this label, it does not appear particularly meaningless—“could” makes sense—and the label does not conflict with physics following a deterministic course. If you can see that, you can see that there is no conflict between your feeling of freedom, and deterministic physics. Indeed, I am perfectly willing to say that the feeling of freedom is correct, when the feeling is interpreted correctly.

In the case of morality, once again there are two levels of organization, seemingly quite difficult to fit together:

On one level, there are just particles without a shred of should-ness built into them—just like an electron has no notion of what it “could” do—or just like a flipping coin is not uncertain of its own result.

On another level is the ordinary morality of everyday life: moral errors, moral progress, and things you ought to do whether you want to do them or not.

And in between, the level transition question: What is this should-ness stuff?

Award yourself a point if you thought, “But wait, that problem isn’t quite analogous to the one of free will. With free will it was just a question of factual investigation—look at human psychology, figure out how it does in fact generate the feeling of freedom. But here, it won’t be enough to figure out how the mind generates its feelings of should-ness. Even after we know, we’ll be left with a remaining question—is that how we should calculate should-ness? So it’s not just a matter of sheer factual reductionism, it’s a moral question.”

Award yourself two points if you thought, “...oh, wait, I recognize that pattern: It’s one of those strange loops through the meta-level we were talking about earlier.”

And if you’ve been reading along this whole time, you know the answer isn’t going to be, “Look at this fundamentally moral stuff!”

Nor even, “Sorry, morality is mere preference, and right-ness is just what serves you or your genes; all your moral intuitions otherwise are wrong, but I won’t explain where they come from.”

Of the art of answering impossible questions, I have already said much: Indeed, vast segments of my Overcoming Bias posts were created with that specific hidden agenda.

The sequence on anticipation fed into Mysterious Answers to Mysterious Questions, to prevent the Primary Catastrophic Failure of stopping on a poor answer.

The Fake Utility Functions sequence was directed at the problem of oversimplified moral answers particularly.

The sequence on words provided the first and basic illustration of the Mind Projection Fallacy, the understanding of which is one of the Great Keys.

The sequence on words also showed us how to play Rationalist’s Taboo, and Replace the Symbol with the Substance. What is “right”, if you can’t say “good” or “desirable” or “better” or “preferable” or “moral” or “should”? What happens if you try to carry out the operation of replacing the symbol with what it stands for?

And the sequence on quantum physics, among other purposes, was there to teach the fine art of not running away from Scary and Confusing Problems, even if others have failed to solve them, even if great minds failed to solve them for generations. Heroes screw up, time moves on, and each succeeding era gets an entirely new chance.

If you’re just joining us here (Belldandy help you) then you might want to think about reading all those posts before, oh, say, tomorrow.

If you’ve been reading this whole time, then you should think about trying to dissolve the question on your own, before tomorrow. It doesn’t require more than 96 insights beyond those already provided.

Next: The Meaning of Right.

Part of The Metaethics Sequence

Next post: “The Meaning of Right”

Previous post: “Changing Your Metaethics”