More Dakka

Link post

Epistemic Status: Hopefully enough Dakka

Eliezer Yudkowsky's book Inadequate Equilibria is excellent. I recommend reading it, if you haven't done so. Three recent reviews are Scott Aaronson's, Robin Hanson's (which inspired You Have the Right to Think and a great discussion in its comments) and Scott Alexander's. Alexander's review was an excellent summary of key points, but like many he found the last part of the book, ascribing much modesty to status and prescribing how to learn when to trust yourself, less convincing.

My posts, including Zeroing Out and Leaders of Men, have been attempts to extend that last part, offering additional tools. Daniel Speyer offers good concrete suggestions as well. My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.

Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money, and the failure of anyone to try treating seasonal affective disorder with sufficiently intense artificial light.

In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage. Studies observed that sick patients whose biomarkers reached healthy levels experienced full remission. The treatment was fully safe. No one tried increasing the dose enough to reduce the biomarker to healthy levels. If they did, they never reported their results.
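The logic the studies imply is a one-line extrapolation. Here is a minimal sketch of it; all numbers, the function name, and the units are invented for illustration, not taken from the actual case:

```python
def dose_for_target(current_marker, healthy_marker, marker_drop_per_unit, current_dose):
    """Under a linear dose-response model, estimate the total dose needed
    to bring the biomarker down to the healthy level."""
    gap = current_marker - healthy_marker          # how far the marker must fall
    extra_units = gap / marker_drop_per_unit       # linear model: constant drop per unit
    return current_dose + extra_units

# Hypothetical patient: marker at 80, healthy level is 40,
# each unit of drug drops the marker by 5, current dose is 10 units.
print(dose_for_target(80, 40, 5, 10))  # -> 18.0 total units
```

That is the whole calculation no one ran to completion: if the relationship really is linear and the drug really is safe, the required dose is arithmetic, not mystery.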

In his excellent post Sunset at Noon, Raymond points out Gratitude Journals:

“Rationalists obviously don’t *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?”

“Huh. Do *you* keep a gratitude journal?”

“Lol. No, obviously.”

– Some Guy at the Effective Altruism Summit of 2012

Gratitude journals are awkward interventions, as Raymond found, and we need to find details that make the practice our own, or it won’t work. But the active ingredient, gratitude, obviously works and is freely available. Remember the last time someone expressed gratitude to you and it made your day worse? Remember the last time you expressed gratitude to someone else, or felt gratitude about someone or something, and it made your day worse?

In my experience it happens approximately zero times. Gratitude just works, unmistakably. I once sent a single gratitude letter. It increased my baseline well-being. Then I didn’t write more. I do try to remember to feel gratitude, and express it. That helps. But I can’t think of a good reason not to do that more, or for anyone I know to not do it more.

In all four cases, our civilization has (it seems) correctly found the solution. We’ve tested it. It works. The more you do, the better it works. There’s probably a level where side effects would happen, but there’s no sign of them yet.

We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka. And then we decide we’re out of bullets. We stop.

If it helps but doesn’t solve your problem, perhaps you’re not using enough.

I

We don’t use enough to find out how much enough would be, or what bad things it might cause. More Dakka might backfire. It also might solve your problem.

The Bank of Japan didn’t have enough money. They printed some. It helped a little. They could have kept printing more money until printing more money either solved their problem or started to cause other problems. They didn’t.

Yes, some countries printed too much money and very bad things happened, but no country printed too much money because it wanted more inflation. That’s not a thing.

Doctors saw patients suffer for lack of light. They gave them light. It helped a little. They could have tried more light until it solved their problem or started causing other problems. They didn’t.

Yes, people suffer from too much sunlight, or from spending too long in tanning beds, but those are skin conditions (as far as I know) and we don’t have examples of too much of this kind of artificial light, other than it being unpleasant.

Doctors saw patients suffer from a disease in direct proportion to a biomarker. They gave them a drug. It helped a little, with few if any side effects. They could have increased the dose until it either solved the problem or started causing other problems. They didn’t.

Yes, drug overdoses cause bad side effects, but we could find no record of this drug causing any bad side effects at any reasonable dosage, or any theory why it would.

People express gratitude. We are told it improves subjective well-being in studies. Our subjective well-being improves a little. We could express more gratitude, with no real downsides. Almost all of us don’t.

On that note, thanks for reading!

A decision was universally made that enough, despite obviously not being enough, was enough. ‘More’ was never tried.

This is important on two levels.

II

The first level is practical. If you think a problem could be solved or a situation improved by More Dakka, there’s a good chance you’re right.

Sometimes a little more is a little better. Sometimes a lot more is a lot better. Sometimes each attempt is unlikely to work, but improves your chances.
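The "each attempt is unlikely, but improves your chances" case is worth making concrete. Assuming independent attempts (a simplification; real attempts are often correlated), the arithmetic is standard:

```python
def chance_of_success(p_single, attempts):
    """Probability that at least one of `attempts` independent tries succeeds,
    when each try succeeds with probability p_single."""
    return 1 - (1 - p_single) ** attempts

# Each attempt has only a 10% chance, but ten attempts get you to ~65%.
print(round(chance_of_success(0.10, 1), 3))   # 0.1
print(round(chance_of_success(0.10, 10), 3))  # 0.651
```

A long-shot tried ten times stops being a long shot, which is exactly the More Dakka move.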

If something is a good idea, you need a reason to not try doing more of it.

No, seriously. You need a reason.

The second level is, ‘do more of what is already working and see if it works more’ is as basic as it gets. If we can’t reliably try that, we can’t reliably try anything. How could you ever say ‘If that worked, someone would have tried it’?

You can’t. If no one says they tried it, probably no one tried it. There might be good reasons not to try it. There also might not. There’d still be a good chance no one tried it.

There’s also a chance someone did try it and isn’t reporting the results anywhere you can find. That doesn’t mean it didn’t work, let alone that it can never work.

III

Why would this be an overlooked strategy?

It sounds crazy that it could be overlooked. It’s overlooked.

Eliezer gives three tools to recognize places systems fail, using highly useful economic arguments I recommend using frequently:

1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;

2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and

3. Systems that are broken in multiple places, so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

In these cases, I do not think such explanations are enough.

If the Bank of Japan didn’t print more money, that implies the Bank of Japan wasn’t sufficiently incentivized to hit their inflation target. They must have been maximizing primarily for prestige instead. I can buy that, but why didn’t they think the best way to do that was to hit the inflation target? Alexander’s suggested payoff matrix, where printing more money makes failure much worse, isn’t good enough. It can’t be central on its own. The answer was too clear, the payoff worth the odds, and they had the information, as I detail later.

Eliezer gives the model of researchers looking for citations plus grant givers looking for prestige as the explanation for why his SAD treatment wasn’t tested. I don’t buy it. The story doesn’t make sense.

If more light worked, you’d get a lot of citations, for not much cost or effort. If you’re writing a grant, this costs little money and could help many people. It’s less prestigious to up the dosage than to be original, but it’s still a big prestige win.

If you say they want to associate with high status research folk, then they won’t care about the grant contents, so it reduces to a one-factor market, where again researchers should try this.

Alexander noticed the same confusion on that one.

In the drug dosage case, Eliezer’s tools do better. No doctor takes the risk of being sued if something goes wrong, no company makes money by funding the study, it’s too expensive for a grant, and trying it on your own feels too risky. Maybe. It still does not feel like enough. The paths forward are too easy, too cheap, the payoff too large and obvious. Even one wealthy patient could break through, and it would be worth it. Yet even our patient, as far as we know, didn’t even try it and certainly didn’t report back.

The gratitude case doesn’t fit the three modes at all.

IV

Here is my model. I hope it illuminates when to try such things yourself.

Two key insights here are The Thing and the Symbolic Representation of The Thing, and Scott Alexander’s Concept-Shaped Holes Can Be Impossible To Notice. Both are worth reading, in that order.

I’ll summarize the relevant points.

The standard amount of something, by definition, counts as the symbolic representation of the thing. The Bank of Japan ‘printed money.’ The standard SAD treatment ‘exposes people to light.’ Our patient’s doctors prescribed ‘standard drug.’ Today, various people ‘left with plenty of time,’ ‘came up with a plan,’ ‘were part of a community,’ ‘ate pizza,’ ‘listened to the other person,’ ‘focused on their breath,’ ‘bought enough nipple tops for the baby’s bottles,’ ‘did their job’ and ‘added salt and pepper.’

They got results. A little. Better than nothing. But much less than was desired.

The Bank of Australia printed enough money. Eliezer Yudkowsky exposed his wife to enough light. Our patient was told to take enough of the drug to actually work. Meanwhile, other people actually left with plenty of time, actually came up with a workable plan, actually were part of a community, ate real pizza, actually listened to another person, actually focused on their breath, bought enough nipple tops for the baby’s bottles, actually did their job, and added copious amounts of sea salt and freshly ground pepper.

Some of these are about quality rather than quantity. You could also think of that as a bigger quantity of effort, or willingness to pay more money or devote more time. Still, it’s worth noting that an important variant of ‘use more,’ ‘do more’ or ‘do more often’ is ‘do it better.’

Being part of that second group is harder than it looks:

You need to realize the thing might exist at all.

You need to realize the symbolic representation of the thing isn’t the thing.

You need to ignore the idea that you’ve done your job.

You need to actually care about solving the problem.

You need to think about the problem a little.

You need to ignore the idea that no one could blame you for not trying.

You need to not care that what you’re about to do is unusual or weird or socially awkward.

You need to not care that what you’re about to do might be high status.

You need to not care that what you’re about to do might be low status.

You need to not care that what you’re about to do might not work.

You need to not be concerned that what you’re about to do might work.

You need to not care that what you’re about to do might backfire.

You need to not care that what you’re about to do is immodest.

You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.

You need to not care about the implicit accusation you’re making against everyone who didn’t try it.

You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or weird. Or unfair. Or morally wrong. Or something.

Why is this list getting so long? What is that answer of ‘don’t do it’ doing on the bottom of the page?

V

Long list is long. A lot of items are related. Some will be obvious, some won’t be. Let’s go through the list.

You need to realize the thing might exist at all.

One cannot do better unless one realizes it might be possible to do better. Scott gives several examples of situations in which he doubted the existence of the thing.

You need to realize the symbolic representation of the thing isn’t the thing.

Scott gives several examples where he thought he knew what the thing was, only to find out he had no idea; what he thought was the thing was actually a symbolic representation, a pale shadow. If you think having a few friends is what a community is, it won’t occur to you to seek out a real one.

You need to ignore the idea that you’ve done your job.

There was a box marked ‘thing’. You’ve checked that box off by getting the symbolic version of the thing. It’s easy to then think you’ve done the job and are somehow done. Even if you’re doing this for yourself or someone you care about, there’s this urge to check the box, think ‘job done’ and ‘quest complete’, and not think about details. You need to realize you’re not doing the job so you can say you’ve done the job, or so you can tell yourself you’ve done the job. Even if you didn’t get what you wanted, your real job was just to earn the right to tell yourself a story about how you tried to get it, right?

You need to actually care about solving the problem.

You’re doing the job so the job gets done. That’s why doing the symbolic version doesn’t mean you’re done. Often people don’t care much about solving the problem. They care whether they’re responsible. They care whether socially appropriate steps have been taken.

You need to ignore the idea that no one could blame you for not trying.

Alexander notes how important this one is, and it’s really big.

People often care primarily about doing that which no one could blame them for. Being blamed or scapegoated is really bad. Even self-blame! We instinctively fear someone will discover and expose us, and make ourselves feel bad. We cover up the evidence and create justifications. Doing the normal thing means no one could blame you. If you don’t grasp that this is a thing, read as much of Atlas Shrugged as needed until you grasp it. It should only take a chapter or two, but this idea alone is worth a thousand-page book in order to get, if that’s what it takes. I’m not kidding.

Blame does happen. The real incentive here is big. The incentive people think they have to do this, even when the chance of being blamed is minimal, is much, much bigger.

You need to think about the problem a little.

People don’t like thinking.

You need to not care that what you’re about to do is unusual or weird or socially awkward.

There’s a primal fear of doing anything unusual or weird. More would be unusual and weird. It might be slightly socially awkward. You’d never know until it actually was awkward. That would be just awful. Can’t have that. No one is watching or cares, but some day someone might find you out and expose you as no good. We go around being normal, only guessing which slightly weird things would get us in trouble, or would require us to get someone else in trouble! So we try to do none of them. That’s what happens when we’re not operating on object-level causal models full of gears about what will work.

You need to not care that what you’re about to do might be high status.

Doing, or trying to do, something high status is to claim high status. Claiming status you’re not entitled to is a good way to get into a lot of trouble. Claiming to usefully think, or to know something, is automatically high status. Are you sure you have that right?

You need to not care that what you’re about to do might be low status.

Your status would go down. That’s even worse. If it’s high status you lose, if it’s low status you also lose, and you don’t even know which one it is, since no one does it! It might even be both. Better to leave the whole thing alone.

You need to not care that what you’re about to do might not work.

Failing is just awful. Even at things that are supposed to mostly fail. Even when getting ludicrous odds. Only explicitly permitted narrow exceptions are allowed, and they shrink each year. Otherwise we must, must succeed, or nothing we do will ever work and everyone will know that. I founded a company once*. It didn’t work. Now everyone knows rationalists can’t found companies. Shouldn’t have tried.

* – Well, three times.

You need to not be concerned that what you’re about to do might work.

Even worse, it might work. Then what? No idea. Does not compute. You’d have to keep doing the weird thing, or advocate for the weird thing. How weird would that be? What about the people you’d prove wrong? What would you even say?

You need to not care that what you’re about to do might backfire.

It might not only not work, it might have real consequences. That’s a thing. Can’t think of why that might happen. Every brainstormed risk seems highly improbable and not that big a deal. But why take that risk?

You need to not care that what you’re about to do is immodest.

By modesty, anything you think of, that’s worth thinking, has been thought of. Anything worth trying has been tried, anything worth doing done. Ignore that there’s a first time for everything. Who are you to claim there’s something worth trying? Who are you to claim you know better than everyone else? Did you not notice all the other people? Are you really high status enough to claim you know better than all of them? Let’s see that hero license of yours, buster. Object-level claims are status claims!

You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.

The world won’t let you get away with that. It will make this blow up in your face. And laugh. At you. People know this. They’ll instinctively join the conspiracy making it happen, coordinating seamlessly. Their alternative is thinking for themselves, or letting other people think for themselves, rather than playing imitation games. Unthinkable. Let’s scapegoat someone and reinforce norms.

You need to not care about the implicit accusation you’re making against everyone who didn’t try it.

You’re not only calling them wrong. You’re saying the answer was in front of their faces the whole time. They had an obvious solution and didn’t take it. You’re telling them they didn’t have a good reason for that. They’re gonna be pissed.

You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or unfair. Or low status. Or lack prestige. Or be morally wrong. Or something. There’s gotta be something!

The answer is right there at the bottom of the page. This isn’t done, so don’t do it. Find a reason. If there isn’t a good one, go with what you’ve got. Flail around as needed.

That’s what the Bank of Japan was actually afraid of. Nothing. A vague feeling they were supposed to be afraid of something, so they kept brainstorming until something sounded plausible.

Printing money might mean printing too much! The opposite is true. Not printing money now means having to print even more later, as the economy suffers.

Printing money would destroy their credibility! The opposite is true. Not printing money destroyed their credibility.

People don’t like it when we print too much money! The opposite is true. Everyone was yelling at them to print more money.

The markets don’t like it when we print too much money! The opposite is true. We have real-time data. The Nikkei goes up on talk of printing money, down on talk of not printing money, and goes wild on actual unexpected money printing. It’s almost as if the market thinks printing money is awesome and has a rational expectations model. The bond market? The rising interest rates? Not a peep.

Printing money wouldn’t be prestigious! It would hurt bank independence! The opposite is true. Not printing money forced Prime Minister Shinzo Abe to threaten them into printing more money. They were seen as failures. Everyone respects the Bank of Australia because it did print more money.

This same vague fear, combined with trivial inconveniences, is what stops the other solutions, too.

Not only are these trivial fears that shouldn’t stop us, they’re not even things that would happen. When you try the thing, almost nothing bad of this sort ever happens at all.

At all. These are low risks of shockingly mild social disapproval. Ignore them.

These worries aren’t real. They’re in your head.

They’re in my head, too. The voice of Pat Modesto is in your head. It is insidious. It says whatever it has to. It lies. It cheats. It is the opposite of useful.

If someone else has these concerns, the concerns are in their head, whispering in their ear. Don’t hold it against them. Help them.

Some such worries are real. They can point to real costs and benefits. Check! But they’re mostly trying to halt thinking about the object level, to keep you from being the nail that sticks up and gets hammered down. When someone else raises them, mostly they’re the hammer. The fears are mirages we’ve been trained and built to see.

You don’t have that problem, you say? Great! Other people do have that problem. Sympathize and try to help. Otherwise, keep doing what you’re doing, only more so. And congratulations.

VI

My practical suggestion is that if you do, buy or use a thing, and it seems like that was a reasonable thing to do, you should ask yourself:

Can I do more of this? Can I do this better? Put in more effort, more time and/or more money? Might that do the job better? Could that be a good idea? Could that be worth it? How much more? How much better?

Make a quick object-level model of what would happen. See what it looks like. Discount your chances a little if no one does it, but only a little. Maybe half, tops. Less if those who succeeded wouldn’t say anything. In some cases, the thing you’re about to try is actually done all the time, but no one talks about it. If you suspect that, definitely try it.
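That quick model can be as crude as a single expected-value line. A minimal sketch, with all numbers invented and the half-discount taken straight from the paragraph above:

```python
def adjusted_value(p_success, benefit, cost, nobody_does_it=True, discount=0.5):
    """Crude expected-value check for a More Dakka attempt.
    Discount the success chance (by at most half, per the text)
    if no one else seems to do it, then weigh benefit against cost."""
    p = p_success * discount if nobody_does_it else p_success
    return p * benefit - cost

# Invented numbers: a 50% shot at something worth 1000, costing 50 to try,
# that nobody around you seems to attempt.
print(adjusted_value(0.5, 1000, 50))  # -> 200.0 (positive: worth trying)
```

The point of writing it down is not precision; it is that even after the harshest reasonable discount, many of these bets stay obviously positive.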

You’ll hear the voice. This isn’t done. There must be a reason. When you hear that, get excited. You might be on to something.

If you’re getting odds to try, try. Use the try harder, Luke! You can do this. Pull out More Dakka.

It’s also worth looking back on things you’ve done in the past and asking the same question.

I’ve linked several times to the Challenging the Difficult sequence, but none of this need be difficult. Often all that’s needed, but never comes, is an ordinary effort.

The bigger-picture point is also important. These are the most obvious things. Those bad reasons stop actual everyone from trying things that cost little, on any level, with little risk, on any level, and that carry huge benefits. For other things, they stop almost everyone. When someone does try them and reports back that it worked, they’re ignored.

Something possibly being slightly socially awkward, or causing a likely nominal failure, acts as a veto. Rationalizations for this are created as needed.

Adding that to the economic model of inadequate equilibria, and the fact that almost no one got as far as considering this idea at all, is it any wonder that you can beat ‘consensus’ by thinking of and trying object-level things?

Why wouldn’t that work?