16 types of useful predictions

How often do you make predictions (either about future events, or about information that you don’t yet have)? If you’re a regular Less Wrong reader you’re probably familiar with the idea that you should make your beliefs pay rent by saying, “Here’s what I expect to see if my belief is correct, and here’s how confident I am,” and that you should then update your beliefs accordingly, depending on how your predictions turn out.

And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.

I don’t think this is just laziness. I think it’s simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.

At this point I should clarify that there are two main goals predictions can help with:

  1. Improved Calibration (e.g., realizing that I’m only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).

  2. Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time).

If your goal is just to become better calibrated in general, it doesn’t much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like “How tall is Mount Everest?” or “Will Don Draper die before the end of Mad Men?” See, for example, the Credence Game, PredictionBook, and this recent post. And calibration training really does work.

But even though making predictions about trivia will improve my general calibration skill, it won’t help me improve my models of the world. That is, it won’t help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that’s not very helpful to me.

So I think the difficulty in prediction-making is this: The set {questions whose answers you can easily look up, or otherwise obtain} is a small subset of all possible questions. And the set {questions whose answers I care about} is also a small subset of all possible questions. And the intersection between those two subsets is much smaller still, and not easily identifiable. As a result, prediction-making tends to seem too effortful, or not fruitful enough to justify the effort it requires.

But the intersection’s not empty. It just requires some strategic thought to determine which answerable questions have some bearing on issues you care about, or—approaching the problem from the opposite direction—how to take issues you care about and turn them into answerable questions.

I’ve been making a concerted effort to hunt for members of that intersection. Here are 16 types of predictions that I personally use to improve my judgment on issues I care about. (I’m sure there are plenty more, though, and hope you’ll share your own as well.)

  1. Predict how long a task will take you. This one’s a given, considering how common and impactful the planning fallacy is.
    Examples: “How long will it take to write this blog post?” “How long until our company’s profitable?”

  2. Predict how you’ll feel in an upcoming situation. Affective forecasting – our ability to predict how we’ll feel – has some well-known flaws.
    Examples: “How much will I enjoy this party?” “Will I feel better if I leave the house?” “If I don’t get this job, will I still feel bad about it two weeks later?”

  3. Predict your performance on a task or goal.
    One thing this helps me notice is when I’ve been trying the same kind of approach repeatedly without success. Even just the act of making the prediction can spark the realization that I need a better game plan.
    Examples: “Will I stick to my workout plan for at least a month?” “How well will this event I’m organizing go?” “How much work will I get done today?” “Can I successfully convince Bob of my opinion on this issue?”

  4. Predict how your audience will react to a particular social media post (on Facebook, Twitter, Tumblr, a blog, etc.).
    This is a good way to hone your judgment about how to create successful content, as well as your understanding of your friends’ (or readers’) personalities and worldviews.
    Examples: “Will this video get an unusually high number of likes?” “Will linking to this article spark a fight in the comments?”

  5. When you try a new activity or technique, predict how much value you’ll get out of it.
    I’ve noticed I tend to be inaccurate in both directions in this domain. There are certain kinds of life hacks I feel sure are going to solve all my problems (and they rarely do). Conversely, I am overly skeptical of activities that are outside my comfort zone, and often end up pleasantly surprised once I try them.
    Examples: “How much will Pomodoros boost my productivity?” “How much will I enjoy swing dancing?”

  6. When you make a purchase, predict how much value you’ll get out of it.
    Research on money and happiness shows two main things: (1) as a general rule, money doesn’t buy happiness, but (2) there are a bunch of exceptions to this rule. So there seems to be lots of potential to improve your prediction skill here, and spend your money more effectively than the average person.
    Examples: “How much will I wear these new shoes?” “How often will I use my club membership?” “In two months, will I think it was worth it to have repainted the kitchen?” “In two months, will I feel that I’m still getting pleasure from my new car?”

  7. Predict how someone will answer a question about themselves.
    I often notice assumptions I’ve been making about other people, and I like to check those assumptions when I can. Ideally I get interesting feedback both about the object-level question, and about my overall model of the person.
    Examples: “Does it bother you when our meetings run over the scheduled time?” “Did you consider yourself popular in high school?” “Do you think it’s okay to lie in order to protect someone’s feelings?”

  8. Predict how much progress you can make on a problem in five minutes.
    I often have the impression that a problem is intractable, or that I’ve already worked on it and have considered all of the obvious solutions. But then when I decide (or when someone prompts me) to sit down and brainstorm for five minutes, I am surprised to come away with a promising new approach to the problem.
    Example: “I feel like I’ve tried everything to fix my sleep, and nothing works. If I sit down now and spend five minutes thinking, will I be able to generate at least one new idea that’s promising enough to try?”

  9. Predict whether the data in your memory supports your impression.
    Memory is awfully fallible, and I have been surprised at how often I am unable to generate specific examples to support a confident impression of mine (or how often the specific examples I generate actually contradict my impression).
    Examples: “I have the impression that people who leave academia tend to be glad they did. If I try to list a bunch of the people I know who left academia, and how happy they are, what will the approximate ratio of happy/unhappy people be?”
    “It feels like Bob never takes my advice. If I sit down and try to think of examples of Bob taking my advice, how many will I be able to come up with?”

  10. Pick one expert source and predict how they will answer a question.
    This is a quick shortcut to testing a claim or settling a dispute.
    Examples: “Will Cochrane Medical support the claim that Vitamin D promotes hair growth?” “Will Bob, who has run several companies like ours, agree that our starting salary is too low?”

  11. When you meet someone new, take note of your first impressions of him. Predict how likely it is that, once you’ve gotten to know him better, you will consider your first impressions of him to have been accurate.
    A variant of this one, suggested to me by CFAR alum Lauren Lee, is to make predictions about someone before you meet him, based on what you know about him ahead of time.
    Examples: “All I know about this guy I’m about to meet is that he’s a banker; I’m moderately confident that he’ll seem cocky.” “Based on the one conversation I’ve had with Lisa, she seems really insightful – I predict that I’ll still have that impression of her once I know her better.”

  12. Predict how your Facebook friends will respond to a poll.
    Example: I often post social etiquette questions on Facebook. For example, I recently did a poll asking, “If a conversation is going awkwardly, does it make things better or worse for the other person to comment on the awkwardness?” I confidently predicted most people would say “worse,” and I was wrong.

  13. Predict how well you understand someone’s position by trying to paraphrase it back to him.
    The illusion of transparency is pernicious.
    Examples: “You said you think running a workshop next month is a bad idea; I’m guessing you think that’s because we don’t have enough time to advertise, is that correct?”
    “I know you think eating meat is morally unproblematic; is that because you think that animals don’t suffer?”

  14. When you have a disagreement with someone, predict how likely it is that a neutral third party will side with you after the issue is explained to her.
    For best results, don’t reveal which of you is on which side when you’re explaining the issue to your arbiter.
    Example: “So, at work today, Bob and I disagreed about whether it’s appropriate for interns to attend hiring meetings; what do you think?”

  15. Predict whether a surprising piece of news will turn out to be true.
    This is a good way to hone your bullshit detector and improve your overall “common sense” models of the world.
    Examples: “This headline says some scientists uploaded a worm’s brain—after I read the article, will the headline seem like an accurate representation of what really happened?”
    “This viral video purports to show strangers being prompted to kiss; will it turn out to have been staged?”

  16. Predict whether a quick online search will turn up any credible sources supporting a particular claim.
    Example: “Bob says that watches always stop working shortly after he puts them on – if I spend a few minutes searching online, will I be able to find any credible sources saying that this is a real phenomenon?”

I have one additional, general thought on how to get the most out of predictions:

Rationalists tend to focus on the importance of objective metrics. And as you may have noticed, a lot of the examples I listed above fail that criterion. For example, “Predict whether a fight will break out in the comments? Well, there’s no objective way to say whether something officially counts as a ‘fight’ or not…” Or, “Predict whether I’ll be able to find credible sources supporting X? Well, who’s to say what a credible source is, and what counts as ‘supporting’ X?”

And indeed, objective metrics are preferable, all else equal. But all else isn’t equal. Subjective metrics are much easier to generate, and they’re far from useless. Most of the time it will be clear enough, once you see the results, whether your prediction basically came true or not—even if you haven’t pinned down a precise, objectively measurable success criterion ahead of time. Usually the result will be a common sense “yes,” or a common sense “no.” And sometimes it’ll be “um...sort of?”, but that can be an interestingly surprising result too, if you had strongly predicted the results would point clearly one way or the other.

Along similar lines, I usually don’t assign numerical probabilities to my predictions. I just take note of where my confidence falls on a qualitative “very confident,” “pretty confident,” “weakly confident” scale (which might correspond to something like 90%/75%/60% probabilities, if I had to put numbers on it).
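If you do keep a written log, even a qualitative scale like this can be checked against reality later. Here is a minimal sketch of what that might look like; the level names and the rough 90%/75%/60% mapping come from the scale above, while everything else (the function name, the sample log entries) is hypothetical illustration:

```python
# Sketch: log predictions with a qualitative confidence level, then compare
# each level's observed hit rate to the rough probability it's meant to track.
from collections import defaultdict

# Rough probabilities suggested for each qualitative level (from the text).
LEVELS = {"very confident": 0.90, "pretty confident": 0.75, "weakly confident": 0.60}

# Hypothetical log entries: (confidence level, did the prediction come true?)
log = [
    ("very confident", True),
    ("very confident", True),
    ("very confident", False),
    ("pretty confident", True),
    ("pretty confident", False),
    ("weakly confident", True),
]

def calibration(entries):
    """Return the observed hit rate for each confidence level in the log."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for level, came_true in entries:
        totals[level] += 1
        hits[level] += came_true  # True counts as 1, False as 0
    return {level: hits[level] / totals[level] for level in totals}

for level, rate in calibration(log).items():
    print(f"{level}: intended ~{LEVELS[level]:.0%}, observed {rate:.0%}")
```

With enough entries, a persistent gap between a level’s intended probability and its observed hit rate is exactly the kind of calibration feedback discussed earlier.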

There’s probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions. But in most cases I don’t think that additional value is worth the cost you incur from turning predictions into an onerous task. In other words, don’t let the perfect be the enemy of the good. Or in other other words: the biggest problem with your predictions right now is that they don’t exist.