# Focus Your Uncertainty

Will bond yields go up, or down, or remain the same? If you’re a TV pundit and your job is to explain the outcome after the fact, then there’s no reason to worry. No matter *which* of the three possibilities comes true, you’ll be able to explain why the outcome perfectly fits your pet market theory. There’s no reason to think of these three possibilities as somehow *opposed* to one another, as *exclusive*, because you’ll get full marks for punditry no matter which outcome occurs.

But wait! Suppose you’re a *novice* TV pundit, and you aren’t experienced enough to make up plausible explanations on the spot. You need to prepare remarks in advance for tomorrow’s broadcast, and you have limited time to prepare. In this case, it would be helpful to know *which* outcome will actually occur—whether bond yields will go up, down, or remain the same—because then you would only need to prepare *one* set of excuses.

Alas, no one can possibly foresee the future. What are you to do? You certainly can’t use “probabilities.” We all know from school that “probabilities” are little numbers that appear next to a word problem, and there aren’t any little numbers here. Worse, you *feel* uncertain. You don’t remember *feeling* uncertain while you were manipulating the little numbers in word problems. *College classes teaching math* are nice clean places, so math can’t apply to life situations that aren’t nice and clean. You wouldn’t want to inappropriately transfer thinking skills from one context to another. Clearly, this is not a matter for “probabilities.”

Nonetheless, you only have 100 minutes to prepare your excuses. You can’t spend the entire 100 minutes on “up,” and also spend all 100 minutes on “down,” and also spend all 100 minutes on “same.” You’ve got to prioritize somehow.

If you needed to justify your time expenditure to a review committee, you would have to spend equal time on each possibility. Since there are no little numbers written down, you’d have no documentation to justify spending different amounts of time. You can hear the reviewers now: *And why, Mr. Finkledinger, did you spend exactly 42 minutes on excuse #3? Why not 41 minutes, or 43? Admit it—you’re not being objective! You’re playing subjective favorites!*

But, you realize with a small flash of relief, there’s no review committee to scold you. This is good, because there’s a major Federal Reserve announcement tomorrow, and it seems unlikely that bond yields will remain the same. You don’t want to spend 33 precious minutes on an excuse you don’t anticipate needing.

Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can’t help you here—all three events are plausible. Fittability to your pet market theory doesn’t tell you how to divide your time. There’s an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited.

And yet . . . even in your uncertain state of mind, it seems that you *anticipate* the three events differently; that you *expect* to need some excuses more than others. And—this is the fascinating part—when you think of something that makes it seem *more* likely that bond yields will go up, then you feel *less* likely to need an excuse for bond yields going down or remaining the same.

It even seems like there’s a relation between how much you anticipate each of the three outcomes, and how much time you want to spend preparing each excuse. Of course the relation can’t actually be quantified. You have 100 minutes to prepare your speech, but there isn’t 100 of anything to divide up in this anticipation business. (Although you do work out that, *if* some particular outcome occurs, then your utility function is logarithmic in time spent preparing the excuse.)

Still . . . your mind keeps coming back to the idea that anticipation is limited, unlike excusability, but like time to prepare excuses. Maybe anticipation should be treated as a *conserved resource*, like money. Your first impulse is to try to get more anticipation, but you soon realize that, even if you get more anticipation, you won’t have any more time to prepare your excuses. No, your only course is to *allocate* your *limited supply* of anticipation as best you can.

You’re pretty sure you weren’t taught anything like that in your statistics courses. They didn’t tell you what to do when you *felt* so terribly uncertain. They didn’t tell you what to do when there were no little numbers handed to you. Why, even if you tried to use numbers, you might end up using any sort of numbers at all—there’s no hint what kind of math to use, if you should be using math! Maybe you’d end up using *pairs* of numbers, right and left numbers, which you’d call DS for Dexter-Sinister . . . or who knows what else? (Though you do have only 100 minutes to spend preparing excuses.)

If only there were an art of *focusing your uncertainty*—of *squeezing* as much anticipation as possible into whichever outcome will *actually happen*!

But what could we call an art like that? And what would the rules be like?


Well written, but I guess we don’t have many folks here who object to the concept of subjective probability.

Decision making under conditions of uncertainty is hardly an unexplored field.

Eliezer wrote: “. . . you do work out that, if some particular outcome occurs, then your utility function is logarithmic in time spent preparing the excuse.” That kind of dropped out of the sky, didn’t it?

Since our puzzled pundit pontificates regarding market issues, it seems likely to me that he will draw upon his undergraduate training in economics to recognize this as an allocation problem, and he will immediately begin thinking in terms of equalizing marginal returns. Or, if his undergraduate training was at one of the better schools, then he will realize that he first has to show that marginal returns are decreasing before he begins equating them. I rather doubt that he would begin fretting about defending his allocations to a committee unless his training were in some other field entirely! :)

But even starting from an assumption of decreasing marginal utility, it is very unclear as to how he would guess that the utility function must be logarithmic. There are many decreasing functions. What is so special about the function MU(x)=1/x? Hmmm. Perhaps he can get some leverage by reflecting that he has already spent some amount of time “a priori” thinking about the problem even before the clock starts on his allocated 100 minutes. How much time? He can’t remember. But he does have the intuition that the decreasing function giving the marginal utility of additional prep time should have the same general shape regardless of how much “a priori” time was spent before the clock began ticking. That is, he intuits that shifting the function graph along the X axis should be equivalent to scaling it along the Y axis.

Or does this intuition seem just as “out of the sky” as the “logarithmic” intuition that I am trying to avoid?

The only way that I can make sense of the line you quote is to assume that the pundit already identifies “the probability that bond prices go up” with “the fraction of the 100 minutes that I ought to spend on a story explaining why bond prices went up”.

For simplicity, suppose that there are only two possible outcomes, UP and DOWN. Let p be the probability of UP, where 0 < p < 1. Let U(x) be the utility of having spent 100x minutes on an explanation for an outcome, given that that outcome occurs. (So, we are assuming that the utility of spending 100x minutes on a story depends only on whether you get to use that story, not on whether the story explains UP or DOWN. In other words, it is equally easy to concoct equally good stories of either kind.) Assume that the utility function U is differentiable.

The pundit is trying to maximize the expected utility

EU(x) = U(x) p + U(1 − x)(1 − p).

But it is given that the pundit ought to spend 100p minutes on UP. That is, the expected utility attains its maximum when x = p. Equivalently, the utility function U must satisfy

EU′(p) = U′(p) p − U′(1 − p)(1 − p) = 0.

That is,

U′(p) / U′(1 − p) = (1/p) / (1/(1 − p)).

This equation should hold regardless of the value of p. In other words, the conditions are equivalent to saying that U is a solution to the differential equation

U′(x) / U′(1 − x) = (1/x) / (1/(1 − x)).

It’s natural enough to notice that this DE holds if U′(x) = 1/x. That is, setting U(x) = ln(x) yields the desired behavior.

More generally, the DE says that U′(x) = (1/x) g(x) for some function g satisfying g(x) = g(1 − x). But if you are only interested in finding *some* model of the pundit’s behavior that predicts what the pundit does for all values of p, you can set g(x) = 1.

The “critical thinking” paper has changed location; it’s now at http://www.aft.org/pdfs/americaneducator/summer2007/Crit_Thinking.pdf
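The derivation can be sanity-checked numerically. This is a minimal sketch assuming U(x) = ln(x); the grid search and the particular values of p are illustrative, not part of the original comment:

```python
# Check numerically that, with U(x) = ln(x), the expected utility
# EU(x) = p*ln(x) + (1 - p)*ln(1 - x) is maximized at x = p,
# i.e. the optimal fraction of time spent on UP equals its probability.
import math

def expected_utility(x: float, p: float) -> float:
    """Expected utility of spending fraction x of the time on UP."""
    return p * math.log(x) + (1 - p) * math.log(1 - x)

def argmax_on_grid(p: float, steps: int = 10_000) -> float:
    """Brute-force the maximizing x over a fine grid inside (0, 1)."""
    return max((i / steps for i in range(1, steps)),
               key=lambda x: expected_utility(x, p))

for p in (0.2, 0.5, 0.85):
    assert abs(argmax_on_grid(p) - p) < 1e-3  # optimum sits at x = p
```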

First of all, there is a way to divide the period evenly: 100 minutes is 6,000 seconds, which is divisible by three. Secondly, your article seems to say that we should use our anticipation wisely, which would imply that anticipating small things is pointless. However, anticipation is a very important part of human life experience, and as such it is almost impossible to either use less of or create more of, unless one is capable of fooling oneself into an erroneous belief. Last, using rational thought to anticipate all outcomes equally reduces the emotional charge on anticipation, when emotion is a very important part of our motivation. Without having a favorite, you no longer care about the issue, and unless you are a practiced pundit, you will be unable to seem authentic on air without having your anticipation emotionally charged and correct.

If you see an error in my argument, please point it out, for this is my first post on these forums, and I’m still not used to thinking in your rational mindset.

The point of the article was that the subject really does anticipate different market outcomes to different extents, and that uncertainty can be represented using probability. Thus the 100 minutes of preparation time could be divided up usefully by spending n minutes on each justification, where n is reached by multiplying the probability of the outcome by 100 minutes.

For example, if you believe that the stock has a 60% chance of going up, a 35% chance of going down, and a 4.99% chance of staying the same, then you could spend 60 minutes preparing explanations of why it went up, 35 minutes preparing explanations of why it went down, 4.99 minutes preparing explanations of why it stayed the same, and .01 minutes fretting that you’ve forgotten one of the things that stocks can do.
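The rule in that example can be computed mechanically. A small sketch; the outcome names and probabilities below just restate the illustrative numbers above:

```python
# Allocate the 100 minutes in proportion to each outcome's probability.
def allocate_minutes(probabilities: dict[str, float],
                     total_minutes: float = 100.0) -> dict[str, float]:
    """Spend p * total_minutes preparing for an outcome of probability p."""
    return {outcome: p * total_minutes for outcome, p in probabilities.items()}

beliefs = {"up": 0.60, "down": 0.35, "same": 0.0499, "forgotten": 0.0001}
plan = allocate_minutes(beliefs)
assert abs(sum(plan.values()) - 100.0) < 1e-9  # the minutes are conserved
```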

You’re probably right about the emotional state of pundits, but that wasn’t really relevant to the point of the story.

Welcome to Less Wrong! It might be worth checking out Bayes’ Theorem.

This solution is the Kelly strategy. It isn’t generally optimal, but the logarithmic utility function assumed here makes it optimal in this case.
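A minimal sketch of that Kelly connection, with illustrative probabilities: under logarithmic utility, the proportional split yields higher expected log-utility than other fixed splits, such as an equal one.

```python
# With logarithmic utility, splitting time in proportion to your
# probabilities (the Kelly-style split) beats other fixed splits
# in expectation. Probabilities here are illustrative.
import math

def expected_log(split, probs):
    """Expected log-utility of a given time split over the outcomes."""
    return sum(p * math.log(s) for p, s in zip(probs, split))

probs = [0.60, 0.35, 0.05]
kelly = probs[:]          # proportional, Kelly-style split
equal = [1 / 3] * 3       # committee-proof equal split
assert expected_log(kelly, probs) > expected_log(equal, probs)
```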

Nice—I’m not sure if I’ve encountered that strategy before.

It’s about how, if you slide the probability of, say, bond yields going up to be more likely, that makes the probability of bond yields going down or staying the same less likely. We can’t say, “I think that there is a 40% chance of bond yields going up, and a 70% chance of bond yields going down or staying the same.”

“first impulse is to try to get more anticipation, but you soon realize that, even if you get more anticipation, you won’t have any more time to prepare your excuses.”

It is possible to get more anticipation.

Edit—I didn’t read the premises correctly. I missed the importance of the bit “Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can’t help you here—all three events are plausible. Fittability to your pet market theory doesn’t tell you how to divide your time. There’s an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited.”

The time one spends preparing excuses is only loosely, and also inversely, linked to how easy the event is to explain. When unsure of which outcome you will need to excuse, what you are looking for is not for the “most likely to be needed” excuse to be “really good,” but for any excuse you might need to be “as good as possible.”

Even if your pet theory is so useless as to be utterly general, it should still be possible to estimate which event is easiest to explain compared to the others, and that is where the least time should be spent. Failing that, if the events are all equally easy to explain with your pet theory, then the time spent trying to work out how to divide your time would be better spent writing whichever explanation, up or down, you think more likely of the two, until it is as good as you can get it in less than half the time; then do the same for the other; then spend a few minutes at the end explaining how these cancel out if the market stays the same or similar.

Better would be to write a long list of excuses with predicted up and down values, and use them to get a range of levels of upness and downness that you can combine, in any number, to excuse any specific level of up-and-downness. “Normally the reserve announcement would have had a huge upwards effect on the market, but because it was rainy today and baked beans are on sale in Wal-Mart, this is reflected in the only slight increase seen when looking at the market as a whole.” This way you can even justify trends right up until the moment of truth: “Earlier in the day the market was dropping due to the anticipated reserve announcement, but once it was discovered that Bolivia was experiencing solar flares, this slowed the downward trend, with the floating of shares in Greenpeace flinging the market back up again.”

Let’s use something more ‘predictable’ for illustrative purposes: you are a physics teacher in 1960s/70s America. Some serious-looking people in suits turn up at your door; their pet scientist and all his notes were disappeared by the Reds, and your country needs you.

After the time wasted arguing that it was insane to even ask you to do this, you have both a gun to your head and 100 minutes left to come up with excuses as to why the “Hammer and Feather on the Moon” experiment went any of the three ways.* Given that you have good reason to believe that the Hammer and Feather experiment may not go as you predict, spending 99.99% of your time on the obvious answer is a very unwise use of your time resource. In fact, it may be wiser to spend 1 minute on the obvious answer to have more time to try to excuse the feather hitting first.

*Turns out that the president had been told Russian telekinetics were going to mess with the results of the experiment to make Americans believe the moon landings had been faked. Or, if you prefer, perhaps they were worried that the props department in Area 51 hadn’t got the tensions on the invisible wires right yet . . .

“When unsure of an outcome to excuse what you are looking for is not the ‘most likely to be needed’ excuse to be ‘really good’ but for any excuse you need to be ‘as good as possible.’”

This might depend on the probability distribution across the possible events. If the probability of all three outcomes is similar (about 33.3% each), it might make sense to make each excuse as good as possible. But when one of the outcomes is really likely (say 85%+), you can start to think about adopting the “most likely needed excuse to be really good” strategy. Playing too defensively might guarantee to save you from embarrassment no matter what, but you can consider being greedy too.

And, as always, I vastly enjoy this first-person perspective that makes the necessity of rationality so blatantly obvious. However, does an 80 percent certainty in “bonds go up” mean a 20 percent certainty in “bonds go down or stay the same”? Can there not be a pool of still-undecided minutes left at the bottom of the anticipation barrel? If not, this mode of thinking clearly highlights one thing: if you are 95 percent certain that you turned off your oven, you are also 5 percent certain that you did not, which means that if you are bound for a vacation, 95 percent certainty in a turned-off oven should probably be enough to make you go check it again.

There was a small reference to Dempster-Shafer theory (“DS”), which is intended to address exactly this question. As Eliezer noted, you still need to divide your 100 minutes.

For even more complex, difficult formulations to accomplish this, Dezert-Smarandache Theory (DSmT) has you covered.

The link in ‘transferring thinking skills...’ has changed slightly. It is now www.aft.org/pdfs/americaneducator/summer2007/Crit_Thinking.pdf

There is a difference between probability and uncertainty.

Optimization is possible when you know the probabilities; it is hard or impossible when you don’t know them or are uncertain about them.

The link in this sentence is broken

Can someone with knowledge comment on whether the broken AFT critical thinking PDF link went to the document that’s now located here?

https://www.aft.org/sites/default/files/media/2014/Crit_Thinking.pdf

yes