Hmm, true, but if you took that argument to its logical extreme, the existence of a single grand opportunity would imply that the market is exploitable. I mean, technically, yeah, but when I talk about the EMH I mostly mean that $20 bills don’t show up every week.
My impression from Metaculus is that the probability of the virus becoming widespread has gotten higher in recent days, for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.
Do you care to share those reasons? I’ve also been following Metaculus and my impression has been a slow progression of updates as the outbreak has gotten bigger, rather than a big update. However, the stock market looks like it did a big update.
Eh, I’m not so sure. If I noticed that every Wednesday the S&P went up 1%, and then fell 1% the next day, that would allow me to regularly beat it, no? Unless we are defining “abnormal” in a way that makes reference to the market.
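To put numbers on that hypothetical pattern (the 1% weekly swing is purely illustrative, not a real regularity), a minimal sketch of why it would count as beating the market:

```python
# Hypothetical: suppose the S&P reliably rose 1% every Wednesday and fell 1%
# the next day. Buying at Tuesday's close and selling at Wednesday's close
# would capture the +1% leg each week while sitting out the -1% leg.
weekly_gain = 0.01
weeks_per_year = 52

# Compounded annual return from capturing only the Wednesday leg
annual_return = (1 + weekly_gain) ** weeks_per_year - 1
print(f"{annual_return:.1%}")  # roughly 67.8% per year

# Buy-and-hold earns roughly nothing from the pattern itself,
# since the +1% and -1% legs nearly cancel each week:
buy_and_hold = ((1 + weekly_gain) * (1 - weekly_gain)) ** weeks_per_year - 1
print(f"{buy_and_hold:.2%}")  # about -0.52% per year
```

So under that assumption the pattern is exploitable in the ordinary sense, without any reference to an "abnormal" benchmark.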
Yeah, similar to obesity, people seem quite willing to cave in to their desires. I’d be interested in knowing what the long-term effects of daily alcohol consumption are, though, because some sources have told me that it isn’t that bad for longevity. [ETA: The Wikipedia page is either very biased or strongly contradicts my prior sources!]
One way of framing the EMH is to say that in normal circumstances, it’s hard to beat the market. But we are in a highly abnormal circumstance—same with Bitcoin. One could imagine that even if the EMH is false in its strong form, you have to wait years before seeing each new opportunity. This makes the market nearly unexploitable.
A common heuristic argument I’ve seen recently in the effective altruism community is the idea that existential risks are low probability because of what you could call the “People really don’t want to die” (PRDWTD) hypothesis. For example, see here,
People in general really want to avoid dying, so there’s a huge incentive (a willingness-to-pay measured in the trillions of dollars for the USA alone) to ensure that AI doesn’t kill everyone.
(Note that I hardly mean to strawman MacAskill here; I’m not arguing against him per se.)
According to the PRDWTD hypothesis, existential risks shouldn’t be anything like war because in war you only kill your enemies, not yourself. Existential risks are rare events that should only happen if all parties made a mistake despite really really not wanting to. However, as plainly stated, it’s not clear to me whether this hypothesis really stands up to the evidence.
Strictly speaking, the thesis is obviously false. For example, how does the theory explain the facts that
When you tell most people about life extension, even billionaires who could probably do something about it, they don’t really care, and come up with excuses about why life extension wouldn’t be that good anyway. The same goes for cryonics, and note that I’m not just talking about people who think cryonics is low probability: there are many people who assign it a significant probability but still don’t care.
The base rate of a leader dying is higher if they enter a war, yet historically leaders have been quite willing to join many conflicts. By this theory, Benito Mussolini, Hideki Tojo, and Hitler apparently really, really wanted to live, but entered a global conflict anyway that could very reasonably have ended (and in fact did end) in their deaths. I don’t think this is a one-off thing either.
I have met very few people who have researched micromorts before and purposely used them to reduce the risk of their own deaths from activities. When you ask people to estimate the risks of certain activities, they will often be orders of magnitude off, indicating that they don’t really care that much about accurately estimating these facts.
As I said two days ago, few people seemed concerned about the coronavirus. Now, I get it: there’s not much you can do to personally reduce your own risk of death, so actually stressing about it is pointless. But there also wasn’t much you could do to reduce your risk of death after 9/11, and that didn’t stop people from freaking out. Therefore, if the theory you appeal to is that people don’t care about things they have no control over, then your theory is false.
Obesity is a common concern in America, with 39.8% of adults here being obese, despite the fact that obesity is probably the number one contributor to death besides aging, and it’s much more controllable. I understand that it’s really hard for people to lose weight, and I don’t mean to diminish people’s struggles. There are solid reasons why it’s hard to avoid being obese for many people, but the same could also be true of existential risks.
I understand that you can clarify the hypothesis by talking about “artificially induced deaths” or some other reference class of events that fits the evidence I have above better. My point is just that you shouldn’t state “people really don’t want to die” without that big clarification, because otherwise I think it’s just false.
You might find Stuart Armstrong’s paper Anthropic decision theory for self-locating beliefs helpful.
Note that ADT is essentially the anthropic version of the far more general Updateless Decision Theory and Functional Decision Theory.
That’s OK for most people. I can hope that bureaucrats, expert advisers, politicians, and e.g. Trump’s internal staff don’t share the same attitude.
Is it really necessary that I personally used my knowledge to sell stock? Why is it that important that I actually made money from what I’m saying? I’m simply pointing to a reasonable position given the evidence: you could have seen a potential pandemic coming, and anticipated the stock market falling. Wei Dai says above that he did it. Do I have to be the one who did it?
In any case, I used my foresight to predict that Metaculus’ median estimate would rise, and that seems to have borne out so far.
“Experts” counter-signal to separate themselves from the masses by saying “no need to panic”.
I think the main reason is that the social dynamic is probably favorable to them in the long run. I worry that there is a higher social risk to being alarmist than to being calm. Let me try to illustrate one scenario:
My current estimate is that there is only a 15–20% probability of a global disaster (>50 million deaths within 1 year), mostly because the case fatality rate could be much lower than the currently reported rate, and previous illnesses like the swine flu ended up looking much less serious after more data came out.
Let’s say that the case fatality rate turns out to be 0.3% or something, the illness does start looking like an abnormally bad flu, and people stop caring within months. “Experts” face no criticism, since they remained calm and were vindicated. People like us sigh in relief, and are perhaps reminded by the “experts” that there was nothing to worry about.
But let’s say that the case fatality rate actually turns out to be 3%, and 50% of the global population is infected. Then it’s a huge deal, global recession looks inevitable. “Experts” say that the disease is worse than anyone could have possibly seen coming, and most people believe them. People like us aren’t really vindicated, because everyone knows that the alarmists who predict doom every year will get it right occasionally.
Like with cryonics, the relatively low but still significant chance of a huge outcome makes people systematically refuse to calculate expected value. It’s not a good feature of human psychology.
I’m reminded of the fire alarm essay
When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.
What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.
I think what we’re seeing now is the smoke coming out from under the door and people don’t want to be the first one to cause a scene.
[ETA: I’m writing this now to cover myself in case people confuse my short form post as financial advice or something.] To be clear, and for the record, I am not saying that I had exceptional foresight, or that I am confident this outbreak will cause a global depression, or that I knew for sure that selling stock was the right thing to do a month ago. All I’m doing is pointing out that if you put together basic facts, then the evidence points to a very serious potential outcome, and I think it would be irrational at this point to place very low probabilities on doomy outcomes like the global population declining this year for the first time in centuries. People seem to be having weird biases that cause them to underestimate the risk. This is worth pointing out, and I pointed it out before.
As I said, I wrote a post about the risk about a month ago...
If the stock market indeed fell due to the coronavirus, and traders at the time misunderstood the severity, I say that I could have given actionable information in the form of “Sell your stock now” or something similar.
It’s bad if this behavior shows up in future catastrophes IFF different behavior was available (knowable and achievable in terms of coordination) that would have reduced or mitigated the disaster.
Are things only bad if we can do things to prevent them? Let’s imagine the following hypothetical situation:
One month ago I identify a meteor on a collision course towards Earth, and I point out to people that if it hit us (which is not clear, but there is some pretty good evidence) then over a hundred million people will die. People don’t react. Most tell me that it’s nothing to worry about since it hasn’t hit Earth yet, and therefore the death rate is 0.0%. Today, however, the stock market fell over 3%, following a day in which it fell 3%, and most media outlets are attributing this decline to the fact that the meteor has gotten closer. I go on the LessWrong shortform and say, “Hey guys, this is not good news. I have just learned that the world is so fragile that it looks highly likely we can’t get our shit together to plan for a meteor even when we can see it coming more than a month in advance.” Someone tells me that this is only bad IFF different behavior was available that would have reduced or mitigated the disaster. But information was available! I put it in a post and told people about it. And furthermore, I’m just saying that our world is fragile. Things can still be bad even if I don’t point to a specific policy proposal that could have prevented them.
“plummeted”? S&P 500 is down half a percent for the last 30 days and up 12% for the last 6 months.
The last few days have been much more rapid.
Here’s the chart I have for the last 1 year, and you can definitely spot the recent trend.
Death rate so far seems well under that for auto collisions.
According to this source, “Nearly 1.25 million people die in road crashes each year.” That comes out to approximately 0.017% of the global population per year. By contrast, unless the sources I provided are seriously incorrect, the coronavirus could kill between 0.78% and 2.0% of the global population. That’s nearly two orders of magnitude of difference.
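The arithmetic behind that comparison can be checked directly (the 7.5 billion world-population figure is my own assumption for the period; the fatality range comes from the sources cited above):

```python
world_pop = 7.5e9          # assumed global population, early 2020
road_deaths = 1.25e6       # annual road-crash deaths, per the cited source

road_rate = road_deaths / world_pop
print(f"{road_rate:.3%}")  # ~0.017% of the global population per year

# Coronavirus range quoted above: 0.78% to 2.0% of the global population
low, high = 0.0078, 0.020
print(low / road_rate, high / road_rate)  # roughly 47x to 120x the road-death rate
```

The ratio lands between about 47x and 120x, which is why "nearly two orders of magnitude" is the right characterization.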
Think through actual scenarios and how your behaviors might actually influence them, rather than just making you feel somewhat less guilty about it.
The point of my shortform wasn’t that we can do something right now to reduce the risk massively. It was that people seem irrationally poised to dismiss a potential disaster. This is plausibly bad if this behavior shows up in future catastrophes that kill e.g. billions of people.
Sure, that’s the “things are unpleasant for a while and then get better” scenario.
Where would you place global economic depression on your bimodal distribution?
See my shortform post.
I share this reaction. I think that a lot of people are under-reacting due to misperception of overreaction, signaling wisdom and vague outside view stuff. I can tell because so far everyone who has told me to “stop panicking” won’t give me any solid evidence for why my fears are underrated.
It now seems plausible that, unless prominent epidemiologists are just making stuff up and the death rate is also much smaller than its most commonly estimated value, between 60 and 160 million people will die from it within about a year. Yet when I tell people this they just brush it off!
There is a large set of people who went around, and are still going around, telling people that “The coronavirus is nothing to worry about” despite the fact that robust evidence has existed for about a month that this virus could result in a global disaster. (Don’t believe me? I wrote a post a month ago about it.)
So many people have bought into the “Don’t worry about it” syndrome as a case of pretending to be wise, that I have become more pessimistic about humanity correctly responding to global catastrophic risks in the future. I too used to be one of those people who assumed that the default mode of thinking for an event like this was panic, but I’m starting to think that the real default mode is actually high status people going around saying, “Let’s not be like that ambiguous group over there panicking.”
Now that the stock market has plummeted, in a way that from my perspective appeared entirely predictable given my inside-view information, I am also starting to doubt the efficiency of the stock market in response to historically unprecedented events. And this outbreak could be even worse than some of the most doomy media headlines are saying. If epidemiologists like the one in this article are right, and the death rate ends up being 2–3% (which seems plausible, especially if world infrastructure is strained), then we are looking at a mainline death count of between 60 and 160 million people within about a year. That could mark the first time world population has dropped in over 350 years.
This is not just a normal flu. It’s not just a “thing that takes out old people who are going to die anyway.” This could be like economic depression-level stuff, and is a big deal!
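For transparency, here is a rough back-of-the-envelope that reproduces the 60–160 million range quoted above (the attack-rate range and the 7.7 billion population figure are my assumptions chosen to match the quoted numbers; the 2–3% death rate is from the cited article):

```python
world_pop = 7.7e9                 # assumed global population, early 2020
case_fatality = (0.02, 0.03)      # 2-3% death rate, per the cited article
attack_rate = (0.40, 0.70)        # assumed fraction of the world infected

# Mainline range: low-end attack rate with low-end fatality,
# high-end attack rate with high-end fatality
low = world_pop * attack_rate[0] * case_fatality[0]
high = world_pop * attack_rate[1] * case_fatality[1]
print(f"{low / 1e6:.0f}-{high / 1e6:.0f} million deaths")  # ~62-162 million
```

Any such estimate is dominated by the two highly uncertain inputs (attack rate and case fatality rate), which is why the range spans nearly a factor of three.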
Thanks for the feedback. As a writer I still have a lot to learn about being more clear.
see above about trying to conform with the way terms are used, rather than defining terms and trying to drag everyone else along.
This seems odd given your objection to “soft/slow” takeoff usage and your advocacy of “continuous takeoff” ;)