I think the trouble might come from imagining a gradual process by which a dog population evolved into a tumor population (which is not what happened; the wording in the original post is pretty misleading). The dog-to-tumor part is actually the easier and less shocking part of the story. Tumors are basically just cells that, through some mutation, have trouble regulating cell division and so divide uncontrollably. Malignant tumors (what we call cancers) are just tumors that happen to harm the organism (and maybe metastasize). So this particular tumor was once a dog cell, just as every human cancer starts out as a human cell. The interesting part of the story is that the tumor acquired a limited ability to survive outside the original dog’s body, and came to be able to survive within other dogs and other canids.
AlexSchell
On any reasonable operational definition of “less entangled with reality than most religions”, you are ridiculously wrong in claiming that medicine fits the description, and I think Hanson might agree. (I’m less certain about this with regard to certain subfields like stroke rehabilitation, certain sub-subfields in nutrition, etc., but I’m talking about the weighted accuracy of the sorts of activities that Western MDs perform, that are taught in Western medical schools, etc.)
EDIT: Full disclosure: I’m a pharmacy student, so it would be moderately devastating to my sense of worth if you were right. Still.
It’s one thing to make lemonade out of lemons, another to proclaim that lemons are what you’d hope for in the first place.
Gary Marcus, Kluge
Relevant to deathism and many other things
At least one point of Three Worlds Collide is to help the reader appreciate what Irreconcilable Moral Differences feel like from the inside. Humanity revising its view of consent contributes to this goal, and has the benefit of being nearer. With immortality to keep past generations alive, sufficient cumulative moral progress will feel to them about as alien and terrible as legalizing rape.
This blog post argues that the now-popular idea of “flattening the curve” (most people get exposed, but slowly enough not to overwhelm the health care system) is not feasible. The upshot is that we’ll see either containment or at least widespread regional health care system collapse (and maybe Wei Dai’s global health care collapse outcome). I haven’t spent much time modeling this yet, but tentatively it looks like flattening the curve requires very precise fine-tuning of the effective reproduction number to stay very close to 1 for at least several months, which seems impossible to pull off.
It feels to me now that flattening the curve is just a nice graphic without anyone checking the math, but I am confused that many informed-seeming experts are promoting the idea. Anything I’m missing?
ETA: I made an epidemic + hospitalization model (Google Sheets); it sure looks like the usual flatten-the-curve chart is a comforting fiction. Peak hospital bed demand in the uncontrolled-epidemic scenario is usually drawn at 2-3x hospital capacity; I’m getting 25x, and the chart looks a lot less reassuring. My shakiest assumptions are the hospitalization and intensive care rates, so any feedback there would be very helpful.
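For intuition, here is a minimal discrete-time SIR sketch (not the spreadsheet model; every parameter below is an illustrative assumption of mine) showing how far peak bed demand in an uncontrolled epidemic can overshoot capacity:

```python
# Minimal discrete-time SIR model. All parameters are illustrative
# assumptions, not the values from the linked spreadsheet.
N = 330e6                 # population (roughly US-sized)
beta, gamma = 0.5, 0.2    # contact and recovery rates: R0 = beta/gamma = 2.5
hosp_rate = 0.05          # assumed fraction of current infections needing a bed
beds = 900e3              # assumed staffed hospital beds

S, I, R = N - 1.0, 1.0, 0.0
peak_demand = 0.0
for day in range(730):
    new_infections = beta * S * I / N
    recoveries = gamma * I
    S -= new_infections
    I += new_infections - recoveries
    R += recoveries
    peak_demand = max(peak_demand, hosp_rate * I)

print(f"Peak bed demand is {peak_demand / beds:.1f}x capacity")
```

Even with these mild assumptions the peak lands at several times capacity, and pushing the hospitalization rate or average length of stay toward published estimates makes the overshoot far worse.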
I propose “Confusion Awareness Day”.
Well, it’s a pretty clear instance of the availability heuristic.
Donated $100.
There also seems to be a reference to the Singularity Institute:
You should get out more :)
By this I mean to become more acquainted with non-SI efforts in machine learning and AI (which is almost the same as “efforts in machine learning and AI”).
Regardless of whether we should have more or fewer posts, the problem you noticed is more precisely traced back to the lack of infrastructure aimed at collating the best output and resources produced or aggregated here. I got a lot of benefit from the “best textbooks” thread, the post(s) introducing Beeminder back in the day, the post by cousin_it on mutual screen-monitoring, and perhaps from a few other interventions (standing desks, nicotine) I picked up in the local memespace. I doubt I could find many of these as a newcomer, except by lurking around for long enough. Not proposing a solution so far, but this seems to be a common problem with big blogs that have lots of excellent content but even more chaff.
ETA: N-acetylcysteine is actually FDA-approved, but only as an expectorant and as an antidote for acetaminophen (APAP) overdose.
I greatly benefited from a silly-seeming “information hazard management scheme” (suggestions for better/existing terms are welcome):
I was going to interview at my top choice med school that I applied to, School A, and knew that I would receive my first admissions decision, from School B, on the same day I would be interviewing. I was mildly confident (60-70%) that School B would accept me, and I really wanted to know their decision. If I were to be accepted, I figured I’d get a confidence boost that would improve my interview performance at School A. But finding that I hadn’t been accepted would badly shake my confidence.
So I arranged to receive a noisy and biased signal of my admissions decision. I asked my sister to execute the following when prompted: flip a coin twice; if the outcome is HH, stop there; otherwise, log into my email account and send me a text message iff School B admitted me. This protocol dilutes the bad news with noise (P(admitted | no text) ≈ 0.3-0.4) while still being 75% likely to deliver the good news if it exists.
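The posterior under this protocol can be checked with a few lines of exact arithmetic (a sketch of the protocol as described; the function name is mine):

```python
from fractions import Fraction

def posterior_admitted_given_no_text(p_admit):
    """P(admitted | no text) under the two-coin-flip protocol.

    HH (probability 1/4) suppresses the message unconditionally;
    otherwise a text is sent iff the decision is an acceptance.
    """
    p_no_text_given_admit = Fraction(1, 4)   # only HH hides the good news
    p_no_text_given_reject = Fraction(1)     # a rejection never texts
    joint_admit = p_admit * p_no_text_given_admit
    joint_reject = (1 - p_admit) * p_no_text_given_reject
    return joint_admit / (joint_admit + joint_reject)

# With a 60-70% prior on admission, silence still leaves roughly 27-37% odds:
lo = posterior_admitted_given_no_text(Fraction(6, 10))   # 3/11 ≈ 0.27
hi = posterior_admitted_given_no_text(Fraction(7, 10))   # 7/19 ≈ 0.37
```

Note that silence is only weak evidence of rejection, while a text is conclusive evidence of acceptance, which is exactly the asymmetry the protocol was built for.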
On interview day at School A, the decision email arrived about 30 minutes before my actual interviews. I let my sister know, and waited. I didn’t hear back, and as expected, I didn’t think much of it. I resisted the temptation to open the email until after my interviews (the interviews went fine). It turned out that School B waitlisted me, which predictably wrecked my confidence. School A would later be the only med school to accept me.
[Instrumentalism about science] has a long and rather sorry philosophical history: most contemporary philosophers of science regard it as fairly conclusively refuted. But I think it’s easier to see what’s wrong with it just by noticing that real science just isn’t like this. According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils, astrophysicists talk about stars so they can understand photoplates, virologists talk about viruses so they can understand NMR instruments, and particle physicists talk about the Higgs Boson so they can understand the LHC. In each case, it’s quite clear that instrumentalism is the wrong way around. Science is not “about” experiments; science is about the world, and experiments are part of its toolkit.
If you first do lockdowns to get new cases to ~0 and then relax, optimistically you will get localized epidemics that you can contain with widespread testing, contact tracing, and distancing if needed. Cost of testing & tracing and having to do occasional local/regional lockdowns could end up being manageable until treatment/vaccine arrives.
My main reason for optimism is Korea’s and China’s success in containing large outbreaks. We will be expecting the secondary epidemics and reacting quickly, so they will be small when detected and should be much easier to contain than the first, surprise outbreak.
We’ll get data on this in the coming months as China loosens restrictions. There is option value in containing asap and first trying things other than deliberate infections.
you shouldn’t feed the patient rat poison
Are you referring to warfarin here or am I imagining things?
I am working on finishing up a philosophy paper about whether “fine-tuning” (the claim that the physical constants and initial conditions that permit the evolution of life and conscious observers are rare in the space of physically possible parameters) supports “multiverse” hypotheses according to which the cosmos is huge and heterogeneous in its local conditions. One major argument for the view that fine-tuning does not support multiverse hypotheses is due to Ian Hacking, who claimed that this inference is analogous to an “inverse gambler’s fallacy”, in which a gambler enters a casino, witnesses a roll of dice resulting in double-sixes, and concludes that the gamblers must have been throwing dice for a while.
While going through Nick Bostrom’s book Anthropic Bias, I’ve found his discussion of Hacking’s argument (and of a significantly improved recent version by Roger White, available here) somewhat unilluminating, although I thought there must be something wrong with the argument. Going through the existing replies to this argument in the literature, I’ve found counterarguments that either fail straightforwardly or (more commonly) render fine-tuning irrelevant to whether multiverse hypotheses are confirmed, degenerating into an almost a priori argument that I find very implausible. I’ve found a fairly simple way of seeing how exactly the Hacking/White argument goes wrong, by combining Bostrom’s self-sampling assumption with a technical fix independently arrived at by a few other philosophers. This solution does not generate the implausible a priori argument for the multiverse that previous approaches in the literature do, as long as the reference class (for applying the self-sampling assumption) satisfies some weak requirements.
The result is a critical review paper that goes through the literature while building up the concepts needed to understand the proposed solution. I’ve produced all the content, and am now mostly working on finishing a draft: integrating notation across sections, making it readable to philosophers with at least rudimentary knowledge of Bayesianism, and generally improving the paper to meet top-tier journal standards.
It looks like widespread border closures are inevitable now, and border policy will become even more visibly important if/when community transmission is brought under control in a country (e.g. as in China today where ~100% of new cases outside Hubei are imported). So I don’t think advocating for border closures is high leverage at the moment.
I agree that it’s super high leverage to get the public and policymakers to understand that it’s not too late for eradication (R0 < 1) through strong social distancing, and that it may be feasible to keep secondary epidemics controlled at social cost far below that of continued lockdown. [ETA: as far as I can tell there is near consensus on this point among vocal rationalists on Coronavirus Twitter, but I have seen no public official or advisor state or signal that they are looking in this direction at all.]
An important part of that will be running the numbers on something like Taleb’s proposal or your “Basically” paragraph. Ferguson et al. got quantitative results from their model and set of assumptions; so far the response has been mostly handwaving and pointing at case studies (where we have 2 months of data and are making claims about sustainable policy on a 1-2 year scale).
The only point of probabilities is to have them guide actions. How does the concept of Knightian uncertainty help in guiding actions?
I’d like to be more conscious about Bayesian-style updates to my beliefs based on what people tell me. So far, I’ve started using a rule of thumb: somebody telling me that something is so is worth approximately 1 decibel of evidence (about a third of a bit).
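As a sketch of the arithmetic behind that rule of thumb (the function name is mine):

```python
import math

def update(prior_prob, decibels):
    """Shift a probability by `decibels` of evidence.

    1 decibel corresponds to a likelihood ratio of 10 ** (1 / 10) ≈ 1.26,
    i.e. about a third of a bit.
    """
    odds = prior_prob / (1 - prior_prob)   # probability -> odds
    odds *= 10 ** (decibels / 10)          # apply the likelihood ratio
    return odds / (1 + odds)               # odds -> probability

print(update(0.5, 1))        # a 50% belief moves to about 55.7%
print(math.log2(10) / 10)    # about 0.332 bits per decibel
```

Working in log-odds like this makes independent pieces of testimony simply add, which is what makes the “1 decibel per person” heuristic convenient.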
You don’t really believe that. When someone introduces themselves to you under normal circumstances, your probability distribution about what their name is concentrates immensely in a few seconds. See this paper by Robin Hanson for this point and related discussion.
The success rate for out-of-hospital cardiac arrest (measured as survival until discharge) is about 10% in the US, much higher than the quoted person’s experience of <1%. (I’m not sure if this figure counts all arrests or only arrests that make it to the ER; since survival is lower for arrests that don’t make it to the ER, 10% may be an underestimate depending on what’s counted and what’s not.)
Hastie & Dawes, Rational Choice in an Uncertain World, pp. 67-8.