Text of the shared image; don’t say I didn’t warn you about the quality of the writing, and [sic] for the whole thing. It does read like it could really be from an elderly physical therapist.
If you end up with pulmonary symptoms of corona virus pneumonia… there can be lethal damage from effusion (mucous filling lungs) or cytokine storm (body over-reacts with more effusion.
This kills people… ESPECIALLY when the number of patients is greater than the number of ICU beds or ventilators. You will be left to drown in your mucous. That mucous can also be infected by other germs during your struggle. That is happening in Italy where they have 5x more patients than they have hospital beds. And the USA has FEWER hospital beds per population than does Italy.
Many years ago, physical therapists have successfully treated this with POSTURAL DRAINAGE… where the patient is tipped over a wedge to tilt the lungs and bronchial tubes upside down… to allow the mucous to flow out, where it can be coughed out.
Google it. It is EASY to do for yourself and your family members. Simply get in position and let it flow, helping it along with breathing techniques that emphasize full, prolonged exhale, while puffing your cheeks and you blow out long and steady.
Start as soon as you feel lungs getting filled. Don’t wait until you are too sick to bother. 3-5 minutes several times per day.
I did this inside a nursing home in VT during the 1976 flu epidemic for resident patients. We did not lose anyone, while other nursing homes lost dozens. It is an old PT technique that has faded away since we have ventilators and related machines. BUT this time, we will NOT have nearly enough ventilators, not the ICU beds where they are provided.
One easy way to get into position is to lie over an EXERCISE BALL.
...and there the image ends.
I’ve seen an image on social media that suggests postural drainage, a physical therapy practice used mostly for cystic fibrosis, as a way to cope with COVID-19 at a sub-hospitalization stage; the shared image suggests that draining the mucus can keep a patient from needing a ventilator. (I’ll transcribe the actual text attached to the image in a subthread, but it’s of pretty low quality; I’ve written here what I think is the only interesting point.)
Unfortunately, Googling “postural drainage coronavirus” just gets me all the medical pages on postural drainage (because they now have headers about coronavirus).
It’s a very cheap intervention for patients not on ventilators, the mechanism seems at least plausible, and it’s the sort of thing that medical professionals might fail to consider. Is it worth taking a closer look?
Nice legwork! It’s insanity and incompetence on the part of the experts after all.
Robin Hanson started it.
I think it shouldn’t matter much which definition was used, but the World Series of Poker has one “Main Event” consisting of no-limit Texas Hold ’em, and several smaller events for the different styles of poker. I would have interpreted the question as asking about the Main Event only if it didn’t specify.
[EDIT: the following is mistaken and the claim in OP was correct, though that wasn’t knowable from the publicly released data. See habryka’s comment.]
Many of the expert predictions were indeed crazily optimistic and had tiny error bars, but there’s a problem with the story. FiveThirtyEight mistakenly reported (and they still haven’t updated this!) that the March 16-17 survey asked experts about the number of cases reported on Covid Tracker on March 29, when in fact the survey asked about March 23rd.
The correct number on the 23rd was 42,152. This was of course in line with the exponential extrapolation, and it was worse than the worst-case estimates of 13 out of 18 researchers, but at least their estimates show only typical levels of insanity and incompetence.
If the listener is running a computable logical uncertainty algorithm, then for a difficult proposition it hasn’t made much sense of, the listener might say “70% likely it’s a theorem and X will say it, 20% likely it’s not a theorem and X won’t say it, 5% PA is inconsistent and X will say both, 5% X isn’t naming all and only theorems of PA”.
Conditioned on PA being consistent and on X naming all and only theorems of PA, and on the listener’s logical uncertainty being well-calibrated, you’d expect that in 78% of such cases X eventually names it.
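The conditional arithmetic above can be checked directly: conditioning the listener’s four-way credence on PA being consistent and X being sound leaves only the first two cases, and the theorem-and-said case makes up 0.70 / 0.90 of that remaining mass. A minimal sketch (the variable names are mine, matching the cases in the comment):

```python
# Prior mass the hypothetical listener assigns to each case:
p_theorem_and_said = 0.70       # it's a theorem and X eventually says it
p_nontheorem_and_unsaid = 0.20  # not a theorem and X never says it
p_pa_inconsistent = 0.05        # PA inconsistent; X says both it and its negation
p_x_unsound = 0.05              # X isn't naming all and only theorems of PA

# Condition on PA being consistent and X naming all and only theorems of PA:
# only the first two cases survive.
conditioning_mass = p_theorem_and_said + p_nontheorem_and_unsaid
p_said_given_condition = p_theorem_and_said / conditioning_mass

print(round(p_said_given_condition, 3))  # 0.778, i.e. about 78%
```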
But you can’t use the listener’s current probabilities on [X saying it] to sort out theorems from non-theorems in a way that breaks computability!
What am I missing?
I don’t know whether you missed this or just didn’t spell it out, but the reason that the likelihood ratio is so much in favor of materialism over solipsism is that if solipsism were true, you could experience literally anything, and apparently ordered universes consistent with simple physical laws are a vanishing subset of the possible experiences.
You’d need a solipsist theory that strongly predicts ordered universes without adding in too much complexity, just in order to be in the conversation with the likelihood of qualia being reducible to materialism.
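The point about ordered experiences being a vanishing subset can be made quantitative with a toy Bayes-factor sketch. The numbers here are purely illustrative assumptions of mine, not anything from the comment; the point is only that when one hypothesis assigns near-certain probability to lawful observations and the other spreads its probability over all possible experiences, repeated ordered observations compound into an enormous likelihood ratio:

```python
import math

# Illustrative (assumed) per-observation likelihoods of seeing a lawful,
# physics-consistent experience under each hypothesis:
p_ordered_given_materialism = 0.999
p_ordered_given_solipsism = 0.01  # solipsism permits almost any experience

# Accumulated log Bayes factor over n independent ordered observations:
n = 100
log_bayes_factor = n * (
    math.log(p_ordered_given_materialism) - math.log(p_ordered_given_solipsism)
)

# With these toy numbers the ratio exceeds e^100 after only 100 observations,
# so a solipsist theory needs to strongly predict order just to stay in the game.
print(log_bayes_factor > 100)
```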
“Qualia being irreducible” is, to be as charitable to you as possible, in the reference class of philosophical positions that some people have seen as unassailable and others have seen as flawed. You don’t get to assign incredibly high probability within this reference class.
(To be uncharitable, it is an intuition for which you cannot provide even what looks like an airtight philosophical argument, just louder reiterations of your intuition.)
In general, it’s good to check your intuitions against evidence where possible (so, seek out experiments and treat experimentally validated hypotheses as much stronger than intuitions).
The valley being described here is the idea that you should just discard your intuitions in favor of the null hypothesis, not just when experiments have failed to reject the null hypothesis (though even here, they could just be underpowered!), but when experiments haven’t been done at all!
It’s a generalized form of an isolated demand for rigor, where whatever gets defined as a null hypothesis gets a free pass, but anything else has to prove itself to a high standard. And that leads to really poor performance in domains where evidence is hard to come by (quickly enough), relative to trusting intuitive priors and weak evidence when that’s all that’s available.
But we already filter more than the reference class of smart Internet people, that’s the point. cousin_it argues, and I agree, that this community may already be on the extreme of “filters too carefully in the face of the need for urgent updates”. We did well by taking COVID-19 seriously before it was proven, and we could have done still better on that front.
Sometimes. More often the hero just tries AGAIN, BUT HARDER.
Huzzah, convergence! I appreciate the points you’ve made.
Don’t know if you saw, but I updated the post yesterday because of your (and khafra’s) points.
Also, your caveat is a good reframe of the main mechanism behind the post.
I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.
In the absence of an external crisis, taking relatively safe actions (and few irreversible actions) is correct in the short term, and the status quo is going to be reasonably safe for most people if you’ve been living it for years. If you can back off from newly-suspected-wrong activities for the time being without doing so irreversibly, then yes that’s better.
Well, if you have a space program and you’re dealing with crystal spheres...
I think khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you’re updating on that. (In this case, the signs would be watching people you respect tell you to start prepping immediately: act on the warning lights in the cockpit rather than waiting for the engines to fail.)
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.
Obviously it applies if you’re the lead on a new technological project and suddenly realize a plausible catastrophic risk from it.
I don’t think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly outweighed by the value of information thus gained.
I’d modify that, since panic can make you falsely put yourself in weird reference classes in the short run. It’s more reliable IMO to ask whether anything has shifted massively in the external world at the same time as it’s shifted in your model.
How about promise yourself to keep steering the plane mostly as normal while you think about lift, as long as the plane seems to be flying normally?
I wish I’d remembered to include this in the original post (and it feels wrong to slip it in now), but Scott Aaronson neatly paralleled my distinction between rationalists and post-rationalists when discussing interpretations of quantum mechanics:
But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you. Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?
Scott tries his best to give a not-answer and be done with it, which is in keeping with my categorization of him as a prominent rationalist-adjacent.