I’ve seen a lot of complaints about Metz’s history, but they all seem backwards to me. They seem like a satire of virtue ethics.
Who do you think he’s “working for”? If he is working for outside forces (eg, keeping a source happy), then drawing attention to it is exactly the best way to take it out of his hands and force him to work for his editor; and force his editor to work for the paper.
Writing puff pieces sounds more lazy than malicious to me.
Are there compendiums or classifications of trolley problems? What is the most extreme real-world trolley problem? By “real-world” I mean something that really happens, emphasis on the plural. I don’t want one-off examples where one person has the moral luck of having to face it and everyone else can breathe easy that they didn’t have to think about it. I want examples where there is a definite, known policy. By “extreme,” I mean something that really pushes people’s buttons. By a classification, I mean a classification of which features make it more like a visceral trolley problem and which more like a blurry statistical haze that allows trading lives.
I propose a candidate: the dengue vaccine. In any event, I think people will find it interesting.
Dengue fever is an often-fatal mosquito-borne tropical viral disease. People develop immunity, so we could make a vaccine. Obvious candidate, except … Since we are all now experts in antibodies, we all know about the crazy phenomenon of antibody-dependent enhancement, mainly observed in dengue. It is not one virus, but four closely related strains with different envelope proteins and different immunity. If you get one, it’s a non-lethal disease and you become immune to that strain. But you’re still vulnerable to the other strains and, for not entirely clear reasons, infection with a new strain is much worse.
If you’ve already had some variant of dengue, any vaccine is better than none. But if you’ve never been exposed, it might be worse than not vaccinating. So of course the vaccine is a combination of all four variants. What if each of the four vaccines had a 95% chance of working, independently? Then someone receiving the vaccine would have about a 20% chance of not being protected against all four. Let’s say that’s worse than nothing. Vaccinating everyone is a trolley problem benefiting people who have been exposed at the expense of those who have not. Both the benefit and the harm are statistical (you don’t know that you’ll ever get dengue in the future), but the two groups of people can be identified ahead of time, not in a God’s-eye view of who will be bitten, but in a genuinely testable way: you could just test people for antibodies. If you’re first-world rich, perhaps a tourist from the first world, you can get repeated antibody testing, and if you ever test positive, then you should get the vaccine. But the testing is more expensive than the vaccine (and logistically complicated) and Filipinos are poor, so we’re not going to pay to test them. Should we choose some simple criterion, like an age threshold plus living in a badly hit area, and just vaccinate everyone?
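The 20% figure is easy to check. A sketch of the arithmetic, using the hypothetical 95% per-strain efficacy from above (the independence of the four components is also an assumption of the hypothetical):

```python
# Hypothetical: four independent strain components, each with a
# 95% chance of conferring immunity to its strain.
p_per_strain = 0.95

# Probability that all four components take, i.e. full protection.
p_all_four = p_per_strain ** 4

# Probability of remaining vulnerable to at least one strain --
# the dangerous case for someone never exposed to dengue.
p_at_least_one_gap = 1 - p_all_four

print(f"fully protected:  {p_all_four:.1%}")        # ~81.5%
print(f"gap in coverage:  {p_at_least_one_gap:.1%}")  # ~18.5%, roughly 20%
```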
This was a hypothetical and I’m not sure people were ever faced with this decision. If so, they decided not to pull the switch and instead kept working on the vaccine until it was much better than 95% effective. It was so effective (at least as measured by producing antibodies) that they rounded it off to 100%, declared the problem solved, and vaccinated a bunch of Filipinos who were old enough that they’d probably had it once.
And then the data trickled in and it saved lots of lives (net), but it wasn’t quite as good as hoped. People who had been vaccinated still got dengue, just not as often. But surely that meant that, in people who hadn’t been exposed before, the vaccine was promoting mild dengue to severe dengue? This seems pretty obvious, but they put their fingers in their ears and waited for the data to pin it down. That waiting, or maybe something else, burned their credibility, and now the WHO policy is that you shouldn’t give anyone the vaccine without an antibody test. Practically speaking, that means no vaccines.
This is a trolley problem that happened in the real world, and the fact that the groups of people are potentially knowable seems really important to the reluctance to switch tracks. But the rejection of the vaccine is not purely the result of the trolley problem; it is also about burnt credibility.
A common pedagogical example of the perils of correlation analysis is that ice cream consumption is correlated with homicide. The common cause is seasonal variation. This is usually presented as an absurd example, a mistake no one would make, but there is an extremely similar example that was nationally prominent: polio was blamed on ice cream consumption because they had the same seasonal pattern. I wonder if the standard example was engineered from the real example. Perhaps it is better (eg, more absurd), but one doesn’t have to choose just one example; surely it is better to also include the historical example.
What do you mean by “mistake theory” and “conflict theory”?
I’m really confused by this comment and I think you are using the terms backwards. Telling someone that they’ve made a mistake is a violent act, a form of conflict, but it is an example of mistake theory.
Some people theorize that there is an irreducible conflict. They generally recommend that their side not talk to NYT. Until the doxing came up, they were the dominant voices on the topic of this article in preparation, or at least the ones causing discussion. But after the topic moved on to doxing, they have nothing more to say and have been overwhelmed by mistake theorists.
This LW thread is almost entirely about mistake theory. Maybe you see different things on twitter, but if so, you should say that, because the one thing all your readers have in common is that they’re on LW.
Is this one of those exercises in which you write out your argument and then reverse the valence of every claim in order to see if there was an argument? That is, was this originally a list of the form: “Memory palaces are a bad idea because they produce memorization at the expense of ___”?
Does anyone know what exercise I’m talking about? I think it was in the Sequences.
I don’t think that their principal goal is to doxx him. But there is a big difference between a habit and a rule. It’s not that they used the name without thinking about it, but they specifically rejected his complaint and said that they were just following orders.
On many other places I see people discussing this, they point out that the reporter’s claim that there is an NYT policy is a bald-faced lie. You are the first person I have seen that took it at face value. This LW discussion is striking because no one else acknowledges the claim at all. I think that they believe that it is a lie, but don’t want to rudely point that out, so they pretend it was not uttered.
Added, next day: I estimate that 99% of the time that NYT writes about someone with a professional pseudonym, they treat it as a real name. 1% of the time, they note that it is a pseudonym, and 1/10 of those times (1/1000 of all times) they print the real name.
Seriously, 99% of the time. I am not being hyperbolic. The main source of uncertainty is how often they write about someone with a professional pseudonym. I estimate that NYT writes about someone with a professional pseudonym every day.
If you want to clarify, edit your original post.
except trusting that he isn’t picking and choosing his arguments
Well, don’t do that. I told you this before.
What’s his confidence interval?
What’s CBG’s confidence interval? When he says 0.5-1%, does he mean something? Does he mean a confidence interval, or a distribution of “normal” situations or a distribution of more general situations? Or does he not mean anything?
Later on in that thread CBG also acknowledges it may be higher in than 1% in some places and conditions.
It’s nice that he says that, but that’s exactly the situation that you cited him in the other thread, claiming <=1%. I’m guessing that the pseudo-detail is exactly what caused you to not understand his claims. If you don’t know what he claims, how can you assess his work? At least with GC you’re not fooling yourself about what you’ve done.
And I still don’t know what he claims. He seems to claim that NYC had IFR <=1%. Was NYC normal or not? In any event he’s wrong. If NYC defines the upper range, then this affects his conclusion. If NYC doesn’t count, I dunno, but I’m pretty sure that people are equivocating on whether it counts.
The Default Infection Fatality Rate (IFR) Is 0.5%-1%
Why do you believe that? We can only measure IFR in the worst outbreaks, such as NYC and Lombardy, where it was 1-2%. Maybe hospitals that aren’t overrun have half the mortality rate, but how do you know?
America in general could be as high as 1.2% IFR without making the data stop making sense.
What about the data wouldn’t make sense if the IFR were 2% in America ex NYC? Outside of a massive outbreak, we can measure neither deaths nor infections. Sure, if you assume that only 33% of deaths are missed, then we can measure deaths. But why assume that? Isn’t that only true in NYC because there was pressure to record untested pneumonia deaths? Elsewhere there is less attention and less pressure.
You say that like detail is a pure good. “Greg Cochran says 1.2%” is better than any number of words from CBG. Anyhow, you repudiated this. When I pushed you on it, you came up with the number 1.4%.
start with seroprevalence data
Because of false positives, seroprevalence is massively overestimated everywhere that there hasn’t been a massive outbreak. Where there has been a massive outbreak, the IFR is 1-2%. But can we extrapolate to normal outbreaks? If, as widely believed, an overrun medical system has worse mortality, then maybe the normal IFR really is only 0.5-1%. But unless your meta-analysis directly measures that, it is not well-done.
Yes, exactly: this post conflates accuracy and calibration. Thus it is a poor antidote to people who make that mistake.
It is striking how errors in discussions of this topic are systematically in the direction of downplaying the severity. Probably 95% of errors.
assuming a runaway infection we’d have R=3 so ~220M infected
This is a math error. Herd immunity is achieved once 1-1/R of the population is infected. The goal of “flattening the curve” is to just barely reach this number. But in a “runaway” scenario, it is much higher: the epidemic final size of the SIR model is 94%.
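Both numbers follow from standard SIR relations. A sketch, using the R=3 from the quoted comment: the herd-immunity threshold is 1-1/R, while the runaway attack rate z solves the final-size equation z = 1 - exp(-R·z), which can be found by fixed-point iteration:

```python
import math

R0 = 3.0  # reproduction number from the quoted comment

# Herd-immunity threshold: exponential growth stops once
# a fraction 1 - 1/R0 of the population is immune.
herd_threshold = 1 - 1 / R0  # ~66.7%, i.e. the ~220M figure for the US

# Final size of a runaway SIR epidemic: the attack rate z
# satisfies z = 1 - exp(-R0 * z). Iterate to the fixed point.
z = 0.5
for _ in range(100):
    z = 1 - math.exp(-R0 * z)

print(f"herd-immunity threshold: {herd_threshold:.1%}")  # ~66.7%
print(f"runaway final size:      {z:.1%}")               # ~94.0%
```

The gap between 67% and 94% is the "overshoot": infections don't stop at the threshold, they merely stop accelerating there.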
Since Lombardy had a population fatality rate of 0.2%, I’m not going to look at your citations. I assume the problem is that they ignore most of the deaths.
Well, that’s something, but I don’t see how it’s relevant to this thread.
Nothing in academic biology makes sense except in the light of feudalism.
Given Coronavirus IFR <1% then with a US population of 330 million this seems almost certain. I would have put this probability higher if there was a higher option.
If a lot of people get infected, the hospital systems will collapse and the IFR will be higher than 1%, as it was in Wuhan, Lombardy, and NYC. If the whole population gets infected, it will be much higher. Also, the IFR is probably >1% even without collapse.
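The stakes of the disagreement are easy to make concrete. A sketch: the 94% attack rate is the runaway SIR final size for R=3 mentioned earlier, and the IFR values bracket the range argued over in this thread; the death tolls are illustrative, not predictions:

```python
population = 330e6   # US population from the quoted comment
attack_rate = 0.94   # runaway SIR final size at R0 = 3

# Deaths implied by each candidate IFR if the epidemic runs away.
for ifr in (0.005, 0.01, 0.02):
    deaths = population * attack_rate * ifr
    print(f"IFR {ifr:.1%}: ~{deaths / 1e6:.1f}M deaths")
```

Whether the "right" number is 1.5M or 6M deaths is exactly what the IFR dispute is about.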
So you probably won’t convince me that these people know what the claim is, but you haven’t even attempted to convince me that you know what the claim is. Do you see that I asked multiple questions?
Could you give an example?
Could you give an example where the claim is that 50% predictions are less meaningful than 10% predictions?
How do you know that it is about accuracy?
Note that you have rewritten and cherry-picked his predictions.
His precise predictions were all wrong. Maybe Japan was undercounting cases by 5x, but so was everyone else. Cases were rising at 8% per day and deaths were rising at 8% per day. Cases and deaths have both continued to rise at 8% per day. For factual purposes, the best prediction was to trust the data and simply extrapolate.
The consensus that Japan was OK was wrong, but it was directly contradicted by the official data. Exponential growth is bad. 8% per day is unacceptable. But would it be possible to simply point that out? I don’t know. Maybe the only way to get attention was to claim that the data was wrong.
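To see why 8% per day is unacceptable on its face, here is a sketch of what that growth rate implies (no assumptions beyond the 8%/day figure from the official data):

```python
import math

daily_growth = 1.08  # cases multiply by 1.08 each day

# Doubling time at 8% per day.
doubling_days = math.log(2) / math.log(daily_growth)  # ~9 days

# A month of sustained 8% daily growth.
month_factor = daily_growth ** 30  # ~10x

print(f"doubling time:        {doubling_days:.1f} days")
print(f"30-day growth factor: {month_factor:.1f}x")
```

Counts doubling every nine days, tenfold in a month: that is what simply extrapolating the data said, no claim of undercounting required.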
Here is another story of people refusing to acknowledge exponentials.