Fair question.
Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion. Examples:
An N-week follow-up showing that recovered individuals were not shedding virus and/or that close contacts weren’t getting infected. (I’ve gone back and forth on N here; I think six is the minimum, and the longer the better.)
Evidence that the CDC’s webpage guidelines were just for show and we were performing South-Korea-like drive-by screenings (although, uh, that would bring up different concerns).
Properly controlled studies of attempts to get people to use masks, showing that those attempts led to a higher transmission rate.
And evidence that I was wrong on enough assertions would change my mind on the thesis, so I would of course withdraw it.
As to what would change my mind even if I still thought the post was true… If I found it was driving people to listen to worse sources, I would at least regret the order in which we’d published. However, I don’t know how I could know which source was worst without an open sharing of the problems with all of them.
I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind. I’m attracted to the consequentialist framework that says they should be. But in a world where posts like this are discouraged, how can I know what the consequences really are? Maybe people are net-benefiting from their trust in the CDC because it leads them to do things like vaccinate and wash their hands, but how could I trust the numbers saying that? How could I know vaccination and hand washing were even good, if it was possible to suppress evidence that they weren’t?
An option that I think should be on the table (at least to consider) is “the post is accessible to LessWrongers, but requires a log-in, so it can’t go viral among people who have a lot less context”.
This requires a feature we don’t currently have, but it’s one I think we’ll want sooner or later for political stuff, and it’s not that hard to build.
Right now I think this post is basically purely beneficial (I expect the people reading it to think critically about it and to have access to good information), but if I found the post had gone viral I’d become much more uncertain. (This is not to say I’d think it was harmful; I’d just have much wider error bars.)
The level of hand-wringing about this post seems completely out of proportion when there are many thousands of people coming up with all sorts of COVID-related conspiracy theories on Facebook and Twitter. If it went viral, my guess is that it would actually increase trust in the CDC by giving people a more realistic grounding for their vague suspicions.
I think that we should aspire to higher epistemic standards than conspiracy theorists on twitter.
We do, and that’s the point. It’s not “hey, we’re not as bad as them so don’t complain to us!”. It’s that there is already a lot of distrust out there, and giving people something to latch onto with “see, I knew the CDC wasn’t being honest with me!” can keep them from spiraling out of control with their distrust, since at least they know where it ends.
Mild, well-sourced criticism is far more encouraging of trust than no criticism under an obvious threat of censorship, because the alternative isn’t “they must be perfect”; it’s “if they have to hide it, the problems are probably worse than ‘mild’”.
I responded to this on a different thread, but aside from the factual issues, this isn’t “mild, well-sourced criticism.” The post says the CDC is so untrustworthy that we can’t point uninformed people to it as a valid place to learn things, and that there is literally no decent source for what people should do. That’s way beyond what anyone else credible was saying.
Of course we should, but that is irrelevant to the question of whether this post is hazardous if people without LW accounts read it.
Unless there are large enough demographics for which this post looks credible while FB conspiracies do not.
I think that requiring a login would reduce my concern about this post by 95%. But given that it doesn’t require one, you can’t wait for a post to go viral before deciding it was bad; you need to decide beforehand not to post, or to remove the post.
I think such a feature would be really useful, and taking the current case as a reason to prioritize developing it seems prudent.
On “I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind”: this makes me far more convinced that we need to address the infohazard concerns, which I tried to raise, rather than debate the consequences directly, which everyone seems to agree are plausibly very bad, likely just fine, and somewhat unclear. There is also a process issue here: as far as I’ve read, you as an author decided that there were significant potential concerns, decided that they might be minimal enough to be fine, and then, without discussing the issue, unilaterally chose to post anyway.
This seems like the very definition of the Unilateralist’s Curse, and if we can’t get this right here on LessWrong, I’m terrified of how we’ll do with AI risk.
Secondarily, regarding “Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion”: I’ll point to the bizarre blaming of the CDC for HHS’s and the FDA’s failure to allow independent testing.
And for the final point, about masks: there is no compelling reason to say the CDC should be encouraging their use, given that the vast majority of people don’t know how to use them and, from what I have seen and heard from people in biosecurity in the US, are almost all misusing them, so the possible benefit is minimal at best. But even if masks are effective on net, the CDC’s stance would be due to a reasonable disagreement about social priorities during a potential pandemic.
However, I think that you should be more charitable than even that in your post. If there is compelling reason to think that the decisions made were eminently reasonable given the information the CDC had at the time, blaming them for not knowing what you know now, with far more information, seems like a poor reason to say we should not trust them. And other than their general hesitation to be alarmist, which is a real failing but an understandable decision for institutional reasons, “I can see this was dumb in hindsight” seems to cover most of the remaining points you made.