Edit: This is a type of post that should have been vetted with someone for infohazards and harms before being posted, and (Further edit) I think it should have been removed by the authors, though censorship is obviously counterproductive at this point.
Infohazards are a real thing, as is the Unilateralist's Curse. (Edit to add: No, infohazards and the unilateralist's curse are not about existential or global catastrophic risk. Read the papers.) And right now, overall, reduced trust in the CDC will almost certainly kill people. Yes, their current political leadership is crappy, and blameworthy for a number of bad decisions—but it doesn't change the fact that undermining them now is a very bad idea.
Yes, the CDC has screwed up many times, but publicly blaming them, in the middle of a pandemic, for things that were non-obvious (like failing to delay sending out lab kits for further testing), or for things they screwed up that everyone paying attention, including them, now realizes were wrong (like being slow to allow outside testing), seems like exactly the kind of consequence-blind action that LessWrongers should know better than to engage in.
Disclaimer: I know lots of people at CDC, including some in infectious diseases, and have friends there. They are human, and get things wrong under pressure—and perhaps there are people who would do better, but that’s not the question at hand.
You're wrong about this. Trust in the CDC is not a single-variable scale and not a generically useful resource. Trust in the CDC is a mix of people's estimation of the CDC's competence, and their estimation of whether the CDC is biased towards under-response or over-response. It is severely harmful for people to over-estimate the CDC's competence, or to fail to recognize that the CDC is biased towards under-response.
Having previously over-estimated the CDC's competence caused many parties that could have bypassed the CDC to create and deploy tests to fail to respond in time. I expect that decision-makers currently relying on the CDC's competence will implement distancing measures and ban gatherings much too late.
The main reason we might want people to over-estimate the CDC’s competence is that this trust could be used to solve coordination problems. However, the coordination problems that CDC could plausibly solve—closing airports, banning public gatherings, and implementing quarantines—are problems that it solves using legal power, not using generic community trust. To the extent that community trust is required to implement such measures, knowing that the CDC has been consistently biased towards under-response will make it easier, to a greater degree than knowing that they’ve been incompetent will make it harder.
My evaluation is that reducing trust in the CDC has net-positive consequences. But note that, separately, I don’t think an evaluation of this depth is typically required before truthfully speaking about an organization’s credibility. I expect that nearly all of the time, when trading off between speaking truth and empowering an institution, speaking truth is the correct move, and those who think otherwise will be mistaken.
I can't reply to all of this now, but in short: yes, it's not a single-variable scale, and yes, over-reliance on government was on net very harmful up to this point. But no, most of the CDC's influence doesn't come from legal powers, since the US legal system simply doesn't work that way. The CDC cannot tell governors what to do, nor can the president—it's all about their ability to persuade people that they should be listened to, and that ability is going to be critical if and when they get their shit together.
I also think several object-level claims in the post are *wildly* off base, in part showing a basic lack of understanding of the issues, and in part claiming retrospectively that the CDC should have known things that no one knew in advance. It claims the CDC should have approved testing that they weren't part of the approval process for—no, HHS isn't the same as CDC, nor is the FDA. It says the CDC needed to respond with tests quicker, when they evidently should have gone slower in distributing lab tests to ensure they didn't have a false positive problem. And the CDC correctly told people that focusing on masks would be a bad idea, because taking focus away from handwashing is really fucking stupid given the relative efficacy and scalability of the two.
And your prior assumption that on balance it’s better to attack an institution instead of empowering it is predicated on the claims 1) being true, and 2) directly having a bearing on whether the institution should be trusted. In this case, I’ve noted that 1 is false, and for 2, I don’t think that directly attacking their credibility for admitted missteps is either necessary or helpful in telling the truth.
You’re right that the blocks to testing were largely caused by HHS and the FDA, not the CDC. We described that in the text, but I agree that there’s too large a risk someone skims the headings and misses that. I think it’s important to include because it’s entangled with things the CDC did do, but I’ve edited the heading to be clearer.
I think you're confusing the CDC with the US government generally in many more places, and have failed to differentiate in ways that are misleading to readers. And as I said, I think you're blaming the CDC both for things they got right, like discouraging use of already scarce masks by the uninfected public, and for mistakes that are only clear in retrospect.
> This is a type of post that should have been vetted with someone for infohazards and harms before being posted, and pending that, I think it should be deleted by moderators or removed by the authors.
As a response to this, the moderator team did indeed reach out (CC’ing David) to one of the people I think David and I both consider to be among the best informed decision-makers in biorisk. With their permission, here is the key excerpt from their response:
> [Me summarizing David:] David is under the impression that people like Elizabeth and Jim are under an obligation to show posts like this to people in biorisk like yourself and definitely not publish if you had any objections (and that posts that don’t do so should be immediately deleted). Do you think they are under that obligation and that we should delete posts of this type?
> I do not think they are under an obligation to do this. If the post contained object-level nonobvious content related to generating or exacerbating biorisks, I would consider them under a moral obligation to do so, the strength of which would depend on the particulars of the situation.
> If the post overemphasizes the degree to which the CDC has handled the outbreak badly only mildly-to-moderately, or based on lines of argumentation that seem reasonable to me, I'd likely consider that within the reasonable range of opinions/perspectives to hold and share on forums like LW. If the post was highly misleading, such that I thought it communicated the wrong picture of the CDC, then I'd think it was epistemically virtuous for the authors to make top-level updates, and if they refused to do that, writing a counter-post explaining why their post was misleading would seem like a good thing to do, though not something I'd want to demand, even if I were in a position to demand such a thing, which I don't consider myself to be.
Overall, my sense is that you made a prediction that people in biorisk would consider this post an infohazard that had to be prevented from spreading (you also reported this post to the admins, saying that we should "talk to someone who works in biorisk at FHI, Openphil, etc. to confirm that this is a really bad idea").
We have now done so, and in this case others did not share your assessment (and I expect most other experts would give broadly the same response). I think the authors were correct in predicting a response like this if they had run it by anyone else, and I also don't think they were under any obligation to run the post by anyone else. This is not in any way a post that is particularly likely to contain infohazards, and I feel very comfortable with people posting things in this general reference class without running them by anyone else first.
Of course, please continue to point out any errors and ask for factual corrections to the post. And downvote the post if you think it is overall more misleading than helpful. A really big reason for posting things like this publicly is so that we can correct any errors and collectively promote the most important information to our attention. But it seems clear to me that this post does not constitute any significant infohazard that the LessWrong team should prevent from spreading.
I do also think that it is important for LessWrong to have a good infohazard policy, in particular for more object-level ideas, both in biorisk and artificial intelligence. In those domains, I would probably have followed your recommended policy of moving the post back into drafts until we had run it by some more people. I am also happy to chat more with you about what our policies in these more object-level domains should be.
It does seem to me that your comments on this post (and your private messages, and postings to other online groups warning of infohazards in this space) have overall been quite damaging to good discourse norms, and I would strongly request that you stop asking people to take posts down, in particular in the way you have here. Our ability to analyze ideas on the basis of their truth-value, and not the basis of their political competitiveness and implications is one of our core strengths on LessWrong, and it appears to me that in this thread you’ve at least once argued for conclusions you think are prosocial, but not actually true, which I think is highly damaging.
You've also claimed that hard-to-access expert consensus was on your side, when it evidently is not, which I think is also really damaging, since I do think our ability to coordinate around actually dangerous infohazards requires accurate information about the beliefs of our experts, and it seems to me that overall people will walk away with a worse model of that expert consensus after reading your comments.
Most of the consensus that has been built around infohazards in the bio-x-risk community is about the handling of potentially dangerous technological inventions, and major security vulnerabilities. You claimed here (and other places) that this consensus also applied to criticizing government institutions during times of crisis, which I think is wrong, and also has very little chance of actually ever reaching consensus (at least in crises of this type).
The effects of your comments have also been quite significant. The authors of this post have expressed large amounts of stress to me and others. I (and others on the mod team like Ben) have spent multiple hours dealing with this, and overall I expect authority-based criticism like this to have very large negative chilling effects that I think will make our overall ability to deal with this crisis (and others like it) quite a bit worse. You have also continued writing comments like this in private messages and other forums adjacent to LessWrong, with similar negative effects. While I don’t have jurisdiction over those places, I can only implore you strongly to cease writing comments of this type, and if you think something is spreading misinformation, to instead just criticize it on the object-level. Here, on LessWrong, where I do have jurisdiction, I still don’t think I am likely to invoke my moderator powers, but I am going to strong-downvote any future comments like this (and have already done so for this one).
If you do believe that we should change our infohazard policies to include cases like this, then you are welcome to argue for that by making a new top-level post. But please don’t claim that we already have norms, policies and broad buy-in, and that a post like this should have already been taken down, which is just evidently wrong.
I will of course leave Jim and/or Elizabeth to give their thoughts on the ethics of the situation, but I was surprised to see you take this line David, so I wanted to briefly share my perspective with you.
The US govt is majorly failing to deal with coronavirus, and in many worlds the fatalities will be massive (10 million plus). At some point it will be undeniable, and hopefully the CDC and so on will be able to give the important quarantining advice, and I'll support them at that time.
But in the meantime, my honest impression, for the reasons accurately described above about their recommendations (things like saying masks aren't helpful, and claiming that community spread hasn't happened when they've in principle not tested for it), is that they've been dishonest and misleading, leading people to substantially underestimate the risk.
Perhaps you think the post is actually false on those charges, and if so that criticism is good and proper and I endorse it wholeheartedly. But if not, I’m understanding your position to be that we should not point out the dishonesty and misleading information for the greater good. While I can imagine many politicians are indeed in that situation, I always feel that here, on LessWrong, we should try to actually be able to talk honestly and openly about the truth of the matter, and be a place where we can actually build an accurate map and not systematically self-deceive for this reason. That’s my perspective on the matter that leads me to be pretty positively disposed to the above post ethically.
(I'm overloaded with various emergency prep stuff today and can't have a super long convo – should be able to reply tomorrow though.)
(10 hour time zone lags make conversations like this hard.)
My claim is not that it’s certainly true that this is bad, and should not have been said. I claim that there is a reasonable chance that it could be bad, and that for that reason alone, it should have been checked with people and discussed before being posted.
I also claim that the post is incorrect on its merits in several places, as I have responded elsewhere in the thread. BUT, as Bostrom notes in his paper, which people really need to read, infohazards aren’t a problem because they are false, they are a problem because they are damaging. So if I thought this post were entirely on point with its criticisms, I would have been far more muted in my response, but still have bemoaned the lack of judgement in not bothering to talk to people before posting it. But in that case, I might have agreed that while the infohazard concerns were real, they would be outweighed by truth seeking norms on LW. I’m not claiming that we need censorship of claims here, but we do need standards, and those standards should certainly include expecting people to carefully vet potential infohazards and avoid unilateralist curse issues before posting.
I want to be clear with you about my thoughts on this David. I’ve spent multiple hundreds of hours thinking about information hazards, publication norms, and how to avoid unilateralist action, and I regularly use those principles explicitly in decision-making. I’ve spent quite some time thinking about how to re-design LessWrong to allow for private discussion and vetting for issues that might lead to e.g. sharing insights that lead to advances in AI capabilities. But given all of that, on reflection, I still completely disagree that this post should be deleted, or that the authors were taking worrying unilateralist action, and I am happy to drop 10+ hours conversing with you about this.
Let me give my thoughts on the issue of infohazards.
I am honestly not sure what work you think the term is doing in this situation, so I'll recap what it is for everyone following. Historically, there has been a notion that all science is fundamentally good, that all knowledge is good, and that science need not ask ethical questions of its exploration. Much of Bostrom's career has been to draw the boundaries of this idea and show where it is false. For example, one can build technologies that a civilization is not wise enough to use correctly, that lead to degradation of society and even extinction (you and I are both building our lives around increasing the wisdom of society so that we don't go extinct). Bostrom's infohazards paper is a philosophical exercise, asking at every level of organisation what kinds of information can hurt you. The paper itself has no conclusion, and ends with an exhortation toward freedom of speech; its point is simply to help you conceptualise this kind of thing and be able to notice it in different domains. Then you can notice the tradeoff and weigh it properly in your decision-making.
So, calling something an infohazard merely means that it's damaging information. An argument with a false conclusion is an infohazard, because it might cause people to believe that false conclusion. Publishing private information is an infohazard, because it allows adversaries to attack you better, but we still often publish infohazardous private material because it contributes to the common good (e.g. listing our home address on public facebook events helps people burgle your house, but it's worth it to let friends find you). Now, the one kind of infohazard that there is consensus on in the x-risk community that focuses on biosecurity is sharing specific technological designs for pathogens that could kill masses of people, or sharing information about system weaknesses that are presently subject to attack by adversaries (for obvious reasons I won't give examples, but Davis Kingsley helpfully published an example that is no longer true in this post, if anyone is interested). So I assume that this is what you are talking about, as I know of no other infohazard that there is a consensus about in the bio-x-risk space that one should take great pains to silence and punish defectors on.
The main reason Bostrom's paper is brought up in biosecurity is in the context of arguing that specific technological designs for various pathogens and/or damaging systems shouldn't be published or sketched out in great detail. Just as Churchill was shocked by Niels Bohr's plea to share the nuclear designs with the Russians on the grounds that it would lead to the end of all war (Churchill said no, and wondered if Bohr was a Russian spy), it might be possible to have buildable pathogens that terrorists or warring states could use to hurt a lot of people or potentially cause an existential catastrophe. So it would be wise to (a) have careful publication practises that include the option of not publishing details of such biological systems and (b) not publicise how to discover such information.
Bostrom has put a lot of his reputation on this being a worrying problem that you need to understand carefully. If someone on LessWrong were sharing e.g. their best guess at how to design and build a pathogen that could kill 1%, 10% or possibly 100% of the world's population, I would be in quite strong agreement that as an admin of the site I should preliminarily move the post back into their drafts, talk with the person, encourage them to think carefully about this, and connect them to people I know who've thought about this. I can imagine that the person has reasonable disagreements, but if it seemed like the person was actively indifferent to the idea that it might cause damage, then while I can't stop them writing elsewhere on the internet, LessWrong has very good SEO, and I don't want that material to be widely accessible, so it could easily be the right call to remove their content of this type from LessWrong. This seems sensible for the case of people posting mechanistic discussion of how to build pathogens that would be able to kill 1%+ of the population.
Now, you're asking whether we should treat criticism of governmental institutions during a time of crisis in the same category as someone posting pathogen designs or speculating on how to build pathogens that can kill 100 million people. We are discussing something very different, with a fairly different set of intuitions.
Is there an argument here that is as strong as the argument that sharing pathogen designs can lead to an existential catastrophe? Let me list some reasons why this action is in fact quite useful.
Helping people inform themselves about the virus. As I am writing this message, I'm in a house meeting attempting to estimate the number of people in my area with the disease, and what levels of quarantine we need to be at and when we need to do other things (e.g. can we go to the grocery store, can we accept amazon packages, can we use Uber, etc). We're trying to use various advice from places like the CDC and the WHO, and it's helpful to know when I can just trust them to have done their homework, versus treating them as helpful but re-doing their thinking with my own first-principles models in some detail.
Helping necessary institutional change happen. The coronavirus is not likely to be an existential catastrophe. I expect it will likely kill over 1 million people, but is exceedingly unlikely to kill a couple percent of the population, even given hospital overflow and failures of countries to quarantine. This isn't the last hurrah from that perspective, and so a naive maxipok utilitarian calculus would say it is more important to improve the CDC for future existential biorisks than to make sure not to hinder it in any way today. Standard policy advice says that stuff gets done quickly in crisis time, and I think that creating public, common knowledge of the severe inadequacies of our current institutions right now, not ten years later when someone writes a historical analysis, is how improvements and changes are most likely to happen. I want the CDC to be better than this when it comes to future bio-x-risks, and now is a good time to very publicly state very clearly what it's failing at.
Protecting open, scientific discourse. I’m always skeptical of advice to not publicly criticise powerful organisations because it might cause them to lose power. I always feel like, if their continued existence and power is threatened by honest and open discourse… then it’s weird to think that it’s me who’s defecting on them when I speak openly and honestly about them. I really don’t know what deal they thought they could make with me where I would silence myself (and every other free-thinking person who notices these things?). I’m afraid that was not a deal that was on offer, and they’re picking the wrong side. Open and honest discourse is always controversial and always necessary for a scientifically healthy culture.
So the counterargument has to be that there is a strong enough downside possible here. Importantly, the case in which Bostrom shows that information should be hidden and made secret is when sharing it might lead to an existential catastrophe.
Could criticising the government here lead to an existential catastrophe?
I don't know your position, but I'll try to paint a picture, and let me know if this sounds right. I think you think that something like the following is a possibility. This post, or a successor like it, goes viral (virus-based wordplay unintended) on twitter, leading to a consensus that the CDC is incompetent. Later on, the CDC recommends mass quarantine in the US, and the population follows the letter but not the spirit of the recommendation, and this means that many people break quarantine and die.
So that’s a severe outcome. But it isn’t an existential catastrophe.
(Is the coronavirus itself an existential catastrophe? As I said above, this doesn't seem like it's the case to me. Its death rate seems to be around 2% when given proper medical treatment (respirators and the like), and so given hospital overload it will likely be higher, perhaps 3-20% (depending on the variation in age of the population). My understanding is that it will likely peak at a maximum of 70% of any given highly connected population, and it's worth remembering that much of humanity is spread out and not based in cities where people see each other all of the time.
I think the main world in which this is an existential catastrophe is the world where getting the disease does not confer immunity after you lose the disease. This means a constant cycle of the disease amongst the whole population, without being able to develop a vaccine. In that world, things are quite bad, and I’m not really sure what we’ll do then. That quickly moves me from “The next 12 months will see a lot of death and I’m probably going to be personally quarantined for 3-5 months and I will do work to ensure the rationality community and my family is safe and secured” to “This is the sole focus of my attention for the foreseeable future.”
Importantly, I don’t really see any clear argument for which way criticism of the CDC plays out in this world.)
And I know there are real stakes here. Even though you need to go against CDC recommendations today and stockpile, in the future the CDC will hopefully be encouraging mass quarantine, and if people ignore that advice then a fraction of them will die. But there are always life-and-death stakes to speaking honestly about failures of important institutions. Early GiveWell faced the exact same situation, criticising charities saving lives in developing countries. One can argue that this kills people by reducing funding for these important charities. But it was worth it a million times over, because we've coordinated around far more effective charities and saved way more lives. We need to discuss governmental failure here in order to save more lives in the future.
(Can I imagine taking down content about the coronavirus? Hm, I thought about it for a bit, and I can imagine that, if a country was under mass quarantine and people were writing articles with advice about how to escape quarantine and meet people, that would be something we'd take down. There's an example. But criticising the government? It's like a fundamental human right, and not because it would be inconvenient to remove, but because it's the only way to build public trust. It makes no sense to me to silence it.)
The reason you mustn't silence discussion when we think the consequences are bad is that the truth is powerful and has surprising consequences. Bostrom has argued that this principle no longer holds when it comes to existential risk, but in case you think he extends this further, let me quote the end of his paper on infohazards:
> Even if our best policy is to form an unyielding commitment to unlimited freedom of thought, virtually limitless freedom of speech, an extremely wide freedom of inquiry, we should realize not only that this policy has costs but that perhaps the strongest reason for adopting such an uncompromising stance would itself be based on an information hazard; namely, norm hazard: the risk that precious yet fragile norms of truth-seeking and truthful reporting would be jeopardized if we permitted convenient exceptions in our own adherence to them or if their violation were in general too readily excused.
Footnote on Unilateralism
I don't see a reasonable argument that this was close to the kind of situation where writing this would be a dangerous unilateralist action. This isn't a situation where 95% of people think it's bad but 5% think it's good.
If you want to know whether we’ve lifted the unilateralist’s curse here on LessWrong, you need look no further than the Petrov Day event that we ran, and see what the outcome was. That was indeed my attempt to help LessWrong practise and self-signal that we don’t take unilateralist action. But this case is neither an x-risk infohazard nor worrisome unilateralist action. It’s just two people doing their part in helping us draw an accurate map of the territory.
Have you considered whether your criticism itself may have been a damaging infohazard (e.g. in causing people to wrongly place trust in the CDC and thereby dying, in negatively reinforcing coronavirus model-building, in increasing the salience of the “infohazard” concept which can easily be used to illegitimately maintain a state of disinformation, in reinforcing authoritarianism in the US)? How many people did you consult before posting it? How carefully did you vet it?
If you don’t think the reasons I mentioned are good reasons to strongly vet it before posting, why not?
I have discussed the exact issue of public trust in institutions during pandemics with experts in this area repeatedly in the past.
There are risks in increasing the salience of infohazards, and I've talked about this point as well. The consensus in both the biosecurity world, and in EA in general, is that infohazards are underappreciated relative to the ideal, and should be made more salient. I've also discussed the issues with disinformation with experts in that area, and it's very hard to claim that people in general are currently too trusting of government authority in the United States—and the application to LW specifically makes me think that people here are less inclined to trust government than the general public, though their distrust is probably more justifiable. But again, my protest isn't just about die-hard LessWrongers reading the post; it's about the risks if it spreads further.
But aside from that, I think there is no case to be made that the criticisms I noted as off-base on the object level are infohazards. Pointing out that the CDC isn't in charge of the FDA's decision, or pointing out that the CDC distributed tests *too quickly* and had an issue which they corrected, hardly seems problematic.
> The consensus in both the biosecurity world, and in EA in general, is that infohazards are underappreciated relative to the ideal, and should be made more salient.
Note that I pretty strongly disagree with this. I really wish people would talk less about infohazards, in particular when people talk about reputational risks. My sense is that a quite significant fraction of EAs share this assessment, so calling it consensus seems quite misleading.
> I've also discussed the issues with disinformation with experts in that area, and it's very hard to claim that people in general are currently too trusting of government authority in the United States
I also disagree with this. My sense is that on average people are far too trusting of government authority, and much less trust would probably improve things, though it obviously depends on the kind of trust. Trust in the rule of law is very useful. Trust in the economic policies of the United States, or in its ability to do long-term planning, is widespread and usually quite misplaced. I don't think your position is unreasonable to hold, but calling its negation "very hard to claim" seems wrong to me, since again many people I think we both trust a good amount disagree with your position.
For point one, I agree that for reputation discussions, infohazards are probably overused, and I used the term that way here. I should have been clearer about this in my own head, as I was incorrectly lumping different kinds of infohazards together. In retrospect I regret bringing this up, rather than focusing on the fact that I think the post was misleading in a variety of ways on the object level.
For point two, I also think you are correct that there is not much consensus in some domains—when I say they are clearly not trusting enough, I should have explicitly (instead of implicitly) made my claim about public health. So in economics, governance, legislation, and other places, people are arguably too trusting overall—not obviously, but at least arguably. The other side is that most people who aren’t trusting of government in those areas are far too overconfident in crazy pet theories (gold standard, monarchy, restructuring courts, etc.) compared to what government espouses—just as they are in public health. So I’m skeptical of the argument that lower trust in general, or more assumptions that the government is generically probably screwing up in a given domain, would actually be helpful.
Cool, then I think we mostly agree on these points.
I do want to say that I am very grateful for your object-level contributions to this thread. I think we can probably get to a stage where we have a version of the top-level post that we are both happy with, at least in terms of its object-level claims.
Thanks for answering. It sounds like, while you have discussed general points with others, you have not vetted this particular criticism. Is there a reason you think a higher standard should be applied to the original post?
In large part, I think there needs to be a higher standard for the original post because it got so many things wrong. And at this point, I’ve discussed this specific post, and had my judgement confirmed three times by different people in this area who don’t want to be involved. But also see my response to Oliver below where I discuss where I think I was wrong.
The underlying statistical phenomenon is just regression to the mean: if people aren’t perfect about determining how good something is, then the one who does the thing is likely to have overestimated how good it is.
I agree that people should take this kind of statistical reasoning into account when deciding whether to do things, but it’s not at all clear to me that the “Unilateralist’s Curse” catchphrase is a good summary of the policy you would get if you applied this reasoning evenhandedly: if people aren’t perfect about determining how bad something is, then the one who vetoes the thing is likely to have overestimated how bad it is.
In order for the “Unilateralist’s Curse” effect to be more important than the “Unilateralist’s Blessing” effect, I think you need additional modeling assumptions to the effect that the payoff function is such that more variance is bad. I don’t think this holds for the reference class of “blog posts criticizing institutions”? In a world with more variance in blog posts criticizing institutions, we get more good criticisms and more bad criticisms, which sounds like a good deal to me!
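Here's a minimal simulation sketch of the statistics at issue (everything in it is an illustrative assumption on my part: Gaussian noise, five agents, arbitrary thresholds, nothing from Bostrom's paper): each agent estimates an action's true value with independent error, and we compare what actually gets done under a unilateral rule, a majority rule, and a unanimity (veto) rule.

```python
import random
import statistics

def simulate(decide, n_agents=5, noise_sd=1.0, trials=200_000):
    """Mean true value of the actions actually taken, and how often action happens."""
    taken = []
    for _ in range(trials):
        true_value = random.gauss(0, 1)  # how good the action really is
        # Each agent sees the true value plus independent estimation noise.
        estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_agents)]
        if decide(estimates):
            taken.append(true_value)
    return statistics.mean(taken), len(taken) / trials

rules = {
    "unilateral (any yes)": lambda es: max(es) > 0,                 # one optimist acts
    "majority vote":        lambda es: statistics.median(es) > 0,
    "unanimity (any veto)": lambda es: min(es) > 0,                 # one pessimist blocks
}

for name, decide in rules.items():
    mean_value, rate = simulate(decide)
    print(f"{name:22s} mean value of taken actions = {mean_value:+.3f}, action rate = {rate:.2f}")
```

Under these toy assumptions the unilateral rule acts most often and its actions have the lowest average true value (the curse, via regression to the mean), while the veto rule acts rarely and forgoes many good actions (the mirror-image effect). Which rule comes out ahead depends on exactly the payoff-function assumptions I'm pointing at.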
I think you should read Bostrom’s actual paper for why this is a more compelling argument specifically when dealing with large risks. And it is worth noting that the reference class isn’t “blog posts criticizing institutions”—which I’m in favor of—it’s “blog posts attacking the credibility of the only institution that can feasibly respond to an incipient epidemic just as the epidemic is taking off and the public is unsure what to do about it.”
I would support a policy where, if an LW post starts to go viral, then original authors or mods are encouraged to add disclaimers to the top of posts that they wouldn’t otherwise need to add when writing for the LW audience. As SSC sometimes does.
I would not support a policy where LW authors always preemptively write for a general audience.
Here we face the tragedy of "reference class tennis". When you don't know how much to trust your own reasoning vs. someone else's, you might hope to defer to the historical record for some suitable reference class of analogous disputes. But if you and your interlocutor disagree on which reference class is appropriate, then you just have the same kind of problem again.
I really don’t think this is a reference class tennis problem, given that I’m criticizing a specific post for specific reasons, not making an argument that we should judge this on the basis of a specific reference class.
And given that, I'm still seeing amazingly little engagement with the object-level question of whether the criticisms I noted are valid.
I want to apologize, and make sure there is a clear record of what I think both on the object level, and about my comment, in retrospect. (For other mistakes I made, not related to this comment, see here.)
I handled this very poorly, and wasted a significant amount of people's time. I still think that the claims in the post were materially misleading (and think some of the claims still are, after edits). The authors replaced the section saying not to listen to the CDC with a very different disclaimer, which now says: "Notably we're not saying any of the things they do recommend are bad." I think we should have a clear norm that potentially harmful things need a much greater degree of caution than this post displayed. But calling for it to be removed was stupid.
Above and beyond my initial comment, critically, I screwed up by being pissed off and responding angrily below about what I saw as an uninformed and misleading post, and by continuing to reply to comments without due consideration of the people involved in both the original post and the comments. This was in part due to personal biases, and in part due to personal stress, which is not an excuse. This led to what can generously be described as a waste of valuable people's time, at a particularly bad time. I have apologized to some of those involved already, but wanted to do so publicly here as well.
Reviewing the arguments
I initially said the post should have been removed. I also used the term “infohazard” in a way that was alarmist—my central claim was that it was damaging and misleading, not that it was an infohazard in the global catastrophic risk sense that people assumed.
Several counterarguments and responses were advanced against my claim that the post should be taken down. I originally responded poorly, so I want to review them here, along with my view on the strength of each.
1) I should not have been a jerk.
I was dismissive and annoyed about what seemed to me to be many obvious factual errors. My attitude was a mistake. It was also stupid for a number of reasons, and at the very least I should have contacted the authors directly and privately, and been less confrontational. Again, I apologize.
2) Telling people to check with others before posting, and threatening to remove posts which were not so checked, is censorship, which is harmful in other ways.
As I mentioned above, saying the post should be removed was stupid, but I maintain, as I did then, that when a person is unsure about whether saying something is a good idea, and it is consequential enough to matter, they should ask for some outside advice. I think this should be a basic norm, one that LessWrong and the rationality community should not just recommend but, where feasible, try to enforce. I do think that there was a reasonable sense of urgency in getting the message out in this case, and that excuses some level of failure to vet the information carefully.
3) We should encourage people to say true things even when harmful, or as one person said “I’d want people to err heavily on the side of sharing information even if it might be dangerous.”
This stops short of Nietzschean honesty, but I still don't think it holds up well. First, as I said, I think the post was misleading, so this simply does not apply. But the discussion in the comments and privately pushed on this more, and I think it's useful to clarify what I claimed. I agree that we should not withhold information which could be important because of a vague concern, and if this post were correct, it would fall under that umbrella. However, what the post seemed to me to be trying to do was collect misleading statements to make the case that a bad organization is, in fact, bad—playing level 2 regardless of truth. That seems obviously unacceptable. I do not think lying is acceptable in pursuit of level 2 goals in Zvi's explanation of Simulacra, except in dire circumstances.
But the principle advocated here says to default to level 1 brutal / damaging honesty far more often than I think is advisable, not to lie. My initial impression was that the CDC was doing far better than it in fact was, and that the negative impacts were greatly under-appreciated.
I can understand why the balance of how much truth to tell when the effect is damaging is critical, and I think that LessWrong's norms are far better than those elsewhere. I agree that the bare minimum of not actively lying is insufficient, but as I said above, I disagree with others about how far to go in saying things that might be harmful because they are true.
4) We should not attempt to play political games by shielding bad organizations and ignoring or obscuring the truth in order to build trust incorrectly.
I think this is a claim that people should never play level 3. I endorse this. I agree that I was attempting to defend an institution that was doing poorly from claims that it was doing poorly, on the basis that a significant fraction of those claims were unfair. As I said above, that would be a defense if the claims were in fact unfair. In retrospect, the organization was far worse than I thought at the time, as I realized far too late, and discussed more here. On the other hand, many of the claims were in fact misleading, and I don't think that false attacks on bad things are OK either.
This is a very serious concern that we discussed before publishing, especially the parts about masks and potential racial differences. Ultimately we made some accommodations but decided that publishing was the best thing, for the following reason:
The usefulness of trust in the CDC is not independent of the quality of the job the CDC is doing. There is a level of mishandling bad enough that excess trust in the CDC would cost people's lives. I don't know if we're at that level (I sure hope we're not, both selfishly and altruistically), but it is really important to know when we are. And if we shut down information sharing on the assumption that trust in the CDC is good, we rob ourselves of the ability to identify that. Blind (performance of) trust also precludes the possibility that the CDC could be induced into a better response.
I'm curious what information would make you change your mind about trust in the CDC being net positive, and how that information would be accessible.
Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion. Examples:
An N-week follow up showing that recovered individuals were not shedding virus and/or that close contacts weren’t getting infected. (I’ve gone back and forth on N here. I think six is the minimum and the longer the better).
Evidence that the CDC's webpage guidelines were just for show and we were performing South-Korea-like drive-by screenings (although, uh, that would bring up different concerns).
Properly controlled studies of attempts to get people to use masks showing that it led to a higher transmission rate.
And evidence that I was wrong on enough assertions would change my mind on the thesis, so I would of course withdraw it.
As to what would change my mind even if I still thought the post was true… If I found it was driving people to listen to worse sources, I would at least regret the order in which we’d published. However I don’t know how I could know which source was worst without an open sharing of the problems with all of them.
I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind. I’m attracted to the consequentialist framework that says they should be. But in a world where posts like this are discouraged, how can I know what the consequences really are? Maybe people are net-benefiting from their trust in the CDC because it leads them to do things like vaccinate and wash their hands- but how could I trust the numbers saying that? How could I know vaccination and hand washing were even good, if it was possible to suppress evidence that they weren’t?
An option that I think should be on the table (at least to consider) is “the post is accessible to LessWrongers, but requires a log-in, so it can’t go viral among people who have a lot less context”.
This requires a feature we don’t currently have, but I think we’ll want sooner or later for political stuff, and is not that hard to build.
Right now I think this post is basically purely beneficial (I expect the people reading it to think critically about it and to have access to good information), but if I found the post had gone viral I'd become much more uncertain. (This is not to say I'd think it was harmful, I'd just have much wider error bars.)
The level of handwringing about this post seems completely out of proportion when there are many thousands of people coming up with all sorts of COVID-related conspiracy theories on facebook and twitter. If it went viral my guess is that it would actually increase trust in the CDC by giving people a more realistic grounding for their vague suspicions.
We do, and that’s the point. It’s not “hey, we’re not as bad as them so don’t complain to us!”. It’s that there is already a lot of distrust out there, and giving people something to latch onto with “see, I knew the CDC wasn’t being honest with me!” can keep them from spiraling out of control with their distrust, since at least they know where it ends.
Mild, well-sourced criticism is way more encouraging of trust than no criticism under obvious threat of censorship, because the alternative isn't "they must be perfect"; it's "if they have to hide it, the problems are probably worse than 'mild'".
I responded to this on a different thread, but aside from the factual issues, this isn't "mild, well-sourced criticism." The post says the CDC is so untrustworthy that we can't point uninformed people to it as a valid place to learn things, and that there is literally no decent source for what people should do. That's way beyond what anyone else credible was saying.
I think that requiring a login would reduce my concern about this post by 95%. But given that it doesn't, you can't wait for a post to go viral before deciding it was bad; you need to decide beforehand not to post, or to remove the post.
> I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind.
This makes me far more convinced that we need to address the infohazard concerns, which I tried to raise, rather than debate consequences directly—which everyone seems to agree are plausibly very bad, likely just fine, and in any case unclear. There is a process issue that I see here: as far as I've read, you as an author decided that there were significant potential concerns, decided that they might be minimal enough to be fine, and then, without discussing the issue, unilaterally chose to post anyway.
This seems like the very definition of the unilateralist's curse, and if we can't get this right here on LessWrong, I'm terrified of how we'll do with AI risk.
Secondarily, for ” Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion,” I’ll point to the bizarre blaming of the CDC for HHS and FDA’s failure to allow independent testing.
And for the final point, about masks, there is no compelling reason to say the CDC should be encouraging their use, given that the vast majority of people don't know how to use them and, from what I have seen/heard from people in biosecurity in the US, are almost all misusing them, so the possible benefit is minimal at best. But even if masks are on net effective, discouraging them would be a reasonable disagreement about social priorities during a potential pandemic.
However, I think that you should be more charitable than even that in your post. If there is compelling reason to think that the decisions made were eminently reasonable given the information CDC had at the time, blaming them for not knowing what you know now, with far more information, seems like a poor reason to say we should not trust them. And other than their general hesitation to be alarmist, which is a real failing but one that is a good decision for institutional reasons, “I can see this was dumb in hindsight” seems to cover most of the remaining points you made.
1) I heard that you actually didn't ignore the unilateralist's curse when preparing this and got outside feedback.
2) The claims were both correct and relevant to the CDC (see my response to jimrandomh).
I'd change my mind about the CDC if I were convinced that these (or similar) criticisms were correct, as above, and were fair criticisms given that you're speaking post-hoc, from the epistemically superior vantage point of having far more information than they did when they made their decisions. And remember that the CDC is an organization with legal constraints that make them unable to do some of the things you think are good ideas, and that they have been operating under a huge staff shortage due to years of a hiring freeze and budget cuts.
> And remember that the CDC is an organization with legal constraints that make them unable to do some of the things you think are good ideas, and that they have been operating under a huge staff shortage due to years of a hiring freeze and budget cuts.
These sound like reasons to trust the CDC even less, is that what you meant?
For me it is, indeed, a reason to put less weight on their analysis, and to expect less useful work/analysis to be done by them in the short/medium term.
But I think this consideration also weakens certain types of arguments about the CDC's lack of judgment/untrustworthiness. For example, arguments like "they did this, but should have done better" lose part of their bayesian weight when the organization likely made a lot of decisions under time pressure and other constraints. And things are more likely to go wrong if you're under-staffed and hence have to prioritize more aggressively.
I don't expect to have good judgment here, but it seems to me that "testing kits the CDC sent to local labs were unreliable" might fall into this category. It might have been the right call for them to distribute tests quickly and ~skip ensuring that the tests didn't have a false positive problem.
A better example: one might criticize the CDC for a lack of advice aimed at vulnerable demographics. But that absence might result not from a lack of judgment but from political constraints. E.g. jimrandomh writes:
> Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House's direction.
Update: this might be indicative of other negative characteristics of the CDC (which might contribute to unreliability), but I don't know enough about the US government to assess it.
If and to the degree and in the circumstances and ways that the CDC is trustworthy, I desire to believe that the CDC is trustworthy.
If and to the degree and in the circumstances and ways that the CDC is untrustworthy, I desire to believe that the CDC is untrustworthy.
Let me not become attached to beliefs I may not want.
If you tell me that my statement that someone else is lying to us about important factual information that we need to get right in order to keep us and our friends and loved ones safe is true but harmful, and I need to delete my statement, because it is important that people believe the lying liars who are lying for our own good, and I should exercise prior restraint before I point out such things?
I too am surprised by this objection coming from David. But I also want to point out that it seems like it is mostly David’s objection, and the vast majority here are supportive of the post.
It also seems like David thinks the post contains errors, and he says he would not have been anything like this vocal otherwise. Obviously we should work out quickly whether or not the post does contain errors, and correct any we find.
Hopefully this clarifies things for you a bit, but I am making essentially 3 claims. I’d be happy to know which of these you disagree with, if any.
First, to restate the idea of infohazards as it regards the litany of Tarski: it is a personal litany. It does not apply to making public statements, especially ones put in places where the people who will be negatively affected by them will likely see them. Otherwise, I might apply the litany to say "If I am going to unconditionally cooperate in this prisoner's dilemma, I desire that everyone knows I will unconditionally cooperate in this prisoner's dilemma." This is obviously wrong and dumb.
Second, the claims in the post don’t have the simple relationship with trustworthiness that one might assume, and some of the claims are in fact misleading. These bear further discussion.
Most obviously, blaming the CDC for the FDA and HHS not allowing 3rd party detection kits is somewhere between false and misleading.
In some cases, it’s only clear in retrospect that the CDC got this wrong. Perhaps you think they should do better, but that’s different than saying they are untrustworthy, or not credible.
There is a difference between “these facts make the CDC look bad” and “the CDC is untrustworthy.” As I said elsewhere in comments, a number of points here are in that category.
There are situations where the CDC did basically exactly the right thing, and the claim that they are untrustworthy is based on bad analysis. An example is discouraging use of face masks, which is exactly the correct thing for them to do given both the limited supply, and the fact that most people who are buying and hoarding them aren't going to use them correctly. They didn't even misrepresent the evidence—there really is evidence that community use of face masks doesn't help. And even if that evidence is wrong, the fact that the CDC makes good public recommendations seems like a really bad reason to encourage people to distrust them.
Other places, they did the right thing, and are being blamed for the fact that things went wrong. For example, distributing testing kits quickly was really important, so they did. The fact that the problem with one of the supplied chemicals was detected before any kits were used seems like a great reason to think the CDC is doing a good job, both in rushing, and in catching their mistake before the kits started being used.
Third, given that the authors said they realized it might be bad, this should never have been posted without discussion with someone external. Instead, they went ahead and posted it without asking anyone. Lesswrong should have higher standards than this.
> Third, given that the authors said they realized it might be bad, this should never have been posted without discussion with someone external.
For example…?
Suppose I’m a Less Wrong member who sometimes makes posts. Suppose I have some thoughts on this whole virus thing and I want to write down those thoughts and post them on Less Wrong.
You’re suggesting that after I write down what I think, but before I publish the post, I should consult with “someone external”.
But with whom? Are you proposing some general guideline for how to determine when a post should go through such consultation, and how to determine with whom to consult, and how to consult with them? If so, please do detail this process. I, for one, haven’t the foggiest idea how I would, in the general case, discern when a to-be-published post of mine needs to be vetted by some external agent, and how to figure out who that should be, etc.
This whole business of having people vet our posts seems like it’s easy to propose in retrospect as a purported unsatisfied criterion of posting a given post, but not so easy to satisfy in prospect. Perhaps I’m misunderstanding you. In any case, I should like to read your thoughts on the aforesaid guidelines.
(By the way, what assurances of vetting would satisfy you? Suppose the OP had contained a note: “This post has been vetted by X.”. And suppose otherwise the post were unchanged. For what value(s) of X would you now have no quarrel with the post?)
I'm proposing that literally anyone in the EA biosecurity world would have been a good place to start. Almost any of them would either have a response, or have a much better idea of who to ask. Just as with "hey, I have an idea for how someone could misuse AI," running a potentially dangerous idea by almost anyone in AI safety is enough for them to say either "I really wouldn't worry," or "maybe ask person Y," or "Holy shit, no."
As for what value of X: I'd be happy if basically anyone who had done work in biosecurity was asked. Anyone who signed up for / attended the Catalyst summit, for example. Or anyone who has posted about biosecurity on the EA forum. I know most of them, and on the whole I trust their judgement. Maybe I'm wrong, but in this case, I think most of them would say either that the post needed to be edited, or that it should be checked with someone at Open Phil or FHI before posting, since it's potentially a bad idea.
> Most obviously, blaming the CDC for the FDA and HHS not allowing 3rd party detection kits is somewhere between false and misleading.
Please support this claim. It seems obvious that they shat the bed (I don't know which agency; let god sort them out for now, history and FOIA requests will sort them out in the future). It seems obvious from reading the news that many, many local and commercial labs would have been ready with capacity a lot sooner than they are if the FDA/CDC/HHS conglomerate had gotten out of the way sooner.
It’s quite plausible that this is due to Trump pressure, history will sort this out, but my estimation of guilt will likely just move from “weasel” to “weak for not resisting”, and the facts remain the same
The CDC is not the same as HHS or the FDA: they have different staff, are in different locations, and have different goals (42 USC 6a versus 42 USC 43 and 21 USC).
Given that, I’m not sure why we should trust the CDC more or less because of the actions of the FDA. I’m not sure why this claim needs further support. Note that the CDC has no legal or other authority over what tests non-federal-government laboratories can perform. They do have oversight over certain types of labs from a biosafety standpoint, but that’s mostly irrelevant to allowing them to do tests, and there is no claim that the CDC banned research. And if we are asking the question that this post purports to answer—should we trust the CDC—it makes quite a difference whether the decision being discussed was something they had control over.
… many local and commercial labs would have been ready with capacity a lot sooner than they are if FDA/CDC/HHS conglomerate got out of the way sooner.
If you want to know whether the “FDA/CDC/HHS conglomerate” should be blamed, I’d ask whether you think they are all the same thing, or whether the question is incoherent. As noted above, they aren’t the same, so I claim the question is mostly incoherent. You might suggest that they are all part of the same government, so they should be lumped together. I’d suggest that you could ask whether you should trust the “DR_Manhattan/Davidmanheim/Elizabeth/jimrandomh conglomerate” in our judgement about whether to differentiate between these agencies. Clearly, our judgement differs, but we’re all part of the same web site, so maybe we can all be lumped together. If that doesn’t make sense, good.
All data I’ve seen indicates it was a poor interplay between the FDA and HHS that caused the CDC to be the only source of tests (because FDA/HHS were the ones with the legal power to do so, and it’s recorded that they used it). It’s included on this list because it interacted with decisions the CDC *did* make. I don’t think it’s misleading, because we noted which agency did what, and have since edited the section header to make it clear even to skimmers.
It interacted with them, but it’s not clear to me that it interacted in a way that’s relevant to the credibility of the CDC.
The examples are “a list of actions from the CDC that we believe are misleading or otherwise indicative of an underlying problem”, but this isn’t an action from the CDC and it doesn’t obviously indicate a problem at the CDC.
Note to downvoters: While I disagree with this comment, it expresses a real concern and opens a conversation that does very much need to happen. So I’ve upvoted it back out of the negatives, and think it should probably stay positive.
This is a harmful post, and should be deleted or removed.
This was outside of LW norms. It came off as a blunt attempt to shut down discussion, with very little in the way of justification for doing so. This is in no way a clear-cut infohazard, and even if it were, I’m not convinced that shutting down discussion of things that might be infohazards is a good policy, especially on a relatively obscure site centered around truth-seeking. Statements this confident about issues this complicated should only be made after some extensive analysis and discussion of the situation. jtm’s presentation of the issue struck me as far more tempered and far less adversarial. I’d encourage Davidmanheim to replace his comment with a more fleshed-out version of his position.
I am shocked to hear that people need proof something is an infohazard before deciding that the issue needs to be discussed BEFORE posts like this go live. I see no evidence that any such discussion occurred, and in fact the responses above seem to indicate that it didn’t.
But I did change the phrasing, so as not to claim I was trying to shut down discussion. The point I was making, however, remains.
I am shocked to hear that people need proof something is an infohazard before deciding that the issue needs to be discussed BEFORE posts like this go live.
I think there are a few issues here:
1. When deciding to take down a post due to infohazard concerns, what should that discussion look like?
2. How thorough should the vetting process for a post be before it gets posted, especially given infohazard and unilateralist’s curse considerations?
3. Is this post an infohazard, and if so, how dangerous is it?
My previous comment was with regards to 1.
With regards to 2, it’s a matter of thresholds. Especially on this forum, I’d want people to err heavily on the side of sharing information even if it might be dangerous. I wouldn’t want people to stop themselves from sharing information unless the potential infohazard was extremely dangerous or a threat to the continued existence of this community. This is mainly due to the issues that crop up once you start blinding yourself. As I understand it, 2 people discussed this issue before posting, and deemed it worthwhile to post anyway. To me, that seems like more than enough caution for the level of risk that I’ve seen associated with this post. Granted, I don’t think the authors took the unilateralist’s curse into account, and that’s something everyone should keep in mind when thinking about posting potential infohazards. It would also be nice to have some sort of “expanding circles of vetting,” where a potentially spicy post starts off visible only to select individuals, then to people above a certain karma threshold, then behind a login to the rest of the LW community, and then becomes a post viewable and linkable by the general public.
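To make the “expanding circles” idea concrete, here is a minimal sketch of what the access tiers might look like. Everything here is hypothetical illustration; nothing about LessWrong’s actual data model or codebase is assumed.

```python
from enum import Enum

class Visibility(Enum):
    INVITE_ONLY = 0   # named individuals chosen by the author
    KARMA_GATED = 1   # accounts above a karma threshold
    LOGGED_IN = 2     # anyone with an account
    PUBLIC = 3        # anyone, indexed by search engines

def can_view(visibility, viewer, karma_threshold=100, invitees=()):
    """Return True if `viewer` (None when anonymous) may see the post.

    Widening the circle means moving the post to a higher Visibility
    tier; every reader allowed at one tier stays allowed at the next.
    """
    if visibility is Visibility.PUBLIC:
        return True
    if viewer is None:
        return False
    if visibility is Visibility.LOGGED_IN:
        return True
    if visibility is Visibility.KARMA_GATED:
        return viewer.karma >= karma_threshold
    return viewer.name in invitees
```

The design choice worth noting is that the tiers nest, so “promoting” a post never revokes anyone’s access; it only adds readers.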
With regards to 3: AFAIK, the issue became politicized the second the president held a press conference claiming the CDC was doing a good job. This prompted every political actor set against the current presidency to go out of their way to find everything wrong with the CDC and the government’s overall response to the pandemic. They then started shouting their findings from the rooftops. But this is only after much larger forums than LW had been questioning the response of both the CDC and the WHO for quite a while.
The cat left the bag a month ago, and in a significantly less levelheaded and more memetically virulent form. This post won’t change that, and is at worst a concise summary of what has already been said.
1) Yes, this discussion is important, but it should have taken place before a post like this was posted.
2) The standard for something that is admittedly an infohazard can’t be “the authors themselves think the thing they are doing is a good idea.” And knowing most of the people in this space, I strongly suspect that if anyone in biosecurity in EA had read this, they would have said that it needs at least further consideration and a rewrite. Perhaps I’m wrong, but I think it is reasonable to ask someone in the relevant area, rather than simply having the authors discuss it.
People have many avenues of vetting things before making a public post—it’s not like the set of people in EA who work on biosecurity is a secret, and they have publicly offered to vet potential infohazards in other places.
3) I disagree with you that most of the criticism in other places was of the form exhibited here. Claiming we shouldn’t trust the CDC seems dangerous to me. And this post isn’t simply repeating what others have said. Liberal news sources often note that the Trump administration isn’t trustworthy, or say that the CDC has screwed up, but as far as I have seen, they *don’t* claim that the organization is fundamentally untrustworthy, as this post does. In my view, the central framing changes the tone of the rest of the points from “the CDC doesn’t get everything right, and we should be cautious about blindly accepting their claims” to “the CDC is fundamentally so broken that it should be actively ignored.” While I’m sure that some lesswrong readers are going to marginally and responsibly update their beliefs in light of the new information presented here based on their well-reasoned understanding of the US government and its limitations, many readers will not.
Would your opinion change significantly if we changed the wording to highlight that this is an opinion on the trustworthiness of the CDC in this moment, with these constraints, rather than a fundamental property of the CDC?
I read that exactly the opposite way—it says they discussed it, not that they consulted anyone external, much less that they checked with people and were told that there was a consensus that this would be fine. Unilateralist curse isn’t solved by saying “I thought about it, and talked with my co-author, and this seems OK.”
I also think several object-level claims in the post are *wildly* off base, in part showing a basic lack of understanding of the issues, and in part claiming retrospectively that they should have known things that no one knew in advance. It claims the CDC should have approved testing that it wasn’t part of the approval process for—no, HHS isn’t the same as the CDC, nor is the FDA. It says the CDC needed to respond with tests quicker, but they evidently should have gone slower with distributing lab tests to ensure they didn’t have a false positive problem. The CDC correctly told people that focusing on masks would be a bad idea, because taking focus away from handwashing is really fucking stupid given the relative efficacy and scalability of the two.
And your prior assumption that on balance it’s better to attack an institution instead of empowering it is predicated on the claims 1) being true, and 2) directly having a bearing on whether the institution should be trusted. In this case, I’ve noted that 1 is false, and for 2, I don’t think that directly attacking their credibility for admitted missteps is either necessary or helpful in telling the truth.
You’re right that the blocks to testing were largely caused by HHS and the FDA, not the CDC. We described that in the text, but I agree that there’s too large a risk someone skims the headings and misses that. I think it’s important to include because it’s entangled with things the CDC did do, but I’ve edited the heading to be clearer.
I think you’re confusing the CDC with the US government generally in many more places, and have failed to differentiate in ways that are misleading to readers. And as I said, I think you’re both blaming the CDC for things they got right, like discouraging use of already scarce masks by the uninfected public, and wrong to blame the CDC for mistakes that are only clear in retrospect.
Okay, what are those places?
As a response to this, the moderator team did indeed reach out (CC’ing David) to one of the people I think David and I both consider to be among the best informed decision-makers in biorisk. With their permission, here is the key excerpt from their response:
Overall, my sense is that you made a prediction that people in biorisk would consider this post an infohazard that had to be prevented from spreading (you also reported this post to the admins, saying that we should “talk to someone who works in biorisk at FHI, Openphil, etc. to confirm that this is a really bad idea”).
We have now done so, and in this case others did not share your assessment (and I expect most other experts would give broadly the same response). I think the authors were correct in predicting a response like this if they had run it by anyone else, and I also don’t think they were under any obligation to run the post by anyone else. This is not in any way a post that is particularly likely to contain infohazards, and I feel very comfortable with people posting posts in this general reference class without running them by anyone else first.
Of course, please continue to point out any errors and ask for factual corrections to the post. And downvote the post if you think it is overall more misleading than helpful. A really big reason for posting things like this publicly is so that we can correct any errors and collectively promote the most important information to our attention. But it seems clear to me that this post does not constitute any significant infohazard that the LessWrong team should prevent from spreading.
I do also think that it is important for LessWrong to have a good infohazard policy, in particular for more object-level ideas, both in biorisk and artificial intelligence. In those domains, I would have probably followed your recommended policy of drafting the post until we had run the post by some more people. I am also happy to chat more with you about what our policies in these more object-level domains should be.
It does seem to me that your comments on this post (and your private messages, and postings to other online groups warning of infohazards in this space) have overall been quite damaging to good discourse norms, and I would strongly request that you stop asking people to take posts down, in particular in the way you have here. Our ability to analyze ideas on the basis of their truth-value, and not the basis of their political competitiveness and implications is one of our core strengths on LessWrong, and it appears to me that in this thread you’ve at least once argued for conclusions you think are prosocial, but not actually true, which I think is highly damaging.
You’ve also claimed that hard-to-access expert consensus was on your side, when it evidently is not, which I think is also really damaging, since I do think our ability to coordinate around actually dangerous infohazards requires accurate information about the beliefs of our experts, and it seems to me that overall people will walk away with a worse model of that expert consensus after reading your comments.
Most of the consensus that has been built around infohazards in the bio-x-risk community is about the handling of potentially dangerous technological inventions, and major security vulnerabilities. You claimed here (and other places) that this consensus also applied to criticizing government institutions during times of crisis, which I think is wrong, and also has very little chance of actually ever reaching consensus (at least in crises of this type).
The effects of your comments have also been quite significant. The authors of this post have expressed large amounts of stress to me and others. I (and others on the mod team like Ben) have spent multiple hours dealing with this, and overall I expect authority-based criticism like this to have very large negative chilling effects that I think will make our overall ability to deal with this crisis (and others like it) quite a bit worse. You have also continued writing comments like this in private messages and other forums adjacent to LessWrong, with similar negative effects. While I don’t have jurisdiction over those places, I can only implore you strongly to cease writing comments of this type, and if you think something is spreading misinformation, to instead just criticize it on the object-level. Here, on LessWrong, where I do have jurisdiction, I still don’t think I am likely to invoke my moderator powers, but I am going to strong-downvote any future comments like this (and have already done so for this one).
If you do believe that we should change our infohazard policies to include cases like this, then you are welcome to argue for that by making a new top-level post. But please don’t claim that we already have norms, policies and broad buy-in, and that a post like this should have already been taken down, which is just evidently wrong.
I will of course leave Jim and/or Elizabeth to give their thoughts on the ethics of the situation, but I was surprised to see you take this line David, so I wanted to briefly share my perspective with you.
The US govt is majorly failing to deal with coronavirus, and in many worlds the fatalities will be massive (10 million plus). At some point it will be undeniable, and hopefully the CDC and so on will be able to give the important quarantining advice, and I’ll support them at that time.
But in the meantime, my honest impression, for reasons accurately described above about their recommendations (around things like claiming masks aren’t helpful, and claiming that community spread hasn’t happened when they in principle had not been testing for it), is that they’ve been dishonest and misleading, leading people to substantially underestimate the risk.
Perhaps you think the post is actually false on those charges, and if so that criticism is good and proper and I endorse it wholeheartedly. But if not, I’m understanding your position to be that we should not point out the dishonesty and misleading information for the greater good. While I can imagine many politicians are indeed in that situation, I always feel that here, on LessWrong, we should try to actually be able to talk honestly and openly about the truth of the matter, and be a place where we can actually build an accurate map and not systematically self-deceive for this reason. That’s my perspective on the matter that leads me to be pretty positively disposed to the above post ethically.
(I’m overloaded with various emergency prep stuff today and can’t have a super long convo today – should be able to reply tomorrow though.)
(10 hour time zone lags make conversations like this hard.)
My claim is not that it’s certainly true that this is bad, and should not have been said. I claim that there is a reasonable chance that it could be bad, and that for that reason alone, it should have been checked with people and discussed before being posted.
I also claim that the post is incorrect on its merits in several places, as I have responded elsewhere in the thread. BUT, as Bostrom notes in his paper, which people really need to read, infohazards aren’t a problem because they are false, they are a problem because they are damaging. So if I thought this post were entirely on point with its criticisms, I would have been far more muted in my response, but still have bemoaned the lack of judgement in not bothering to talk to people before posting it. But in that case, I might have agreed that while the infohazard concerns were real, they would be outweighed by truth seeking norms on LW. I’m not claiming that we need censorship of claims here, but we do need standards, and those standards should certainly include expecting people to carefully vet potential infohazards and avoid unilateralist curse issues before posting.
I want to be clear with you about my thoughts on this David. I’ve spent multiple hundreds of hours thinking about information hazards, publication norms, and how to avoid unilateralist action, and I regularly use those principles explicitly in decision-making. I’ve spent quite some time thinking about how to re-design LessWrong to allow for private discussion and vetting for issues that might lead to e.g. sharing insights that lead to advances in AI capabilities. But given all of that, on reflection, I still completely disagree that this post should be deleted, or that the authors were taking worrying unilateralist action, and I am happy to drop 10+ hours conversing with you about this.
Let me give my thoughts on the issue of infohazards.
I am honestly not sure what work you think the term is doing in this situation, so I’ll recap what it is for everyone following. In history, there has been a notion that all science is fundamentally good, that all knowledge is good, and that science need not ask ethical questions of its exploration. Much of Bostrom’s career has been to draw the boundaries of this idea and show where it is false. For example, one can build technologies that a civilization is not wise enough to use correctly, that lead to degradation of society and even extinction (you and I are both building our lives around increasing the wisdom of society so that we don’t go extinct). Bostrom’s infohazards paper is a philosophical exercise, asking at every level of organisation what kinds of information can hurt you. The paper itself has no conclusion, and ends with an exhortation toward freedom of speech; its point is simply to help you conceptualise this kind of thing and be able to notice it in different domains. Then you can notice the tradeoff and weigh it properly in your decision-making.
So, calling something an infohazard merely means that it’s damaging information. An argument that has a false conclusion is an infohazard, because it might cause people to believe a false conclusion. Publishing private information is an infohazard, because it allows adversaries to attack you better, but we still often publish infohazardous private material because it contributes to the common good (e.g. listing your home address on public facebook events helps people burgle your house, but it’s worth it to let friends find you). Now, the one kind of infohazard that there is consensus on in the x-risk community that focuses on biosecurity is sharing specific technological designs for pathogens that could kill masses of people, or sharing information about system weaknesses that are presently subject to attack by adversaries (for obvious reasons I won’t give examples, but Davis Kingsley helpfully published an example that is no longer true in this post, if anyone is interested). So I assume that this is what you are talking about, as I know of no other infohazard in the bio-x-risk space that there is a consensus that one should take great pains to silence and punish defectors on.
The main reason Bostrom’s paper is brought up in biosecurity is in the context of arguing that specific technological designs for various pathogens and/or damaging systems shouldn’t be published or sketched out in great detail. Just as Churchill was shocked by Niels Bohr’s plea to share the nuclear designs with the Russians on the grounds that it would lead to the end of all war (Churchill said no, and wondered if Bohr was a Russian spy), it might be possible to have buildable pathogens that terrorists or warring states could use to hurt a lot of people or potentially cause an existential catastrophe. So it would be wise to (a) have careful publication practises that involve the option of not publishing details of such biological systems and (b) not publicise how to discover such information.
Bostrom has put a lot of his reputation on this being a worrying problem that you need to understand carefully. If someone on LessWrong were sharing e.g. their best guess at how to design and build a pathogen that could kill 1%, 10% or possibly 100% of the world’s population, I would be in quite strong agreement that as an admin of the site I should preliminarily move the post back into their drafts, talk with the person, encourage them to think carefully about this, and connect them to people I know who’ve thought about this. I can imagine that the person has reasonable disagreements, but if it seemed like the person was actively indifferent to the idea that it might cause damage, then, while I can’t stop them writing anywhere on the internet, LessWrong has very good SEO and I don’t want that to be widely accessible, so it could easily be the right call to remove their content of this type from LessWrong. This seems sensible for the case of people posting mechanistic discussion of how to build pathogens that would be able to kill 1%+ of the population.
Now, you’re asking whether we should treat criticism of governmental institutions during a time of crisis in the same category that we treat someone posting pathogens designs or speculating on how to build pathogens that can kill 100 million people. We are discussing something very different, that has a fairly different set of intuitions.
Is there an argument here that is as strong as the argument that sharing pathogen designs can lead to an existential catastrophe? Let me list some reasons why this action is in fact quite useful.
Helping people inform themselves about the virus. As I am writing this message, I’m in a house meeting attempting to estimate the number of people in my area with the disease, and what levels of quarantine we need to be at and when we need to do other things (e.g. can we go to the grocery store, can we accept amazon packages, can we use Uber, etc). We’re trying to use various advice from places like the CDC and the WHO, and it’s helpful to know when I can just trust them to have done their homework, versus taking them as helpful but needing to re-do their thinking with my own first-principles models in some detail.
Helping necessary institutional change happen. The coronavirus is not likely to be an existential catastrophe. I expect it will likely kill over 1 million people, but is exceedingly unlikely to kill a couple percent of the population, even given hospital overflow and failures of countries to quarantine. This isn’t the last hurrah from that perspective, and so a naive maxipok utilitarian calculus would say it is more important to improve the CDC for future existential biorisks rather than making sure to not hinder it in any way today. I think that standard policy advice is that stuff gets done quickly in crisis time, and I think that creating public, common knowledge of the severe inadequacies of our current institutions at this time, not ten years later when someone writes a historical analysis, but right now, is the time when improvements and changes are most likely to happen. I want the CDC to be better than this when it comes to future bio-x-risks, and now is a good time to very publicly state very clearly what it’s failing at.
Protecting open, scientific discourse. I’m always skeptical of advice to not publicly criticise powerful organisations because it might cause them to lose power. I always feel like, if their continued existence and power is threatened by honest and open discourse… then it’s weird to think that it’s me who’s defecting on them when I speak openly and honestly about them. I really don’t know what deal they thought they could make with me where I would silence myself (and every other free-thinking person who notices these things?). I’m afraid that was not a deal that was on offer, and they’re picking the wrong side. Open and honest discourse is always controversial and always necessary for a scientifically healthy culture.
So the counterargument would have to be that a sufficiently strong downside is possible here. Importantly, Bostrom shows that information should be hidden and made secret when sharing it might lead to an existential catastrophe.
Could criticising the government here lead to an existential catastrophe?
I don’t know your position, but I’ll try to paint a picture, and let me know if this sounds right. I think you think that something like the following is a possibility. This post, or a successor like it, goes viral (virus based wordplay unintended) on twitter, leading to a consensus that the CDC is incompetent. Later on, the CDC recommends mass quarantine in the US, and the population follows the letter but not the spirit of the recommendation, and this means that many people break quarantine and die.
So that’s a severe outcome. But it isn’t an existential catastrophe.
(Is the coronavirus itself an existential catastrophe? As I said above, this doesn’t seem like it’s the case to me. Its death rate seems to be around 2% when given proper medical treatment (respirators and the like), and so, given hospital overload, it will likely be higher, perhaps 3-20% (depending on the variation in age of the population). My understanding is that it will likely peak at a maximum of 70% of any given highly connected population, and it’s worth remembering that much of humanity is spread out and not based in cities where people see each other all of the time.
I think the main world in which this is an existential catastrophe is the world where getting the disease does not confer immunity after you lose the disease. This means a constant cycle of the disease amongst the whole population, without being able to develop a vaccine. In that world, things are quite bad, and I’m not really sure what we’ll do then. That quickly moves me from “The next 12 months will see a lot of death and I’m probably going to be personally quarantined for 3-5 months and I will do work to ensure the rationality community and my family is safe and secured” to “This is the sole focus of my attention for the foreseeable future.”
Importantly, I don’t really see any clear argument for which way criticism of the CDC plays out in this world.)
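To make the scale of that non-existential outcome concrete, here is a back-of-envelope sketch using only the figures quoted above. The population number is my own rough assumption, treating the whole US as one highly connected population is a deliberate simplification, and none of this is a forecast.

```python
# Deaths = population x attack rate x case fatality rate, using the
# figures quoted above. Real attack rates would be lower for spread-out
# populations, so treat these as rough upper bounds.
population = 330_000_000        # rough US population (assumed)
attack_rate = 0.70              # quoted ceiling for a highly connected population

for cfr in (0.02, 0.03, 0.20):  # with-treatment rate, and the overload range
    deaths = population * attack_rate * cfr
    print(f"CFR {cfr:.0%}: ~{deaths / 1e6:.1f} million deaths")
```

On these assumptions the range runs from roughly 5 million to roughly 46 million deaths in the US: a lot of death, consistent with the claim above, but not extinction.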
And I know there are real stakes here. Even though you need to go against CDC recommendation today and stockpile, in the future the CDC will hopefully be encouraging mass quarantine, and if people ignore that advice then a fraction of them will die. But there are always life-and-death stakes to speaking honestly about failures of important institutions. Early GiveWell faced the exact same situation, criticising charities saving lives in developing countries. One can argue that this kills people by reducing funding for these important charities. But it was worth it a million times over, because we’ve coordinated around far more effective charities and saved way more lives. We need to discuss governmental failure here in order to save more lives in the future.
(Can I imagine taking down content about the coronavirus? Hm, I thought about it for a bit, and I can imagine that, if a country was under mass quarantine, if people were writing articles with advice about how to escape quarantine and meet people, that would be something we’d take down. There’s an example. But criticising the government? It’s like a fundamental human right, and not because it would be inconvenient to remove, but because it’s the only way to build public trust. It makes no sense to me to silence it.)
The reason you mustn’t silence discussion when we think the consequences are bad is that the truth is powerful and has surprising consequences. Bostrom has argued that this principle no longer holds when it’s an existential risk, but if you think he thinks this applies elsewhere, let me quote the end of his paper on infohazards.
Footnote on Unilateralism
I don’t see a reasonable argument that this was anywhere close to the kind of situation in which writing this would be dangerous unilateralist action. This isn’t a situation where 95% of people think it’s bad but 5% think it’s good.
If you want to know whether we’ve lifted the unilateralist’s curse here on LessWrong, you need look no further than the Petrov Day event that we ran, and see what the outcome was. That was indeed my attempt to help LessWrong practise and self-signal that we don’t take unilateralist action. But this case is neither an x-risk infohazard nor worrisome unilateralist action. It’s just two people doing their part in helping us draw an accurate map of the territory.
Have you considered whether your criticism itself may have been a damaging infohazard (e.g. in causing people to wrongly place trust in the CDC and thereby dying, in negatively reinforcing coronavirus model-building, in increasing the salience of the “infohazard” concept which can easily be used to illegitimately maintain a state of disinformation, in reinforcing authoritarianism in the US)? How many people did you consult before posting it? How carefully did you vet it?
If you don’t think the reasons I mentioned are good reasons to strongly vet it before posting, why not?
I have discussed the exact issue of public trust in institutions during pandemics with experts in this area repeatedly in the past.
There are risks in increasing the salience of infohazards, and I’ve talked about this point as well. The consensus in both the biosecurity world, and in EA in general, is that infohazards are underappreciated relative to the ideal, and should be made more salient. I’ve also discussed the issues with disinformation with experts in that area, and it’s very hard to claim that people in general are currently too trusting of government authority in the United States—and the application to LW specifically makes me think that people here are less inclined to trust government than the general public, though it’s probably more justifiable. But again, the protest isn’t about just die-hard lesswrongers reading the post, it’s about the risks.
But aside from that, I think there is no case to be made that the criticisms that I noted are off-base on the object-level are infohazards. Pointing out that the CDC isn’t in charge of the FDA’s decision, or pointing out that the CDC distributed tests *too quickly* and had an issue which they corrected hardly seems problematic.
Note that I pretty strongly disagree with this. I really wish people would talk less about infohazards, in particular when people talk about reputational risks. My sense is that a quite significant fraction of EAs share this assessment, so calling it consensus seems quite misleading.
I also disagree with this. My sense is that on average people are far too trusting of government authority, and much less trust would probably improve things, though it obviously depends on the details of what kind of trust. Trust in the rule of law is very useful. Trust in the economic policies of the United States, or in its ability to do long-term planning, appears widespread and usually quite misplaced. I don’t think your position is unreasonable to hold, but calling its negation “very hard to claim” seems wrong to me, since again many people I think we both trust a good amount disagree with your position.
For point one, I agree that for reputation discussions the term “infohazard” is probably overused, and I used it that way here. I should probably have been clearer about this in my own head, as I was incorrectly lumping infohazards together. In retrospect I regret bringing this up, rather than focusing on the fact that I think the post was misleading in a variety of ways on the object level.
For point two, I also think you are correct that there is not much consensus in some domains—when I say they are clearly not trusting enough, I should have explicitly (instead of implicitly) made my claim about public health. So in economics, governance, legislation, and other places, people are arguably too trusting overall—not obviously, but at least arguably. The other side is that most people who aren’t trusting of government in those areas are far too overconfident in crazy pet theories (gold standard, monarchy, restructuring courts, etc.) compared to what government espouses—just as they are in public health. So I’m skeptical of the argument that lower trust in general, or more assumptions that the government is generically probably screwing up in a given domain, would actually be helpful.
Cool, then I think we mostly agree on these points.
I do want to say that I am very grateful about your object-level contributions to this thread. I think we can probably get to a stage where we have a version of the top-level post that we are both happy with, at least in terms of its object-level claims.
Thanks for answering. It sounds like, while you have discussed general points with others, you have not vetted this particular criticism. Is there a reason you think a higher standard should be applied to the original post?
In large part, I think there needs to be a higher standard for the original post because it got so many things wrong. And at this point, I’ve discussed this specific post, and had my judgement confirmed three times by different people in this area who don’t want to be involved. But also see my response to Oliver below where I discuss where I think I was wrong.
The underlying statistical phenomenon is just regression to the mean: if people aren’t perfect about determining how good something is, then the one who does the thing is likely to have overestimated how good it is.
I agree that people should take this kind of statistical reasoning into account when deciding whether to do things, but it’s not at all clear to me that the “Unilateralist’s Curse” catchphrase is a good summary of the policy you would get if you applied this reasoning evenhandedly: if people aren’t perfect about determining how bad something is, then the one who vetoes the thing is likely to have overestimated how bad it is.
In order for the “Unilateralist’s Curse” effect to be more important than the “Unilateralist’s Blessing” effect, I think you need additional modeling assumptions to the effect that the payoff function is such that more variance is bad. I don’t think this holds for the reference class of “blog posts criticizing institutions”? In a world with more variance in blog posts criticizing institutions, we get more good criticisms and more bad criticisms, which sounds like a good deal to me!
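To illustrate both effects side by side, here is a minimal simulation sketch. The agent count, noise level, and payoffs are arbitrary choices of mine, picked only to make the asymmetry visible, not taken from Bostrom’s paper.

```python
import random

def simulate(num_agents, true_value, noise_sd, trials=100_000):
    """Compare 'act if any agent's noisy estimate is positive'
    (unilateral rule) with 'act only if every estimate is positive'
    (unanimous-veto rule), returning average realized value of each."""
    unilateral, unanimous = 0.0, 0.0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd)
                     for _ in range(num_agents)]
        if max(estimates) > 0:   # one optimist is enough to act
            unilateral += true_value
        if min(estimates) > 0:   # one pessimist is enough to veto
            unanimous += true_value
    return unilateral / trials, unanimous / trials

# Curse: a mildly bad action (true value -1) gets taken most of the
# time under the unilateral rule, almost never under the veto rule.
print(simulate(num_agents=5, true_value=-1.0, noise_sd=2.0))
# Blessing: a mildly good action (+1) is usually vetoed under
# unanimity, so the veto rule leaves value on the table.
print(simulate(num_agents=5, true_value=1.0, noise_sd=2.0))
```

With these numbers the unilateral rule realizes most of the harm of the bad action, while the veto rule forgoes most of the gain from the good one, which is exactly the variance tradeoff described above.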
I think you should read Bostrom’s actual paper for why this is a more compelling argument specifically when dealing with large risks. And it is worth noting that the reference class isn’t “blog posts criticizing institutions”—which I’m in favor of—it’s “blog posts attacking the credibility of the only institution that can feasibly respond to an incipient epidemic just as the epidemic is taking off and the public is unsure what to do about it.”
Is it your impression that the general public reads LessWrong?
What’s the model where an LW blogpost in any way undermines the CDC’s credibility with the general public?
It’s my impression that posts on lesswrong occasionally go viral, as has happened a couple times lately.
Thanks for the answer, that’s a fair point.
I would support a policy where, if an LW post starts to go viral, then original authors or mods are encouraged to add disclaimers to the top of posts that they wouldn’t otherwise need to add when writing for the LW audience. As SSC sometimes does.
I would not support a policy where LW authors always preemptively write for a general audience.
I’ve been away for some time. Any idea what posts he’s talking about here?
We end up on the frontpage of Reddit or HN from time to time. The last post that got a lot of clicks (15k+) was this one: https://www.lesswrong.com/posts/W5PhyEQqEWTcpRpqn/dunbar-s-function
Is there anything more recent? That post was 11 years ago.
It was on reddit like two weeks ago. LW posts have a long shelf life apparently.
Thanks for the suggestion! I just re-skimmed the Bostrom et al. paper (it’s been a while) and wrote up my thoughts in a top-level post.
Here we face the tragedy of “reference class tennis”. When you don’t know how much to trust your own reasoning vs. someone else’s, you might hope to defer to the historical record for some suitable reference class of analogous disputes. But if you and your interlocutor disagree on which reference class is appropriate, then you just have the same kind of problem again.
I really don’t think this is a reference class tennis problem, given that I’m criticizing a specific post for specific reasons, not making an argument that we should judge this on the basis of a specific reference class.
And given that, I’m still seeing amazingly little engagement of the object level question of whether the criticisms I noted are valid.
I want to apologize, and make sure there is a clear record of what I think both on the object level, and about my comment, in retrospect. (For other mistakes I made, not related to this comment, see here.)
I handled this very poorly, and wasted a significant amount of people’s time. I still think that the claims in the post were materially misleading (and think some of the claims still are, after edits). The authors replaced the section saying not to listen to the CDC with a very different disclaimer, which now says: “Notably we’re not saying any of the things they do recommend are bad.” I think we should have a clear norm that potentially harmful things need a much greater degree of caution than this post displayed. But calling for it to be removed was stupid.
Above and beyond my initial comment, critically, I screwed up by being pissed off and responding angrily below about what I saw as an uninformed and misleading post, and I continued to reply to comments without due consideration of the people involved in both the original post and the comments. This was in part due to personal biases, and in part due to personal stress, which is not an excuse. This led to what can generously be described as a waste of valuable people’s time, at a particularly bad time. I have apologized to some of those involved already, but wanted to do so publicly here as well.
Reviewing the arguments
I initially said the post should have been removed. I also used the term “infohazard” in a way that was alarmist—my central claim was that it was damaging and misleading, not that it was an infohazard in the global catastrophic risk sense that people assumed.
Several counterarguments and responses to my claim that the post should be taken down were advanced. I originally responded poorly, so I want to review them here, along with my view on the strength of each claim.
1) I should not have been a jerk.
I was dismissive and annoyed about what seemed to me to be many obvious factual errors. My attitude was a mistake. It was also stupid for a number of reasons, and at the very least I should have contacted the authors directly and privately, and been less confrontational. Again, I apologize.
2) Telling people to check with others before posting, and threatening to remove posts which were not so checked, is censorship, which is harmful in other ways.
As I mentioned above, saying the post should be removed was stupid, but I maintain, as I did then, that when a person is unsure about whether saying something is a good idea, and it is consequential enough to matter, they should ask for some outside advice. I think this should be a basic norm, one that lesswrong and the rationality community should not just recommend but, where feasible, try to enforce. I do think that there was a reasonable sense of urgency in getting the message out in this case, and that excuses some level of failure to vet the information carefully.
3) We should encourage people to say true things even when harmful, or as one person said “I’d want people to err heavily on the side of sharing information even if it might be dangerous.”
This stops short of Nietzschean honesty, but I still don’t think it holds up well. First, as I said, I think the post was misleading, so this simply does not apply. But the discussion, in the comments and privately, pushed on this more, and I think it’s useful to clarify what I claimed. I agree that we should not withhold information which could be important because of a vague concern, and if this post were correct, it would fall under that umbrella. However, what the post seemed to me to be trying to do is collect misleading statements to make it clearer that a bad organization is, in fact, bad—playing level 2 regardless of truth. That seems obviously unacceptable. I do not think lying is acceptable in pursuit of level 2 goals in Zvi’s explanation of Simulacra, except in dire circumstances.
But the principle advocated here says to default to level 1 brutal / damaging honesty far more often than I think is advisable, not to lie. My initial impression was that the CDC was doing far better than it in fact was, and that the negative impacts of the post were therefore greatly under-appreciated.
I can understand why the balance of how much truth to say when the effect is damaging is critical, and think that Lesswrong’s norms are far better than those elsewhere. I agree that the bare minimum of not actively lying is insufficient, but as I said above, I disagree with others about how far to go in saying things that might be harmful because they are true.
4) We should not attempt to play political games by shielding bad organizations and ignoring or obscuring the truth in order to build trust incorrectly.
I think this is a claim that people should never play level 3. I endorse this. I agree that I was attempting to defend an institution that was doing poorly from claims that it was doing poorly, on the basis that a significant fraction of those claims were unfair. As I said above, this would be a defense. In retrospect, the organization was far worse than I thought at the time, as I realized far too late, and discussed more here. On the other hand, many of the claims were in fact misleading, and I don’t think that false attacks on bad things are OK either.
This is a very serious concern, and one we discussed before publishing, especially the parts about masks and potential racial differences. Ultimately we made some accommodations but decided that publishing was the best thing, for the following reason:
The usefulness of trust in the CDC is not independent of the quality of the job the CDC is doing. There is a level of mishandling bad enough that excess trust in the CDC would cost people’s lives. I don’t know if we’re at that level- I sure hope we’re not, both selfishly and altruistically- but it is really important to know when we are. And if we shut down information sharing on the assumption that trust in the CDC is good, we rob ourselves of the ability to identify that. Blind (performance of) trust also precludes the possibility that the CDC could be induced into a better response.
I’m curious what information would make you change your mind about trust in the CDC being net positive, and how that information would be accessible.
It seems like good practice to also share your inverse-of-this-statement – what information would change your mind that the post is net-negative?
Fair question.
Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion. Examples:
An N-week follow up showing that recovered individuals were not shedding virus and/or that close contacts weren’t getting infected. (I’ve gone back and forth on N here. I think six is the minimum and the longer the better).
Evidence that the CDC’s webpage guidelines were just for show and we were performing South-Korea-like drive by screenings (although, uh, that would bring up different concerns).
Properly controlled studies of attempts to get people to use masks showing that it led to a higher transmission rate.
And evidence that I was wrong on enough assertions would change my mind on the thesis, so I would of course withdraw it.
As to what would change my mind even if I still thought the post was true… If I found it was driving people to listen to worse sources, I would at least regret the order in which we’d published. However I don’t know how I could know which source was worst without an open sharing of the problems with all of them.
I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind. I’m attracted to the consequentialist framework that says they should be. But in a world where posts like this are discouraged, how can I know what the consequences really are? Maybe people are net-benefiting from their trust in the CDC because it leads them to do things like vaccinate and wash their hands- but how could I trust the numbers saying that? How could I know vaccination and hand washing were even good, if it was possible to suppress evidence that they weren’t?
An option that I think should be on the table (at least to consider) is “the post is accessible to LessWrongers, but requires a log-in, so it can’t go viral among people who have a lot less context”.
This requires a feature we don’t currently have, but I think we’ll want sooner or later for political stuff, and is not that hard to build.
Right now I think this post is basically purely beneficial (I expect the people reading it to think critically about it and have access to give information), but if I found the post had gone viral I’d become much more uncertain. (this is not to say I’d think it was harmful, I’d just have much wider error bars)
The level of handwringing about this post seems completely out of proportion when there are many thousands of people coming up with all sorts of COVID-related conspiracy theories on facebook and twitter. If it went viral my guess is that it would actually increase trust in the CDC by giving people a more realistic grounding for their vague suspicions.
I think that we should aspire to higher epistemic standards than conspiracy theorists on twitter.
We do, and that’s the point. It’s not “hey, we’re not as bad as them so don’t complain to us!”. It’s that there is already a lot of distrust out there, and giving people something to latch onto with “see, I knew the CDC wasn’t being honest with me!” can keep them from spiraling out of control with their distrust, since at least they know where it ends.
Mild, well-sourced criticism is way more encouraging of trust than no criticism under obvious threat of censorship, because the alternative isn’t “they must be perfect,” it’s “if they have to hide it, the problems are probably worse than ‘mild.’”
I responded to this on a different thread, but aside from the factual issues, this isn’t “mild well sourced criticism.” The post says the CDC is so untrustworthy that we can’t point uninformed people to it as a valid place to learn things, and there is literally no decent source for what people should do. That’s way beyond what anyone else credible was saying.
Of course we should, but that is irrelevant to the question of whether this post is hazardous if people without LW accounts read it.
Unless there are large enough demographics for which this post looks credible while FB conspiracies do not.
I think that requiring a login would reduce my concern about this post by 95%. But given that it isn’t required, you can’t wait for a post to go viral before deciding it was bad; you need to decide not to post, or to remove the post, beforehand.
I think such a feature would be really useful, and taking the current case as a reason to prioritize developing it seems prudent.
This makes me far more convinced that we need to address the infohazard concerns, which I tried to raise, rather than debate consequences directly—which everyone seems to agree are plausibly very bad, likely just fine, and somewhat unclear. There is a process issue that I see here—as far as I’ve read, you as an author decided that there were significant potential concerns, decided that they might be minimal enough to be fine, and then—without discussing the issue—unilaterally chose to post anyways.
This seems like the very definition of Unilateralist’s curse, and if we can’t get this right here on lesswrong, I’m terrified of how we’ll do with AI risk.
Secondarily, for ” Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion,” I’ll point to the bizarre blaming of the CDC for HHS and FDA’s failure to allow independent testing.
And for the final point, about masks: there is no compelling reason to say the CDC should be encouraging their use, given that the vast majority of people don’t know how to use them and, from what I have seen and heard from people in biosecurity in the US, are almost all misusing them, so the possible benefit is minimal at best. But even if masks are effective on net, recommending against them would come down to a reasonable disagreement about social priorities during a potential pandemic.
However, I think that you should be more charitable than even that in your post. If there is compelling reason to think that the decisions made were eminently reasonable given the information CDC had at the time, blaming them for not knowing what you know now, with far more information, seems like a poor reason to say we should not trust them. And other than their general hesitation to be alarmist, which is a real failing but one that is a good decision for institutional reasons, “I can see this was dumb in hindsight” seems to cover most of the remaining points you made.
I’d change my mind about this post if:
1) I heard that you actually didn’t ignore the unilateralist’s curse when preparing this, and got outside feedback.
2) The claims were both correct and relevant to the CDC (see my response to jimrandomh).
I’d change my mind about the CDC if I were convinced that these (or similar) criticisms were correct, as above, and were fair criticisms given the fact that you’re speaking post hoc from an epistemically superior vantage point, having far more information than they did when they made their decisions. And remember that the CDC is an organization with legal constraints that make them unable to do some of the things you think are good ideas, and that they have been operating under a huge staff shortage due to years of a hiring freeze and budget cuts.
These sound like reasons to trust the CDC even less, is that what you meant?
While for me it is, indeed, a reason to put less weight on their analysis or expect less useful work/analysis to be done by them in a short/medium-term.
But I think this consideration, also, weakens certain types of arguments about the CDC’s lack of judgment/untrustworthiness. For example, arguments like “they did this, but should have done better” loses part of its bayesian weight as the organization likely made a lot of decisions under time pressure and other constraints. And things are more likely to go wrong if you’re under-stuffed and hence prioritize more aggressively.
I don’t expect to have good judgment here, but it seems to me that “testing kits the CDC sent to local labs were unreliable” might fall into this category. It might have been the right call for them to distribute tests quickly and largely skip verifying that the tests didn’t have a false-positive problem.
A better example: one might criticize the CDC for a lack of advice aimed at vulnerable demographics. But that absence might result not from a lack of judgment but from political constraints. E.g., jimrandomh writes:
Update: this might be indicative of other negative characteristics of the CDC (which might contribute to unreliability), but I don’t know enough about the US government to assess it.
If and to the degree and in the circumstances and ways that the CDC is trustworthy, I desire to believe that the CDC is trustworthy.
If and to the degree and in the circumstances and ways that the CDC is untrustworthy, I desire to believe that the CDC is untrustworthy.
Let me not become attached to beliefs I may not want.
If you tell me that my statement that someone else is lying to us about important factual information that we need to get right in order to keep us and our friends and loved ones safe is true but harmful, and I need to delete my statement, because it is important that people believe the lying liars who are lying for our own good, and I should exercise prior restraint before I point out such things?
I too am surprised by this objection coming from David. But I also want to point out that it seems like it is mostly David’s objection, and the vast majority here are supportive of the post.
It also seems like David thinks the post contains errors, and he says he would not have been anything like this vocal otherwise. Obviously we should work out quickly whether or not the post does contain errors, and correct any we find.
Hopefully this clarifies things for you a bit. I am making essentially three claims; I’d be happy to know which of these you disagree with, if any.
First, to restate the idea of infohazards as it regards the litany of Tarski: the litany is a personal one. It does not apply to making public statements, especially ones put in places where the people who will be negatively affected by them are likely to see them. Otherwise, I might apply the litany to say “If I am going to unconditionally cooperate in this prisoner’s dilemma, I desire that everyone know I will unconditionally cooperate in this prisoner’s dilemma.” This is obviously wrong and dumb.
Second, the claims in the post don’t have the simple relationship with trustworthiness that one might assume, and some of the claims are in fact misleading. These bear further discussion.
Most obviously, blaming the CDC for the FDA and HHS not allowing third-party test kits is somewhere between false and misleading.
In some cases, it’s only clear in retrospect that the CDC got things wrong. Perhaps you think they should have done better, but that’s different from saying they are untrustworthy, or not credible.
There is a difference between “these facts make the CDC look bad” and “the CDC is untrustworthy.” As I said elsewhere in comments, a number of points here are in that category.
There are situations where the CDC did basically exactly the right thing, and the claim that they are untrustworthy is based on bad analysis. An example is discouraging the use of face masks, which is exactly the correct recommendation for them to make given both the limited supply and the fact that most people who are buying and hoarding masks aren’t going to use them correctly. They didn’t even misrepresent the evidence: there really is evidence that community use of face masks doesn’t help. And even if that evidence is wrong, the fact that the CDC makes good public recommendations seems like a really bad reason to encourage people to distrust them.
In other places, they did the right thing and are being blamed for the fact that things went wrong anyway. For example, distributing testing kits quickly was really important, so they did. The fact that the problem with one of the supplied chemicals was detected before any kits were used seems like a great reason to think the CDC is doing a good job, both in rushing, and in making sure nothing went wrong by catching their mistake before the kits started being used.
Third, given that the authors said they realized it might be bad, this should never have been posted without discussion with someone external. Instead, they went ahead and posted it without asking anyone. Lesswrong should have higher standards than this.
For example…?
Suppose I’m a Less Wrong member who sometimes makes posts. Suppose I have some thoughts on this whole virus thing and I want to write down those thoughts and post them on Less Wrong.
You’re suggesting that after I write down what I think, but before I publish the post, I should consult with “someone external”.
But with whom? Are you proposing some general guideline for how to determine when a post should go through such consultation, and how to determine with whom to consult, and how to consult with them? If so, please do detail this process. I, for one, haven’t the foggiest idea how I would, in the general case, discern when a to-be-published post of mine needs to be vetted by some external agent, and how to figure out who that should be, etc.
This whole business of having people vet our posts seems like it’s easy to propose in retrospect as a purportedly unsatisfied criterion for posting a given post, but not so easy to satisfy in prospect. Perhaps I’m misunderstanding you. In any case, I should like to read your thoughts on the aforesaid guidelines.
(By the way, what assurances of vetting would satisfy you? Suppose the OP had contained a note: “This post has been vetted by X.” And suppose the post were otherwise unchanged. For what value(s) of X would you now have no quarrel with the post?)
I’m proposing that literally anyone in the EA biosecurity world would have been a good place to start. Almost any of them would either have a response, or have a much better idea of whom to ask. Just as with “hey, I have an idea for how someone could misuse AI,” running a potentially dangerous idea by almost anyone in AI safety is enough to get either “I really wouldn’t worry,” or “maybe ask person Y,” or “Holy shit, no.”
As for what value of X, I’d be happy if basically anyone who had done work in biosecurity was asked. Anyone who signed up for or attended the Catalyst summit, for example. Or anyone who has posted about biosecurity on the EA Forum. I know most of them, and on the whole I trust their judgement. Maybe I’m wrong, but in this case I think most of them would say either that the post needs to be edited, or that it should probably be checked with someone at Open Phil or FHI before posting, since it’s potentially a bad idea.
Please support this claim. It seems obvious that they shat the bed (I don’t know which agency; let god sort them out for now, and history and FOIA requests will sort them out in the future). It seems obvious from reading the news that many, many local and commercial labs would have been ready with capacity a lot sooner than they were if the FDA/CDC/HHS conglomerate had gotten out of the way sooner.
It’s quite plausible that this is due to pressure from Trump; history will sort this out, but my estimation of guilt would likely just move from “weasel” to “weak for not resisting,” and the facts remain the same.
The CDC is not the same as HHS or the FDA: they have different staff, different locations, and different goals (42 USC 6a versus 42 USC 43 and 21 USC).
Given that, I’m not sure why we should trust the CDC more or less because of the actions of the FDA, and I’m not sure why this claim needs further support. Note that the CDC has no legal or other authority over what tests non-federal-government laboratories can perform. They do have oversight of certain types of labs from a biosafety standpoint, but that’s mostly irrelevant to allowing them to do tests, and there is no claim that the CDC banned research. And if we are asking the question that this post purports to answer (should we trust the CDC?), it makes quite a difference whether the decision being discussed was something they had control over.
If you want to know whether the “FDA/CDC/HHS conglomerate” should be blamed, I’d ask whether you think they are all the same thing, or whether the question is incoherent. As noted above, they aren’t the same, so I claim the question is mostly incoherent. You might suggest that they are all part of the same government, so they should be lumped together. I’d suggest that you could ask whether you should trust the “DR_Manhattan/Davidmanheim/Elizabeth/jimrandomh conglomerate” in our judgement about whether to differentiate between these agencies. Clearly, our judgement differs, but we’re all part of the same web site, so maybe we can all be lumped together. If that doesn’t make sense, good.
Ok. To clarify, one of them is to blame. Maybe it’s not the CDC. History will tell.
All the data I’ve seen indicates it was a poorly handled interplay between the FDA and HHS that caused the CDC to be the only source of tests (because the FDA and HHS were the ones with the legal power to do so, and it’s recorded that they used it). It’s included on this list because it interacted with decisions the CDC *did* make. I don’t think it’s misleading, because we noted which agency did what, and have since edited the section header to make it clear even to skimmers.
It interacted with them, but it’s not clear to me that it interacted in a way that’s relevant to the credibility of the CDC.
The examples are “a list of actions from the CDC that we believe are misleading or otherwise indicative of an underlying problem”, but this isn’t an action from the CDC and it doesn’t obviously indicate a problem at the CDC.
Am I missing something?
Note to downvoters: While I disagree with this comment, it expresses a real concern and opens a conversation that does very much need to happen. So I’ve upvoted it back out of the negatives, and think it should probably stay positive.
I downvoted because I felt that this:
Was outside of LW norms. It came off as a blunt attempt to shut down discussion, with very little justification for doing so. This is in no way a clear-cut infohazard, and even if it were, I’m not convinced that shutting down discussion of things that might be infohazards is a good policy, especially on a relatively obscure site centered on truth-seeking. Statements this confident about issues this complicated should only be made after extensive analysis and discussion of the situation. jtm’s presentation of the issue struck me as far more tempered and far less adversarial. I’d encourage Davidmanheim to replace his comment with a more fleshed-out version of his position.
I am shocked to hear that people need proof that something is an infohazard before deciding that the issue needs to be discussed BEFORE posts like this go live. I see no evidence that any such discussion occurred, and in fact the responses above seem to indicate that it didn’t.
But I did change the phrasing, so that it no longer reads as an attempt to shut down discussion. The point I was making, however, remains.
I think there are a few issues here:
1) When deciding to take down a post due to infohazard concerns, what should that discussion look like?
2) How thorough should the vetting process for a post be before it gets posted, especially given infohazard and unilateralist’s curse considerations?
3) Is this post an infohazard, and if so, how dangerous is it?
My previous comment was with regards to 1.
With regards to 2, it’s a matter of thresholds. Especially on this forum, I’d want people to err heavily on the side of sharing information even if it might be dangerous. I wouldn’t want people to stop themselves from sharing information unless the potential infohazard was extremely dangerous or a threat to the continued existence of this community. This is mainly due to the issues that crop up once you start blinding yourself. As I understand it, two people discussed this issue before posting, and deemed it worthwhile to post anyway. To me, that seems like more than enough caution for the level of risk that I’ve seen associated with this post. Granted, I don’t think the authors took the unilateralist’s curse into account, and that’s something everyone should keep in mind when thinking about posting infohazards. It would also be nice to have some sort of “expanding circles of vetting,” where a potentially spicy post starts off visible only to select individuals, then to people above a certain karma threshold, then, behind a login, to the rest of the LW community, and finally becomes a post viewable and linkable by the general public.
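To make the “expanding circles of vetting” idea concrete, here is a minimal sketch of how the visibility tiers might be modeled, in TypeScript since LW is a web app. Everything in it is hypothetical: the tier names, the canView and expandTier functions, and the karma threshold are invented for illustration, and none of it reflects LessWrong’s actual codebase.

```typescript
// Hypothetical sketch only: all names here are invented for illustration
// and are not part of LessWrong's actual codebase.

// Visibility circles, ordered from most restricted to fully public.
const TIERS = ["reviewers", "highKarma", "loggedIn", "public"] as const;
type Tier = (typeof TIERS)[number];

interface Viewer {
  loggedIn: boolean;
  karma: number;
  isTrustedReviewer: boolean;
}

interface Post {
  tier: Tier;
  karmaThreshold: number; // assumed cutoff for the "highKarma" circle
}

// Decide whether a given viewer may see a post at its current tier.
function canView(post: Post, viewer: Viewer): boolean {
  switch (post.tier) {
    case "public":
      return true;
    case "loggedIn":
      return viewer.loggedIn;
    case "highKarma":
      return viewer.loggedIn && viewer.karma >= post.karmaThreshold;
    case "reviewers":
      return viewer.isTrustedReviewer;
  }
}

// Widen the circle one step, e.g. after reviewer sign-off or a waiting period.
function expandTier(post: Post): Post {
  const i = TIERS.indexOf(post.tier);
  const next = TIERS[Math.min(i + 1, TIERS.length - 1)];
  return { ...post, tier: next };
}
```

The data model is the easy part; the real questions, such as who counts as a trusted reviewer and what should trigger each expansion (reviewer sign-off, a timer, a vote), are exactly the kind of policy decisions this thread is arguing about.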
With regards to 3: AFAIK, the issue became politicized the second the president held a press conference claiming the CDC was doing a good job. This prompted every political actor set against the current presidency to go out of their way to find everything wrong with the CDC and the government’s overall response to the pandemic. They then started shouting their findings from the rooftops. But this is only after much larger forums than LW had been questioning the response of both the CDC and the WHO for quite a while.
The cat left the bag a month ago, and in a significantly less levelheaded and more memetically virulent form. This post won’t change that, and is at worst a concise summary of what has already been said.
I think these are all good points.
1) Yes, this discussion is important, but it should have taken place before a post like this was posted.
2) The standard for something that is admittedly an infohazard can’t be “the authors themselves think the thing they are doing is a good idea.” And knowing most of the people in this space, I strongly suspect that if anyone in biosecurity in EA had read this, they would have said that it needed at least further consideration and a rewrite. Perhaps I’m wrong, but I think it is reasonable to ask someone in the relevant area, rather than simply having the authors discuss it among themselves.
People have many avenues for vetting things before making a public post; it’s not as though the set of people in EA who work on biosecurity is a secret, and they have publicly offered to vet potential infohazards elsewhere.
3) I disagree with you that most of the criticism in other places was of the form exhibited here. Claiming we shouldn’t trust the CDC seems dangerous to me. And this post isn’t simply repeating what others have said. Liberal news sources often note that the Trump administration isn’t trustworthy, or say that the CDC has screwed up, but as far as I have seen, they *don’t* claim that the organization is fundamentally untrustworthy, as this post does. In my view, the central framing changes the tone of the rest of the points from “the CDC doesn’t get everything right, and we should be cautious about blindly accepting their claims” to “the CDC is fundamentally so broken that it should be actively ignored.” While I’m sure that some lesswrong readers are going to marginally and responsibly update their beliefs in light of the new information presented here based on their well-reasoned understanding of the US government and its limitations, many readers will not.
Would your opinion change significantly if we changed the wording to highlight that this is an opinion on the trustworthiness of the CDC in this moment, with these constraints, rather than a fundamental property of the CDC?
I might be misunderstanding you, but doesn’t Elizabeth explicitly state that this discussion did take place here?
I read that exactly the opposite way: it says they discussed it, not that they consulted anyone external, much less that they checked with people and were told there was a consensus that this would be fine. The unilateralist’s curse isn’t solved by saying “I thought about it, and talked with my co-author, and this seems OK.”
Right, so you set the standard higher than simply talking about it. That wasn’t clear to me from your previous post, but it makes sense.