I wrote, “Wouldn’t it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs?”
Zubon replied:
How?
Keeping secrets is a known technology. Overcoming widespread biases is the reason we are here. If you have a way to sway the public on these issues, please, share.
“Keeping secrets” is a vague description of Eliezer’s proposal. “Keeping secrets” might be known technology, but so is “convincing the public to accept risks.” (E.g., they accept automobile fatality rates.) Which of these “technologies” would be easier to deploy in this case? That depends on the particular secrets to be kept and the particular risks to be accepted.
Since Eliezer talked about keeping projects “classified”, I assume that he’s talking about government-funded research. So, as I read him, he wants the government to fund basic, nonmilitary research that carries existential risks, but he wants the projects and the reports on the existential risks to be kept classified.
In a democracy, that means that the public, or their elected representatives, need to be convinced to spend their tax dollars on research, even while they know that they will not be told of the risks, or even of the nature of the specific research projects being funded. That is routine for military research, but there the public believes that the secrecy is protecting them from a greater existential threat. Eliezer is talking about basic research that does not obviously protect us from an existential threat.
The point is really this: To convince the public to fund research of this nature, you will need to convince them to accept risks anyway, since they need to vote for all this funding to go into some black box marked “Research that poses a potential existential threat, so you can’t know about it.” So, Eliezer’s plan already requires convincing the public to accept risks. Then, on top of that, he needs to keep the secrets. That’s why it seems to me that his plan can only be harder than mine, which just requires convincing them to accept risks, without the need for secrecy.
Eliezer,
You point to a problem: “You can’t admit a single particle of uncertain danger if you want your science’s funding to survive. These days you are not allowed to end by saying, ‘There remains the distinct possibility...’ Because there is no debate you can have about tradeoffs between scientific progress and risk. If you get to the point where you’re having a debate about tradeoffs, you’ve lost the debate. That’s how the world stands, nowadays.”
As a solution, you propose that “where human-caused uncertain existential dangers are concerned, the only way to get a real, serious, rational, fair, evenhanded assessment of the risks, in our modern environment, is if the whole project is classified, the paper is written for scientists without translation, and the public won’t get to see the report for another fifty years.”
Wouldn’t it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs? What you propose would require convincing that same public to give the government a blank check to fund secret projects that are being kept secret precisely because they present some existential threat. That might work for military projects, since the public could be convinced that the secrecy is necessary to prevent another existential threat (e.g., commies).
It just seems easier to modify public sentiment so that the public accepts serious discussions of risk. Otherwise, you have to convince them to trust scientists to evaluate those risks accurately and in utter secrecy, even though those scientists will be funded only if they find that the risks are acceptable.
Anyway, I’m unconvinced that secrecy was the cause of the difference in rhetorical style between LA-602 and the RHIC review. What seems more plausible to me is this: Teller et al. could afford to mention that risks remained because they figured that a military project like theirs would get funded anyway. The authors of the RHIC review had no such assurance.