The motivating example for this post is whether you should say “So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies”, with Quinn arguing that you shouldn’t say it because saying it has bad consequences. The problem is that saying this has very clearly good consequences, which means using it as a test case for thinking about appeals to consequences sets up your intuitions to confuse you.
(It has clearly good consequences because “how much money goes to PADP right now” is far less important than “building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones”. Plus if, say, Animal Charity Evaluators trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you can’t do this if you declare discussing negative information about organizations off limits.)
The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered. As a person who’s strongly pro transparency, it’s hard for me to come up with cases, but there are still contexts where I think that’s probably what’s going on. What if Carter were a researcher who had run a small study on a new infant vaccine and seen elevated autism rates in the experimental group? There’s an existing “vaccines cause autism” meme that is both very probably wrong and very probably harmful, which means Carter should be careful about messaging for their results. Good potential outcomes include:
Carter’s experiment is replicated, confirmed, and the vaccine is not rolled out.
Carter’s experiment fails to replicate, researchers look into it more, and discover that there was a problem in the initial experiment / in the replication / they need more data / etc.
Bad potential outcomes include:
Headlines that say “scientists finally admit vaccines do cause autism”
Because of the potential harmful consequences of handling this poorly, Carter should be careful about how they talk about their results and to whom. Trying to get funding to scale up the experiment, making sure the FDA is aware, letting other researchers know, etc., are all beneficial and have good consequences. Going to the mainstream media with a controversial sell-lots-of-papers story, by contrast, would have predictably bad consequences.
When talking with friends or within your field, it’s hard to think of cases where you shouldn’t just say the interesting thing you’ve found, while with larger audiences and in less truth-oriented cultures you need to start being more careful.
The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered.
The extreme case would be a scientific discovery which enabled anyone to destroy the world, such as the supernova thing in Three Worlds Collide or the thought experiment that Bostrom discusses in The Vulnerable World Hypothesis:
So let us consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? Szilard becomes gravely concerned. He sees that his discovery must be kept secret at all costs. But how? His insight is bound to occur to others. He could talk to a few of his physicist friends, the ones most likely to stumble upon the idea, and try to persuade them not to publish anything on nuclear chain reactions or on any of the reasoning steps leading up to the dangerous discovery. (That is what Szilard did in actual history.)
[...] Soon, figuring out how to initiate a nuclear chain reaction with pieces of metal, glass, and electricity will no longer take genius but will be within reach of any STEM student with an inventive mindset.
Note, I’m not arguing for a positive obligation to always inform everyone (see the last few lines of the dialogue); it’s important for people to use their discernment sometimes.
But, in the case you mentioned, if your study really did find that a vaccine caused autism, by the logic of the dialogue, that casts doubt on the “vaccines don’t cause autism and antivaxxers are wrong and harmful” belief. (Maybe you’re not the only one who has found that vaccines cause autism, and other researchers are hiding it too.) So, you should at least update that belief on the new evidence before evaluating consequences. (It could be that, even after considering this, the new study is likely to be a fluke, and discerning researchers will share the new study within an academic community without going to the press.)
My main objection is that the post is built around a case where Quinn is very wrong in their initial “bad consequences” claim, and that this leads people to have misleading intuitions. I was trying to propose an alternative situation where the “bad consequences” claim was true or closer to true, but where Quinn would still be wrong to suggest Carter shouldn’t describe what they’d found.
(Also, for what it’s worth, I find the Quinn character’s argumentative approach very frustrating to read. This makes it hard to take anything that character describes seriously.)
EDIT: expanded this into https://www.jefftk.com/p/appeals-to-consequences