There is a difference between a 50% “I have no idea” and e.g. a 90% “I am not sure, but probably yes”. Which one would be more appropriate for accepted science that we didn’t personally verify?
Different fields of science are quite different. A field with good epistemics, like most of physics, is very different from a field that largely does what Feynman called cargo culting. His "Cargo Cult Science" speech is quite good.
I’m not sure what the term “accepted science” is supposed to mean. It seems like a propaganda term from people who think that science isn’t about empiricism but about something else. Is a paper ‘accepted science’ by virtue of having been accepted by peer review? Is it about whether the relevant government authority accepts the science? Is it about whether authorities like the NYT do?
In practice, not knowing frequently means that you want to set yourself up to be resilient to claims being wrong; the focus is often not on pinning down the exact probability.
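To make the 50% vs. 90% question concrete, here is a minimal sketch in Python, with all payoffs invented for illustration, of how the two credences interact with the point about resilience:

```python
# A minimal sketch (all numbers invented for illustration): compare two
# plans under a 50% "I have no idea" credence and a 90% "I am not sure,
# but probably yes" credence that some scientific claim is true.

def expected_value(p_true: float, payoff_if_true: float, payoff_if_false: float) -> float:
    """Expected payoff of a plan given a credence that the claim is true."""
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

# Plan A bets heavily on the claim being true; Plan B hedges.
plans = {
    "bet on the claim": (10.0, -20.0),  # great if true, bad if false
    "hedged plan":      (4.0, 2.0),     # decent either way
}

for credence in (0.5, 0.9):
    for name, (win, lose) in plans.items():
        ev = expected_value(credence, win, lose)
        print(f"credence={credence:.0%}  {name:18s} EV={ev:+.1f}")

# At 90% the bet looks best (EV +7.0 vs. +3.8); at 50% it is terrible
# (EV -5.0 vs. +3.0). The hedged plan is resilient to the claim being
# wrong, which often matters more than the exact probability.
```

The point of the sketch is that the exact credence only matters when your plan is fragile; a plan that does acceptably either way makes the 50%-vs-90% question much less important.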
The problem with broad skepticism is that I would reject enormous numbers of true conclusions, including very basic facts about the world.
For example, I haven’t personally verified the heliocentric model of the solar system from observation. I think I’ve “verified” gravitational acceleration maybe once, poorly. I have “verified” vaccine reliability from the fact that I don’t know anyone with polio, but my parents (generally reliable witnesses) actually did remember when people got polio. Also, I once met an EMT who walked through old New England graveyards looking for “tiny tombstones” where multiple children under 10 in a family all died within a year or so, with causes of death and death dates that were consistent with known diseases. (But can I trust him? I mean, he seemed fairly trustworthy, but I never walked through those graveyards.) For that matter, my only verification that the Roman Empire existed is what you can perform as a tourist in Italy. I believe one of my old cars had a broken overdrive system because a good mechanic said that it did, and because he fixed the problem within 10 seconds of opening the hood by yanking out a cable, and told me “No charge.” I didn’t take out my Haynes teardown manual and study the engine to verify his claim, though I easily could have. The car was, in fact, fixed, and that was good enough past 200,000 miles.
An over-broad skepticism of experts risks turning people into the kind of credulous fools who try to heal themselves with the powers of quartz crystals.
A more subtle balance is required here, I think, and accepting broad categories of information as “probably true because experts said so” is almost certainly a decent rule of thumb. Especially if you apply some common sense, if you keep track of which experts appear to be full of it (e.g., the replication crisis), and if you remain aware that you almost certainly have some false beliefs but don’t know which ones.
There’s a reason why I spoke about generally being skeptical. The person who easily accepts claims about the healing powers of quartz crystals is not broadly skeptical. They are not the person who often says “I don’t know”.
The replication crisis is about the community of psychology getting much better at getting rid of bullshit. Before the crisis, you could have listened to Feynman's "Cargo Cult Science" speech, in which he explains why rat psychology is cargo cult science, and observed that the same criticisms applied to most of psychology.
Fields of science that behave like what Feynman describes as cargo cult science, but that haven't had their own replication crisis, are less trustworthy than post-replication-crisis psychology. Post-replication-crisis psychology still isn't perfect, but it's a step up.
There are many cases where systematically increased transparency that reveals problems in an expert community should get you to trust that community more, because it has found ways to reduce those problems.
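As a toy illustration of this point (all probabilities here are assumptions, not measurements): if self-auditing is much more likely in a field with healthy epistemics, then observing a field run its own replication crisis should raise your estimate that it has healthy epistemics.

```python
# Toy Bayesian sketch (all probabilities invented for illustration):
# observing a field run its own replication crisis can *raise* your
# trust in it, because self-correction is more likely in a field with
# healthy error-correction.

p_healthy = 0.3                   # prior: field has good error-correction
p_crisis_given_healthy = 0.6      # healthy fields often audit themselves
p_crisis_given_unhealthy = 0.1    # cargo-cult fields rarely do

p_crisis = (p_crisis_given_healthy * p_healthy
            + p_crisis_given_unhealthy * (1 - p_healthy))

# Bayes' rule: P(healthy | crisis observed)
posterior = p_crisis_given_healthy * p_healthy / p_crisis
print(f"prior P(healthy) = {p_healthy:.2f}, posterior = {posterior:.2f}")
# prior 0.30 -> posterior 0.72: the revealed problems are evidence of
# a community that finds and fixes its own mistakes.
```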
If you ask “What do I do if I don’t know?”, one answer is to make sure that you have decent feedback systems that allow you to change course if what you are doing isn’t working.
There’s also the policy of generally being more skeptical, both of claims that something is true and of claims that something is false, and of more often saying “I don’t know”.