I do claim that among non-scientists, the problem is much more common.
Non-scientists don’t often engage in writing scientific papers. In what instances do you believe they screw things up and should engage in in-depth scientific analysis?
Reasoning from a statistical outlier is probably the most common error I see. Most news stories do this.
Mistakes on LessWrong tend to be more varied. Likely due to LessWrong’s focus on Cognitive Psychology, I see a lot of people repeating the mistakes of Psychoanalysis; coming up with elaborate theories about mental processes with no reasonable means of verification of their ideas.
They do this because it’s good storytelling and they want to sell papers. I have fairly low confidence that teaching the authors statistics helps in any way.
If you think it helps can you explain why you think so?
coming up with elaborate theories about mental processes with no reasonable means of verification of their ideas.
Why do you think those posts need “reasonable means of verification of their ideas” while you haven’t provided one for the post you wrote, and think it’s okay based on heuristics? Aren’t those people also simply using heuristics instead of structured scientific thinking?
Not the authors but the readers, though I don’t think the authors are generally aware of the problem either.
Verifiability is not a heuristic. I’m combining in the term verification the two scientific concepts of direct observation and falsification. By elaborate theories, I’m referring to Occam’s razor.
My post isn’t a theory post. It contains a few ideas and a lot of observations, but the assumptions are pretty straightforward and they’re related to the central concept of scientific analysis, but not dependent on one another. The general statement about schools is not dependent on the specific statement about math, nor is the argument about math dependent on the argument about whether rationality is sufficient. And I try to be exact with my phrasing to specify my uncertainty where it exists.