F techniques tend to make assumptions that are equivalent to establishing prior distributions, but because it’s easy to forget about these assumptions, many people use F techniques without considering what the assumptions mean. If you are explicit about establishing priors, however, this problem mostly evaporates.
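To make the “implicit prior” point concrete with an example of my own choosing (not one from this exchange): for a normal mean with known variance, the textbook frequentist 95% confidence interval has exactly the same endpoints as the 95% credible interval you get from a flat (improper) prior. A minimal sketch, assuming NumPy and SciPy are available:

```python
# Sketch (illustrative, not from the thread): for a normal mean with known sigma,
# the standard frequentist 95% confidence interval coincides with the 95%
# credible interval under a flat (improper) prior -- one case where the F
# procedure behaves as if a particular prior had been chosen.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, n = 2.0, 50                         # known noise scale and sample size (arbitrary)
data = rng.normal(loc=1.3, scale=sigma, size=n)

xbar = data.mean()
se = sigma / np.sqrt(n)
z = stats.norm.ppf(0.975)

# Frequentist: 95% confidence interval for the mean.
ci = (xbar - z * se, xbar + z * se)

# Bayesian with a flat prior on the mean: the posterior is Normal(xbar, se^2),
# so the central 95% credible interval has the same endpoints.
credible = stats.norm(loc=xbar, scale=se).interval(0.95)

print(ci)
print(credible)
```

The F procedure never mentions a prior, but it behaves as if one had been chosen; being explicit just puts that choice on the table.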
Notice that the point about your analogy was regarding area of application, not relative vagueness.
I don’t have a strong personal opinion about F/B. This is just based on informal observations about F techniques versus B techniques.
Can you name three examples of this happening?
Here’s one: http://lesswrong.com/lw/f6o/original_research_on_less_wrong/7q1g
Every biology paper published on the basis of a 5% P-value threshold without regard to the underlying plausibility of the connection. There are many effects where I wouldn’t take a 0.1% P-value to mean anything (see: the kerfuffle over superluminal neutrinos), and some where I’d take a 10% P-value as a weak but notable degree of confirmation.
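As a toy illustration of why prior plausibility changes what a given P-value is worth (my own numbers, not anything claimed in the thread, and treating the evidence as a fixed likelihood ratio rather than a P-value, which is a simplification):

```python
# Toy posterior-odds calculation (illustrative numbers): the same strength of
# evidence is decisive for a plausible effect but barely moves an implausible
# one.  The likelihood ratio stands in for "strong evidence"; a P-value is not
# a likelihood ratio, but the prior-dependence it shows is the point.
def posterior_prob(prior_prob, likelihood_ratio):
    """Posterior probability from a prior probability and a likelihood ratio."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

evidence = 1000.0  # a stand-in for "very strong evidence" (illustrative)

print(posterior_prob(0.3, evidence))    # plausible biological effect: ~0.998
print(posterior_prob(1e-9, evidence))   # superluminal neutrinos: still ~1e-6
```

The same likelihood ratio that settles a mundane effect barely dents the odds against superluminal neutrinos.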
I could, but I doubt anything would come of it. Forget about the off-hand vagueness remark; the analogy still fails.
“Area of application” depends on granularity: “analysis of running time” (e.g. “how long will this take, I haven’t got all day”) is an area of application, but if we are willing to drill in we can talk about distributions on inputs vs. the worst case as separate areas of application. I don’t really see a qualitative difference here: sometimes F is more appropriate, sometimes not. It really depends on how much we know about the problem and how paranoid we are being. Just as with algorithms: sometimes input distributions are reasonable, sometimes not.
Or, if we are being theoretical statisticians, on our intended target for the techniques we are developing. I am not sympathetic to arguments of the “but the unwashed masses don’t really understand, therefore” variety. Math techniques don’t care; it’s best to use what’s appropriate.
edit: in fact, let the utility function u(.) be the running time of an algorithm A, and let the prior over theta be the input distribution for A’s inputs. Now consider what the expectation for F vs. the expectation for B is computing. This is a degenerate statistical problem, of course, but this isn’t even an analogy; it’s an isomorphism.
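A minimal sketch of that correspondence, with the algorithm and the input distribution being toy choices of mine: u(theta) is the running time of linear search on input theta, the “prior” is a distribution over inputs, and the B-style expectation is just average-case analysis, while the worst case ignores the distribution entirely:

```python
# Sketch of the running-time correspondence: u(theta) = running time of an
# algorithm A on input theta, prior = distribution over inputs.  The algorithm,
# input family, and distribution below are toy choices for illustration.
import random

def steps_linear_search(xs, target):
    """u(theta): comparisons made by linear search on input theta = (xs, target)."""
    for i, x in enumerate(xs, start=1):
        if x == target:
            return i
    return len(xs)

n = 100
inputs = [(list(range(n)), t) for t in range(n)]   # one input per target position

# Worst-case analysis: max of u over inputs -- no distribution required.
worst_case = max(steps_linear_search(xs, t) for xs, t in inputs)

# Average-case analysis: expectation of u under a "prior" over inputs --
# the B-style expectation for this degenerate problem.
random.seed(0)
weights = [random.random() for _ in inputs]
total = sum(weights)
prior = [w / total for w in weights]               # arbitrary input distribution
average_case = sum(p * steps_linear_search(xs, t)
                   for p, (xs, t) in zip(prior, inputs))

print(worst_case)     # 100
print(average_case)   # around n/2, depending on the distribution
```

Reporting the expectation versus the maximum is exactly the average-case vs. worst-case choice, which is why the two debates track each other so closely.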