You cannot apply Bayes’ Theorem until you have a probability space. Many real-world situations, especially the ones people argue about, lack a well-defined probability space: there is no complete set of mutually exclusive and exhaustive possible events that all participants in the argument agree on.
You will notice that, even on LessWrong, people almost never have Bayesian discussions where they literally apply Bayes’ Rule. It would probably be healthy to try doing that more often! But a serious attempt to debate a contentious issue “Bayesianly” typically looks more like Rootclaim’s lab-leak debate, which took substantial setup labor and time, and where quantifying the likelihoods mainly revealed how heavily your “posterior” conclusion depends on your “prior” assumptions, which were outside the scope of the debate.
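The point about posteriors hinging on priors can be made concrete with a toy odds-form Bayes update. The numbers below are hypothetical and have nothing to do with Rootclaim’s actual figures; the sketch just shows that the same evidence (a fixed likelihood ratio) yields wildly different posteriors depending on the prior you walked in with.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio,
    converted back to a probability."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Same hypothetical evidence (likelihood ratio of 20 favoring the hypothesis),
# applied to three different priors:
for prior in (0.01, 0.10, 0.50):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, 20.0):.3f}")
```

With a likelihood ratio of 20, a 1% prior lands near 0.17 while a 50% prior lands near 0.95, so debaters who agree on every likelihood can still walk away with opposite conclusions.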
I think prediction markets are good, and I think Rootclaim-style quantified debates are worth doing occasionally, but what we do in most discussion isn’t Bayesian and can’t easily be made Bayesian.
I am not so sure about preferring models to propositions. I think what you’re getting at is that we can make much more rigorous claims about formal models than about “reality”… but most of the time what we care about is reality, and we can’t be rigorous about the intuitive “mental models” we use for most real-world questions. So if your take is “we should talk about the model we’re using, not about what the world is,” then I don’t think that’s true in general.
In the context of formal models, we absolutely should consider how well they correspond to reality. (It’s a major bias of science that it’s more prestigious to make claims within a model than to ask “how realistic is this model for what we care about?”)
In the context of informal “mental models”, it’s probably good to communicate how things work “in your head” because they might work differently in someone else’s head, but ultimately what people care about is the intersubjective commonalities that can be in both your heads (and, for all practical purposes, in the world), so you do have to deal with that eventually.
That said, I think I agree with this post directionally.