First, thanks for this comment—I thought the original post was interesting, but also figured there was probably a mistake in reasoning happening somewhere.
However...
"This is not a fallacy; it happens because you’ve given the agent the wrong prior!"
This raises the question of how to develop priors in the first place. I thought the benefit of Bayesian updating is that it converges on the right probabilities no matter your starting point, once you’ve been presented with enough evidence, so long as you don’t assign anything a prior of exactly 0% or 100%.
Like, in real life, people who understand the math of coin flips, gambling, and independent events know enough to avoid the gambler’s fallacy, and assigning a prior of 1⁄3 to Switchy or Sticky would be ridiculous. But what about other areas of life? For instance, you could be playing a videogame and not know whether an enemy boss was programmed to alternate between its two possible attacks randomly, or whether it was programmed to be Switchy or Sticky. Then I think the “fallacy” presented by the OP would apply, wouldn’t it?
"This begs the question of how to develop priors. I thought the benefit of Bayes is that it can converge on the best probabilities no matter your starting point, when you’ve been presented enough evidence, so long as you don’t assign anything a 0% or 100% prior."
This is true, and it still happens here. The post didn’t say anything against convergence in the limit; the wrong prior only skews predictions until enough evidence has accumulated.
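That convergence is easy to check in a quick simulation. A minimal sketch (the repeat probabilities for Sticky/Switchy and the prior weights below are my own illustrative assumptions, not numbers from the post): a Bayesian agent that starts with most of its prior mass on Sticky and Switchy still ends up nearly certain the flips are random, given enough of them.

```python
import random

# Model each hypothesis as the chance that the next flip REPEATS the previous one.
# The 0.8 / 0.2 values for Sticky / Switchy are illustrative assumptions.
P_REPEAT = {"random": 0.5, "sticky": 0.8, "switchy": 0.2}

def posterior_after(flips, prior):
    """Update P(hypothesis) after observing a sequence of 0/1 flips."""
    post = dict(prior)
    for prev, cur in zip(flips, flips[1:]):
        for h, p_rep in P_REPEAT.items():
            post[h] *= p_rep if cur == prev else (1 - p_rep)
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}  # renormalize each step
    return post

random.seed(0)
flips = [random.randint(0, 1) for _ in range(2000)]  # genuinely independent fair flips

# Deliberately "wrong" prior: the agent starts out expecting Sticky or Switchy.
prior = {"random": 0.10, "sticky": 0.45, "switchy": 0.45}
print(posterior_after(flips, prior))
```

With 2000 flips the posterior on "random" comes out essentially 1 despite the 10% prior; with only a handful of flips it would still be dominated by the wrong prior, which is the regime the post is about.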
"For instance, you could be playing a videogame and you don’t know whether an enemy boss was programmed to cycle between both of its possible attacks randomly, or if it was programmed to be Switchy or Sticky. Then I think the “fallacy” presented by the OP would apply, wouldn’t it?"
(nice example!) The real answer here is that you don’t start accumulating evidence with the first boss hit, but well before that. Lots of things in the world give you information about how real people will most likely have programmed a boss in this case. Or, more practically relevant: you’d let your prior knowledge inform your choice of prior here. Pretty sure Sticky is quite unlikely, and it’s either Switchy or random.
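To put numbers on that (the repeat probabilities and the prior weights below are illustrative assumptions, not anything from the post), here’s how a background-informed prior that downweights Sticky shapes the prediction for the boss’s next attack:

```python
# Each hypothesis about the boss AI is the chance the next attack REPEATS the last.
# The 0.8 / 0.2 values for Sticky / Switchy are illustrative assumptions.
P_REPEAT = {"random": 0.5, "sticky": 0.8, "switchy": 0.2}

# Background knowledge about how bosses tend to be programmed:
# Sticky seems unlikely, so it gets little prior mass.
prior = {"random": 0.45, "sticky": 0.10, "switchy": 0.45}

def predict_repeat(prior):
    """Posterior-predictive probability that the next attack repeats the last one."""
    return sum(prior[h] * P_REPEAT[h] for h in prior)

print(predict_repeat(prior))  # just under 0.5, so this agent bets on a switch
```

Since the predictive probability of a repeat comes out below 0.5, this agent, like the gambler, expects the boss to switch; whether that is a fallacy or a sensible bet depends entirely on the prior, which is the point above.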
One more thing on the priors question: there is a theoretical answer to the problem of choosing priors, though it’s computationally intractable. It’s a huge rabbit hole, if you want to go down it.