Are dogs bad?

This is my response to Trapped Priors As A Basic Problem Of Rationality by Scott Alexander.

I felt that the discussion in the post and the comments lacked references to a lesson from Yudkowsky’s Causal Diagrams and Causal Models, Fake Causality, and One Argument Against An Army: you can’t bounce back and forth between the verdict and the assumptions, double-counting the evidence in a self-reinforcing loop.
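To make that failure mode concrete, here is a tiny Python sketch, with illustrative numbers of my own, of what happens when a single observation is fed through Bayes’ rule over and over, each verdict smuggled back in as the next prior:

```python
# Toy illustration (illustrative numbers) of the double-counting failure mode:
# Bayes' rule is re-applied to the *same* observation, with each verdict fed
# back in as the next prior, so confidence grows without any new evidence.

P_THREAT_IF_BAD = 2 / 3    # assumed likelihood: a bad dog feels threatening 2/3 of the time
P_THREAT_IF_GOOD = 1 / 3   # a good dog feels threatening 1/3 of the time

p_bad = 0.5                # honest starting prior for this one dog
for _ in range(5):
    # the same single scary-looking moment, counted again and again
    num = P_THREAT_IF_BAD * p_bad
    p_bad = num / (num + P_THREAT_IF_GOOD * (1 - p_bad))
    print(round(p_bad, 3))  # 0.667, 0.8, 0.889, 0.941, 0.97
```

Only one piece of evidence was ever seen, yet the confidence marches toward certainty; that is the loop the algorithm described below is meant to rule out.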

Also, I posit that one has to be careful to distinguish the separate concepts of a “prior on this particular encounter going badly”, a “prior on the fraction of encounters which will go badly”, etc. Scott’s post seems to hide both under the same phrase, “prior that dogs are terrifying”.
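A minimal sketch of the distinction, assuming a simple Beta model over the fraction of bad dogs (the numbers are made up for illustration): the per-encounter prior is just the predictive mean derived from the fraction-level belief, and the two update differently.

```python
from fractions import Fraction

# Sketch: a Beta(a, b) belief over the *fraction* of dogs that are bad
# (the meta-prior) versus the derived probability that *this* encounter
# is with a bad dog (the predictive mean a / (a + b)).
a, b = 1, 1                          # Beta(1, 1): no opinion yet about the fraction
p_this_dog_bad = Fraction(a, a + b)  # prior that this particular dog is bad
print(p_this_dog_bad)                # 1/2

# Suppose (by whatever per-encounter inference) one dog turned out bad and two good.
# Only the fraction-level belief absorbs that evidence; the per-encounter prior
# is then re-derived from it rather than being a separate free-floating number.
a, b = a + 1, b + 2                  # Beta(2, 3)
p_next_dog_bad = Fraction(a, a + b)
print(p_next_dog_bad)                # 2/5
```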

I’ve made some colorful charts showing the story of three encounters with dogs, which illustrate how I believe one should update their prior (and meta-prior), and what can go wrong if you let your final verdicts influence them instead:

https://docs.google.com/spreadsheets/d/1vQfK-g0zAMEcXwbbPbAWWMTpHprE1jpf6bmwDjtJkHo/edit?usp=sharing

One important feature of the epistemology algorithm I propose above is that it lets you admit you were wrong about your past judgments: after exposure to many friendly puppies, and after updating your prior on the “fraction of good dogs” up sufficiently, you may reevaluate your previous verdicts (“see them in a new light”) and decide that perhaps they were good dogs too. And there is no risk of this causing a loop in the reasoning, because there are no edges from the “verdicts” to “the prior” in this algorithm.
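Here is a rough Python sketch of that structure. It uses an expected-count (“soft”) update of Beta pseudo-counts as an approximation, and made-up observations, so the spreadsheet’s exact numbers may differ; the point is only the direction of the arrows: raw observations update the fraction belief, verdicts are read off from it, and nothing flows from verdicts back into the belief.

```python
# Sketch of the proposed structure (expected-count approximation, made-up data):
# raw observations update the fraction-of-bad-dogs belief; per-encounter verdicts
# are derived from that belief; no edge goes from verdicts back into the belief.

P_THREAT_IF_BAD = 2 / 3    # sensor model from the example: a bad dog feels threatening 2/3 of the time
P_THREAT_IF_GOOD = 1 / 3   # a good dog feels threatening 1/3 of the time

def p_bad_given_obs(p_bad_prior, felt_threatened):
    """Posterior that one particular dog is bad, given one raw observation."""
    like_bad = P_THREAT_IF_BAD if felt_threatened else 1 - P_THREAT_IF_BAD
    like_good = P_THREAT_IF_GOOD if felt_threatened else 1 - P_THREAT_IF_GOOD
    num = like_bad * p_bad_prior
    return num / (num + like_good * (1 - p_bad_prior))

alpha, beta = 1.0, 1.0                    # Beta pseudo-counts over the fraction of bad dogs
observations = [True, False, False]       # hypothetical raw data: threatened once, then twice not

for felt_threatened in observations:
    p_prior = alpha / (alpha + beta)      # per-encounter prior, derived from the fraction belief
    p_bad = p_bad_given_obs(p_prior, felt_threatened)
    alpha += p_bad                        # soft update from the raw observation...
    beta += 1 - p_bad                     # ...each observation is counted exactly once

# Re-evaluating all past verdicts with the latest belief ("seeing them in a new light")
# is safe, because the verdicts were never inputs to (alpha, beta).
final_prior = alpha / (alpha + beta)
verdicts = [p_bad_given_obs(final_prior, o) for o in observations]
print(round(final_prior, 3), [round(v, 3) for v in verdicts])
```

Re-running the last three lines after any number of new encounters re-judges all past dogs in the light of the current belief, with no loop, because the verdicts were never inputs to the pseudo-counts.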

There are many open questions, though. For example: should one also update the operational definition of the word “bad” over time? In my example, we initially assume that a “bad dog” will make us feel threatened 2/3 of the time, and this stays fixed throughout the lifetime. But perhaps this should also update, adding one more layer of Bayes computations. (Perhaps this is exactly the part which is “broken” in people suffering from phobias: perhaps they have a different value than 2/3, or let it drift too much or too little relative to the rest of us?) I don’t know how to do that without risking that “bad” and “good” will lose their meaning over time. The 1/3 vs 2/3 split might also change for a different reason: perhaps over time we learn something about our sensors’ reliability, their mapping to reality, etc. This would introduce yet another layer of Bayes. I’d like to know what a full-fledged, correct, Bayesian mental model of this “simple” question of “Are dogs bad?” should really look like.
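For what it’s worth, here is one hedged guess at what “one more layer of Bayes” could look like: put a prior over P(threat | bad dog) itself, on a small grid, and update it jointly with the fraction of bad dogs, again from raw observations only. The grid values and data below are mine, purely for illustration, and I keep P(threat | good dog) fixed so that “good” stays anchored to something.

```python
import itertools

# One hedged guess at "one more layer of Bayes" (grid values and data are made up):
# instead of fixing P(threat | bad dog) = 2/3, put a discrete prior over it and
# update it jointly with the fraction of bad dogs, from raw observations only.
thetas = [i / 10 for i in range(1, 10)]   # candidate fractions of bad dogs
sensitivities = [0.55, 2 / 3, 0.8]        # candidate values of P(threat | bad dog)
P_THREAT_IF_GOOD = 1 / 3                  # kept fixed here, so "good" stays anchored

observations = [True, False, False]       # hypothetical raw data

# Uniform joint prior over (theta, s), updated by the marginal likelihood of each observation.
posterior = {(t, s): 1.0 for t, s in itertools.product(thetas, sensitivities)}
for felt_threatened in observations:
    for (t, s) in posterior:
        p_threat = t * s + (1 - t) * P_THREAT_IF_GOOD
        posterior[(t, s)] *= p_threat if felt_threatened else 1 - p_threat

total = sum(posterior.values())
posterior = {k: v / total for k, v in posterior.items()}
best_theta, best_s = max(posterior, key=posterior.get)
print(best_theta, round(best_s, 3), round(posterior[(best_theta, best_s)], 3))
```

This is a sketch of the shape of the computation, not an answer to the phobia question; whether the extra layer should be allowed to drift, and how fast, is exactly the part I don’t know how to pin down.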