Suppose you want to help non-experts fill out the probabilities for a simple Bayesian network. For each link, you need them to specify the probability that the child node is true for each state of its parent.
What’s the most accurate and psychologically easy way to ask for these numbers?
Is it better to ask directly for two conditionals (e.g., “How likely is X if Y is true? How likely is X if Y is false?”)?
Or to ask for a “baseline” probability for X, and then how knowing Y is true or false would shift that estimate (i.e., a base rate and two “deltas”)?
Or is there a smart way to fold all that into a single, intuitive question?
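To make the second option (base rate plus two "deltas") concrete, here's a minimal sketch of how I imagine those three answers would map onto the two conditionals the network actually needs. The simple additive reading of the deltas and the function names are my own assumptions, not something from the literature:

```python
# Minimal sketch: turning a "base rate plus two deltas" elicitation into the
# two conditional probabilities a CPT needs. The additive interpretation of
# the deltas is an assumption on my part.

def cpt_from_base_and_deltas(base, delta_if_true, delta_if_false):
    """base: elicited P(X); deltas: how much knowing Y shifts that estimate."""
    clamp = lambda p: min(1.0, max(0.0, p))
    p_x_given_y = clamp(base + delta_if_true)
    p_x_given_not_y = clamp(base + delta_if_false)
    return p_x_given_y, p_x_given_not_y

def consistency_gap(base, p_x_given_y, p_x_given_not_y, p_y):
    """How far the elicited base rate is from the one implied by the
    conditionals and P(Y), via the law of total probability."""
    implied_base = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
    return abs(base - implied_base)

# Example: the user says P(X) is about 0.3, knowing Y is true raises it by
# 0.4, knowing Y is false lowers it by 0.1, and P(Y) is elicited as 0.5.
p_true, p_false = cpt_from_base_and_deltas(0.30, 0.40, -0.10)
gap = consistency_gap(0.30, p_true, p_false, 0.50)
print(p_true, p_false, round(gap, 3))  # 0.7 0.2 0.15 -> large gap, flag for review
```

One thing I like about this framing is that the three answers are over-determined once P(Y) is known, so a large consistency gap can trigger a gentle prompt asking the user to revisit one of their numbers.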
Most research seems to agree that decomposing complex probability judgments into simpler steps improves reliability, and that methods like visual scales or “anchor and adjust” approaches are promising. But I haven’t found a clear consensus or best practice for lay users or practical tools.
Are there common pitfalls to avoid when trying to get honest, consistent numbers from users?
Would appreciate any links, references, or practical experience. For what it's worth, I'm leaning toward the "three-question approach" myself: it's more onerous, but each individual question seems to carry less cognitive load.
In this webinar, Douglas Hubbard talks about inconsistency even among Calibrated Experts, which I feel is very close to the problems you're grappling with. He says the simplest solution is to average experts' answers together.
I have a feeling he has also said elsewhere that you can account for personal inconsistency by getting the same person to answer the same question on two separate occasions and averaging their answers, but that might be my faulty memory.
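For what it's worth, a minimal sketch of the averaging idea, whether across experts or across repeat answers from the same person. Averaging in log-odds space is my own variation, not something Hubbard prescribes:

```python
import math

def average_probabilities(answers, in_log_odds=False):
    """Combine several elicited probabilities for the same question.

    answers: probabilities in (0, 1) from different experts, or from the
    same person on different occasions.
    in_log_odds: if True, average in log-odds space (my own variation);
    otherwise take the simple mean, which is what Hubbard suggests.
    """
    if not in_log_odds:
        return sum(answers) / len(answers)
    logits = [math.log(p / (1 - p)) for p in answers]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

# Same person asked twice, a week apart:
print(round(average_probabilities([0.60, 0.80]), 3))                    # 0.7
print(round(average_probabilities([0.60, 0.80], in_log_odds=True), 3))  # ~0.71
```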