Here’s a broader article with some pointers on masks.
Good explanation #3: we perceive probabilities differently from their objective values (i.e., a question of calibration). Our responses to questions are a function both of our “underlying” subjective probabilities and of the mapping from those probabilities to the response format. In the linked example, responding anywhere from p = 10% to p = 90% feels like being anywhere from w(p) = 20% to w(p) = 70% sure.
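One way to make that mapping concrete is a probability weighting function w(p). A minimal sketch, assuming the one-parameter Tversky–Kahneman (1992) form with their estimated γ ≈ 0.61 for gains (the linked article may use a different curve):

```python
# Probability weighting: the "felt" probability w(p) is a compressed
# version of the stated probability p (Tversky-Kahneman 1992 one-parameter form).

def w(p: float, gamma: float = 0.61) -> float:
    """One-parameter TK weighting function."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# Stated 10% and 90% map to roughly 19% and 71%: close to the
# 20%-to-70% compression described above.
print(round(w(0.10), 2), round(w(0.90), 2))  # ~0.19 ~0.71
```

With γ = 1 the function is the identity, so γ indexes how strongly the response scale is compressed toward the middle.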
Charlie Steiner, right, it’s not doable for, say, all products on (or that could be on) the market, but it is certainly doable among the products in a person’s consideration set. If we posit that they would make a choice among four options, then eliciting binary preferences might, but also might not, faithfully reflect how preferences look in the four-option set. So, to MichaelA’s point, if preferences are context-dependent, then you need to identify appropriate contexts, or reasonable situations.
Context-dependent preferences present a big problem because “true,” context-free preferences may not exist at all. At the very least, we can make sure we’re eliciting preferences in an ecologically valid way.
Binary choices are useful, but when they lead to inconsistencies, one should wonder whether the preferences themselves are inconsistent or whether it’s an artifact of the elicitation method. If people really would choose between A and B without considering C or D, then ranking A and B is the relevant question. If people would consider A, B, C, and D (or at least pick between A and B in the context of C and D), then ranking all four (or at least ranking A and B in the context of C and D) is the relevant question.
Very neat post.
Intransitive preferences can show up in a series of binary choices, but if you force a ranking over the full set, you can’t observe intransitivity (i.e., any ranking can be written out as a gradient). This also means the elicitation procedure affects your inferences about the vectors. Circular preferences would seem to “fit,” but they could just be fitting (preferences | elicitation method) rather than “unconditional” preferences (“core” preferences plus “irrationality,” whatever irrationality means). Preferences are also not independent of “irrelevant” alternatives, since perceived attribute levels are evaluated contextually (is that necessarily irrational?).
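A toy illustration of the asymmetry (with hypothetical choice data): cyclic pairwise choices can never be reproduced by any forced ranking, so the ranking procedure erases the cycle rather than revealing it.

```python
from itertools import permutations

# Hypothetical binary-choice data: wins[x] = options x was chosen over.
# A > B, B > C, C > A is a preference cycle.
wins = {"A": {"B"}, "B": {"C"}, "C": {"A"}}

def consistent(ranking, wins):
    """True if every observed pairwise choice agrees with the ranking."""
    pos = {x: i for i, x in enumerate(ranking)}
    return all(pos[x] < pos[y] for x, ys in wins.items() for y in ys)

# No ordering of A, B, C reproduces all three binary choices.
print(any(consistent(r, wins) for r in permutations("ABC")))  # False
```

So if the data-generating process really is cyclic, a forced-ranking elicitation will always return some ranking, and the fitted model inherits that artifact.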
One implication I see here is that zero vectors are points with no inclination to switch, i.e., “no desire.” These would be useful model-falsification points (e.g., Figure 7 implies that people don’t care about sportiness at all conditional on weight being “right”). But they would also seem to correspond only to ideal points, or “ideal configuration” points. Without data on what the agent wants, and only on what they are being offered (“I want a sporty car, but not too sporty; Car A is closest, but still not quite right, too bad”), you’ll be fitting the wrong hill to run up.
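A sketch of why zero vectors pin down ideal points in the simplest case (my toy assumption of a quadratic ideal-point utility, not the post’s exact model): the preference vector is the gradient of u(x) = -||x - x*||², which vanishes only at the ideal configuration x*.

```python
# Toy ideal-point model over two attributes (sportiness, weight).
# The "desire" vector at x is the gradient of u(x) = -||x - x_star||^2.
X_STAR = (0.7, 0.3)  # hypothetical ideal configuration

def preference_vector(x, x_star=X_STAR):
    """Gradient of -||x - x_star||^2: points from x toward the ideal."""
    return tuple(-2 * (xi - si) for xi, si in zip(x, x_star))

# Zero vector exactly at the ideal point: no inclination to switch.
print(preference_vector(X_STAR))
# A car that is too sporty gets a vector pointing back toward the ideal.
print(preference_vector((1.0, 0.3)))
```

If you only ever observe vectors at offered products and none of them sits near x*, the fitted surface can place its peak in the wrong spot, which is the wrong-hill problem.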
Related: Fungus arbitrage https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6584331/
What is the difference between a generic “signal” and a “price signal”? What is a “price” in physiology? I think it would be interesting to see what insights an economic perspective on physiology would provide, but the constructs need to be defined clearly enough that the analogies can actually be drawn.
Another question is which basic assumptions embraced in economics can reasonably apply to the units of analysis in physiology (cells, etc.). Economists already have a hard enough time validating assumptions for humans.
This is aleatory (inherent randomness) vs. epistemic (knowledge) uncertainty. You can parse this as uncertainty inherent in the parameters vs. uncertainty inherent in your estimates of the parameters / the parameterization of the model.
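A standard coin-flip sketch of the split (my example, not from the thread): the epistemic part, posterior uncertainty about the coin’s bias p, shrinks as data accumulate, while the aleatory part, the randomness of each individual flip, does not.

```python
import random

random.seed(0)
TRUE_P = 0.6  # hypothetical coin bias

results = {}
for n in (10, 1000):
    heads = sum(random.random() < TRUE_P for _ in range(n))
    a, b = 1 + heads, 1 + n - heads  # Beta(1,1) prior -> Beta(a,b) posterior
    # Epistemic: posterior variance over p, shrinks roughly like 1/n.
    post_var = a * b / ((a + b) ** 2 * (a + b + 1))
    p_hat = a / (a + b)
    # Aleatory: variance of a single flip, stays near p(1-p) no matter how much data.
    flip_var = p_hat * (1 - p_hat)
    results[n] = (post_var, flip_var)
    print(n, round(post_var, 6), round(flip_var, 3))
```

More data collapse the posterior over the parameter but leave the per-flip randomness untouched, which is the parameter-vs-estimate distinction above.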
This is a very important distinction that has received treatment in the prediction literature but, indeed, is not applied enough when laypeople interpret others’ predictions.