This feels related to the predictive processing framework, in which the classifications of one model are then predicted by another.
More tangentially, I’ve previously thought about merging cognitive biases and values, since we can view both of them as deviations from the optimal resource-maximizing policy. For example, suppose that I am willing to bet even when I am being Dutch booked. You could think of that as a type of irrationality, or you could think of it as an expression of me valuing being Dutch booked, and therefore being willing to pay to experience it.
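To make the Dutch book case concrete, here is a minimal sketch of the classic setup: an agent with incoherent probabilities accepts a set of bets, each fair by its own lights, that together guarantee a loss. The numbers are illustrative assumptions, not from any particular source.

```python
# Minimal Dutch book sketch (illustrative numbers, assumed for this example).
# The agent's beliefs are incoherent: P(rain) = 0.6 and P(not rain) = 0.6,
# which sum to 1.2 rather than 1.

p_rain = 0.6
p_not_rain = 0.6  # incoherent: the two probabilities exceed 1 in total

stake = 1.0  # each bet pays 1.0 if it wins

# By the agent's own lights, a "fair" price for a bet is p * stake,
# so the agent willingly buys both bets at these prices.
price_rain = p_rain * stake          # 0.60 for "pays 1 if rain"
price_not_rain = p_not_rain * stake  # 0.60 for "pays 1 if not rain"

total_paid = price_rain + price_not_rain  # 1.20

# Exactly one of the two bets pays out, so the agent receives 1.0
# in every possible world -- a guaranteed loss of 0.20.
guaranteed_loss = total_paid - stake

print(f"pays {total_paid:.2f}, receives {stake:.2f}, loses {guaranteed_loss:.2f} for sure")
```

Under the biases-as-values reading, the 0.20 the agent loses for certain is reinterpreted as a price the agent pays for something it values, rather than as a leak from irrationality.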
This is related to the Lacanian/“existential kink” idea that most dysfunctions are actually deliberate, caused by subagents that are trying to pursue some goal at odds with the rest of your goals.
Let’s look at a specific example: the Allais paradox. (See page 9 of the TDT paper, page 12 of the PDF, for the treatment I’m referencing.)
It is not plausible to me that the commonly-labeled-irrational behavior in the Allais paradox arises from a desire to be money-pumped. It seems more plausible, for example, that it arises from a cognitive heuristic which makes decisions by identifying the most relevant dimensions along which options differ, weighing how significant the various differences feel, and combining those results to make a decision. Moving from 100% probability to 33⁄34 probability feels significant because we are moving from certainty to uncertainty, whereas how much the difference in payouts matters feels relatively uncertain. The reduction in total payout feels insignificant compared to this. In contrast, moving from 34⁄100 to 33⁄100 feels insignificant compared to the reduction in total payout.
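The structure that makes this pattern count as "irrational" can be shown in a few lines. The dollar figures below ($24,000 vs. $27,000) are the ones commonly used in this version of the paradox; treat them as an assumption rather than a quote from the paper.

```python
# Sketch of the Allais structure. Payoffs ($24,000 / $27,000) are the
# commonly cited figures for this version -- an assumption, not a quote.

def expected_value(outcomes):
    """outcomes: list of (probability, payout) pairs; unlisted mass pays 0."""
    return sum(p * x for p, x in outcomes)

# Gamble pair 1: certainty vs. near-certainty.
ev_1a = expected_value([(1.0, 24_000)])     # $24,000 for sure
ev_1b = expected_value([(33/34, 27_000)])   # roughly $26,206

# Gamble pair 2: the same two gambles with every probability scaled by 0.34.
ev_2a = expected_value([(0.34, 24_000)])    # $8,160
ev_2b = expected_value([(0.33, 27_000)])    # $8,910

# Because pair 2 is just pair 1 with all probabilities multiplied by a
# constant, an expected-utility maximizer must rank 2A vs. 2B the same way
# it ranks 1A vs. 1B. The common pattern -- preferring 1A but also 2B --
# violates this, which is why it gets labeled irrational.
print(ev_1a, ev_1b, ev_2a, ev_2b)
```

Note that the heuristic described above tracks *felt* differences in certainty, not these scaled quantities, which is exactly why it produces the inconsistent ranking.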
Of course, this is still consistent with a biases-as-values analysis. EG, we can treat the heuristic weights I mention as values rather than mere heuristics. Or, reaching for a different explanation, we can say that we don’t want to feel like a fool in the case that we choose 33⁄34 and lose, when we could have had certainty. Probabilities are subjective, so no matter how much we’re assured 33⁄34 is the true probability, we can imagine a friend with a different evaluation of the odds who finds our decision foolish. Humans evolved to avoid such criticism. A statement of 100% probability is, in some sense, taken more seriously than a statement of near-100% probability. In that case, if we lose anyway, we can blame the person who told us it was 100%, so we are absolved from any potential feeling of embarrassment. In the 33⁄100 vs 34⁄100 version, there is no such effect.
I want to say something like “the optimal resource-maximizing policy is an illusion” though. Like, it is privileging some sort of reference frame. In economics, profit maximization privileges the wellbeing of the shareholders. A more holistic view would treat all parties involved as stakeholders (employees, owners, customers, and even local communities where the company operates) and treat corporate policy as a bargaining problem between those stakeholders. This would better reflect the long-term viability of strategies. (Naively profit-maximizing behavior has a tendency to create high turnover in employees, drive away customers, and turn local communities against the company.)
So yes, you can view everything as values, but I would include “resource-maximizing” in that as well.
A further question: what’s at stake when you classify something as ‘values’?
EG, in the Allais paradox, one thing that’s at stake is whether the ‘irrational’ person should change their answer to be rational.