Lots of interesting vibes in this post. Some thoughts:
I like personifying. Humans have evolved to deal with other people to such a degree that we personify all sorts of things, and personifying worldviews probably helps us treat them more objectively. Although it might have exactly the opposite effect if your personas for one worldview seem like friends and your personas for other worldviews seem like enemies.
In type 1, the coin is a chaotic system: small changes to the initial conditions lead to large changes in the output, and since we can’t detect or model those small changes, we know that we can’t know the resulting outcome, only the distribution of possibilities. Type 2 feels like it applies more to situations that are already one way or the other, but you don’t know which. It could be that the coin was already flipped, but it seems to apply better to a question like “is this die biased or is it fair?” The answer isn’t going to change (presumably), but we don’t know what it is; we can only get more confident the more times we roll.

So in the first case we have a very good understanding of the dynamics of the uncertainty and why we can’t make a better prediction than 50:50. In the second case, making any claim about the die before starting to roll it seems foolish. Maybe you have a prior that most dice are fair, but otherwise “I don’t know whether it’s biased” seems like a good state of knowledge. You don’t think that a die being fair or not is the result of a well-understood chaotic system that produces biased dice 50% of the time and fair ones 50% of the time; you just know that you don’t know. I think tracking where a probability distribution comes from, and what kind of uncertainty it represents, is important, because not all uncertainty is the same.
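To make the second kind concrete, here’s a minimal sketch (my own toy model, with an invented “loaded toward six” bias hypothesis, not anything from the post) of how this uncertainty shrinks with evidence in a way the coin’s 50:50 never would:

```python
from fractions import Fraction

def posterior_biased(rolls, prior_biased=Fraction(1, 2),
                     biased_six_prob=Fraction(1, 2)):
    """P(die is biased | rolls), where the biased hypothesis is a die that
    lands six with probability biased_six_prob and spreads the remaining
    probability evenly (this specific bias model is an illustrative assumption)."""
    p_fair = Fraction(1)
    p_biased = Fraction(1)
    for r in rolls:
        p_fair *= Fraction(1, 6)
        p_biased *= biased_six_prob if r == 6 else (1 - biased_six_prob) / 5
    # Bayes: P(biased | data) ∝ P(data | biased) * P(biased)
    num = p_biased * prior_biased
    return num / (num + p_fair * (1 - prior_biased))

print(float(posterior_biased([6, 6])))        # 0.9: two sixes, suspicion grows
print(float(posterior_biased([6, 6, 6, 6])))  # ~0.988: four sixes, near-certain
print(float(posterior_biased([1, 3, 6, 2])))  # ~0.39: mixed rolls, below prior
```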
The “why are you supporting this monster” talk gets me thinking about both strategy under worldview uncertainty and political influence, which are very different beasts.
Worldview uncertainty reminds me of this talk, and also of how, from a math perspective, utility maximization looks like summing, for each action, the probability of each outcome times how good that outcome is, while from an actual planning perspective it feels like trying to make plans that will work well in multiple different worlds, given that you don’t know which world you are actually in.
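Here’s a toy sketch of the two framings (the worlds, probabilities, and utilities are all made up for illustration): expected utility weights each world by its probability, while the planning intuition asks how badly a plan fails in the world you turn out not to be in:

```python
# Hypothetical worlds with subjective probabilities, and invented utilities
# for two candidate plans.
worlds = {"world_a": 0.6, "world_b": 0.4}
utility = {
    "plan_specialized": {"world_a": 15, "world_b": -5},  # great in A, bad in B
    "plan_robust":      {"world_a": 6,  "world_b": 5},   # decent in both
}

def expected_utility(action):
    # The math framing: sum over worlds of P(world) * U(action, world).
    return sum(p * utility[action][w] for w, p in worlds.items())

def worst_case(action):
    # The planning framing: how does the plan do if you guessed the world wrong?
    return min(utility[action].values())

for action in utility:
    print(action, round(expected_utility(action), 2), worst_case(action))
# plan_specialized wins on expected utility (7.0 vs 5.6) but is a disaster
# if world_b turns out to be real; plan_robust works tolerably either way.
```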
Political influence must unfortunately acknowledge that most people are not speaking in terms of probabilities and hypotheses, but in terms of ingroup and outgroup, virtues and flaws. When designing a message for influencing general audiences, you should definitely use statistics and probability to design the message, but the message itself probably shouldn’t contain probability.