Two Types of (Human) Uncertainty
There seem to be (at least) two different types of uncertainty that feel very different from the inside:
Type 1
I have a coin that I believe to be fair, so $\theta = 0.5$, where $\theta$ is the bias of the coin. In that case, I have 1 hypothesis in which I fully believe, and it assigns equal probabilities to the coin landing heads and landing tails.
Type 2
I have a coin, and I’m unsure which way it will land, such that $P(\text{coin lands on tails}) = 0.5$ and $P(\text{coin lands on heads}) = 0.5$. In that case, I have 2 hypotheses which I am unsure about.
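To make the contrast concrete, here is a minimal sketch, assuming we write both types as a distribution over the coin’s bias $\theta$ (the probability of heads). The models and numbers are just the two descriptions above put into code; note that they assign the same probability to a single toss.

```python
# A minimal sketch: both types written as a distribution over hypotheses
# about the coin's bias theta (theta = probability of heads).

# Type 1: full belief in one hypothesis, "the coin is fair" (theta = 0.5).
type1 = {0.5: 1.0}

# Type 2: two hypotheses I'm torn between -- "it lands tails" (theta = 0)
# and "it lands heads" (theta = 1) -- each with probability 0.5.
type2 = {0.0: 0.5, 1.0: 0.5}

def p_heads(belief):
    """Probability of heads on the next toss, marginalizing over hypotheses."""
    return sum(p * theta for theta, p in belief.items())

print(p_heads(type1))  # 0.5
print(p_heads(type2))  # 0.5 -- the same answer for a single toss
```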
From the inside
Being in Type 1 feels like reality containing randomness. “It could go one way, it could go the other way, whatever.” In practice, we deal with it using epistemology, to get a better estimate of this probability, and expected utility maximization, to get the most out of what we know.
But being in the state of Type 2 uncertainty feels like two competing worldviews. It feels like two debaters, interchangeably stealing your brain hardware, arguing for their position. And while being in one worldview, the other one feels completely wrong and stupid and immoral, because of perfectly sound arguments this worldview gives you. Until you give the wheel to the other worldview, which debunks those arguments from the ground up. Each worldview argues for itself as being the ultimate truth.
But math?
Now, mathematically, those two types should be completely the same and give identical results. But humans are not perfect Bayesians. We do not have immediate access to the sum of all mutually exclusive hypotheses’ plausibility to use as a denominator for current plausibility. To calculate $P(E \mid H)$, you only need to consider a single hypothesis $H$, but to calculate $P(H \mid E)$ you need to consider all mutually exclusive, collectively exhaustive hypotheses. So from the inside, Type 2 feels like going back and forth between having absolute certainty in one belief and having absolute certainty in another.
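Spelled out with Bayes’ theorem: the posterior for a hypothesis $H_j$ given evidence $E$ is

$$P(H_j \mid E) = \frac{P(E \mid H_j)\,P(H_j)}{\sum_i P(E \mid H_i)\,P(H_i)},$$

and it is that sum in the denominator, over every competing hypothesis, that we don’t get for free.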
How to deal with Type 2
To understand that you are actually in the state of Type 2 uncertainty, to step into the outside frame of reference, is to introduce a new debater to the table. And this debater is in the state of Type 1 uncertainty: he gives some probability to the first worldview and some to the second. We, being aspiring rationalists, could try to give that guy a bit more credence, because he does offer some benefits (epistemology, utility maximization). But those original worldviews didn’t go anywhere. They will start arguing with this guy too. They will try to shake his “trying to please everyone” attitude and to invalidate all the benefits he tries to offer (“what even is ‘truth’?”, “utilitarianism is evil”). And, of course, they will ask “why are you supporting this monster” (meaning the other worldview).
I am not sure what to make of this. Writing this post and being in the meta-meta state to the object-level hypotheses, being unsure of what type of uncertainty to use, I think that I personally would benefit from introducing the outside debater more. But sometimes it can be harmful to be the Devil’s advocate: some worldviews aren’t worth being debated and argued with.
The two states differ mathematically mainly with respect to how they update. In the first case, one is confident in the bias of the coin, so the probability will not shift much as new evidence (e.g. coin flips) comes in. In the second case, the probability will shift as new evidence comes in.
As a general rule, insofar as humans are well-described as thinking probabilistically, our probabilistic models are little parts of a big world model. Those little parts don’t just exist for e.g. one coin flip; they stick around after the coin is flipped and interact with the rest of the world model. So the way they update is an inherent part of their type signature; that’s why little models which update differently feel different.
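A quick sketch of that difference, using the same toy models as above and assuming, as the comment below does, that the Type 2 hypotheses persist across tosses (“always heads” vs “always tails”): after seeing a single heads, the Type 1 belief doesn’t move, while the Type 2 belief collapses onto one hypothesis.

```python
# A sketch of how the two belief states update after observing one heads.
# Beliefs are distributions over the coin's bias theta (prob. of heads).

type1 = {0.5: 1.0}            # fully confident the coin is fair
type2 = {0.0: 0.5, 1.0: 0.5}  # "always tails" vs "always heads", 50/50

def update_on_heads(belief):
    """Bayes update: reweight each hypothesis by how well it predicted heads."""
    posterior = {theta: p * theta for theta, p in belief.items()}
    total = sum(posterior.values())
    return {theta: p / total for theta, p in posterior.items()}

def p_heads(belief):
    return sum(p * theta for theta, p in belief.items())

print(p_heads(update_on_heads(type1)))  # 0.5 -- nothing shifts
print(p_heads(update_on_heads(type2)))  # 1.0 -- jumps straight to "always heads"
```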
Another difference would be in expectations when the coin gets tossed more than once.
With “Type 1”, if I toss the coin 2 times, I expect “HH”, “HT”, “TH”, “TT”, each with 25% probability.
With “Type 2”, I’d expect “HH” or “TT”, with 50% each.
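A sketch of that calculation, marginalizing over the hypotheses (again assuming the two Type 2 hypotheses are “always heads” and “always tails”, with tosses independent given the bias):

```python
from itertools import product

type1 = {0.5: 1.0}            # "the coin is fair"
type2 = {0.0: 0.5, 1.0: 0.5}  # "always tails" vs "always heads"

def sequence_prob(belief, seq):
    """P(sequence), marginalizing over hypotheses; tosses are i.i.d. given theta."""
    total = 0.0
    for theta, p in belief.items():
        p_seq = 1.0
        for toss in seq:
            p_seq *= theta if toss == "H" else 1 - theta
        total += p * p_seq
    return total

for seq in ("".join(s) for s in product("HT", repeat=2)):
    print(seq, sequence_prob(type1, seq), sequence_prob(type2, seq))
# Type 1: HH, HT, TH, TT each 0.25
# Type 2: HH and TT each 0.5; HT and TH get 0.0
```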
Lots of interesting vibes in this post. Some thoughts:
I like personifying. I feel that humans have evolved for dealing with other people to such a degree that we personify all sorts of things, and personifying worldviews probably helps to treat them more objectively. Although it’s possible it has exactly the opposite effect if your personas for one worldview seem like friends and your personas for other worldviews seem like enemies.
In Type 1, the coin is a chaotic system: small changes to the initial input lead to large changes in the output, and since we can’t detect or model those small input changes, we know that we can’t know the resulting output, only the distribution of possibilities. Type 2 feels like it applies more to situations that are already one way or another, but you don’t know which. It could be that the coin was already flipped, but it seems like it would apply better to a question like, “is this die biased or is it fair?” It’s not going to change (presumably), but we don’t know which, and we can just get more confident the more times we roll it.

So in the first case we have a very good understanding of the dynamics of the uncertainty and why we can’t make a better prediction than 50:50. In the second case, making any claim about the die before starting to roll it seems foolish. Maybe you have a prior that most dice are fair, but otherwise “I don’t know if it’s biased or not” seems like a good state of knowledge. You don’t think that a die being fair or not is the result of a well-understood chaotic system that produces biased dice 50% of the time and fair dice 50% of the time; you just know that you don’t know. I think tracking where and what kind of uncertainty is producing a probability distribution is important, because not all uncertainty is the same.
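A sketch of that “getting more confident the more times we roll it”, with a made-up biased-die hypothesis (a die whose six comes up half the time) against the fair one; the hypotheses and rolls here are purely illustrative:

```python
# Two hypotheses about a die: "fair" vs a hypothetical "biased" die whose
# six comes up half the time. Update the belief on a run of observed rolls.
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {
    "fair":   {face: 1 / 6 for face in range(1, 7)},
    "biased": {**{face: 0.1 for face in range(1, 6)}, 6: 0.5},
}

def update(belief, roll):
    """Bayes update on a single observed roll."""
    posterior = {h: p * likelihood[h][roll] for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = dict(prior)
for roll in [6, 6, 3, 6, 6, 6]:   # made-up rolls, suspiciously many sixes
    belief = update(belief, roll)
    print(roll, round(belief["biased"], 3))
# Confidence in "biased" climbs with each six and dips a little on the 3.
```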
The “why are you supporting this monster” talk gets me thinking about both strategy under worldview uncertainty and political influence, which are very different beasts.
Worldview uncertainty reminds me of this talk, and also of how, from a math perspective, utility maximization looks like summing, for each action, the probability of each outcome times how good that outcome is, but from an actual planning perspective, it feels like trying to make plans that will work well in multiple different worlds, given that you don’t know which world you are actually in.
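Both perspectives can be put side by side in a toy sketch (the worlds, actions, and utilities here are my own made-up numbers): scoring each action by its probability-weighted utility across worlds ends up favoring the plan that works acceptably in both.

```python
# A toy sketch of acting under worldview uncertainty: score each action by
# sum over worlds of P(world) * utility(action, world), then pick the best.
p_world = {"world_A": 0.6, "world_B": 0.4}

utility = {
    "bet_on_A": {"world_A": 10, "world_B": -8},
    "bet_on_B": {"world_A": -8, "world_B": 10},
    "hedge":    {"world_A": 4,  "world_B": 4},   # okay in either world
}

def expected_utility(action):
    return sum(p * utility[action][world] for world, p in p_world.items())

for action in utility:
    print(action, expected_utility(action))
print("best:", max(utility, key=expected_utility))  # "hedge" wins here
```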
Political influence must unfortunately acknowledge that most people are not speaking in terms of probabilities and hypotheses, but instead in terms of ingroup and outgroup and virtues and flaws. When designing a message to influence a general audience, you should definitely use statistics and probability to design the message, but the message itself probably shouldn’t contain probability.