Examples of awareness growth vs. logical updates
(Thanks to Lukas Finnveden for discussion that prompted these examples, and for authoring examples #3-#6 verbatim.)
A key concept in the theory of open-minded updatelessness (OMU) is “awareness growth”, i.e., conceiving of hypotheses you hadn’t considered before. It’s helpful to gesture at “discovering crucial considerations” (CCs) as examples of awareness growth. But not all CC discoveries are awareness growth. And we might think we don’t need the OMU idea at all if awareness growth is just logical updating, i.e., cases where you already had nonzero credence in some hypothesis and changed that credence purely by thinking more. What’s the difference? Here are some examples.
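Before the examples, here’s a minimal schematic sketch of the distinction. This isn’t from the OMU literature; the hypotheses, numbers, and function names (`logical_update`, `awareness_growth`) are placeholders I’m inventing for illustration. The point is just that a logical update moves credence around within a fixed hypothesis space, whereas awareness growth changes which hypotheses are in the space at all.

```python
# Schematic sketch with made-up hypotheses and numbers. Credences are a
# dict from hypotheses to probabilities summing to 1.

def logical_update(credences, source, target, mass):
    """Move credence between hypotheses the agent was already aware of,
    e.g. after working out an argument that bears on them. The hypothesis
    space itself doesn't change."""
    new = dict(credences)
    new[source] -= mass
    new[target] += mass
    return new

def awareness_growth(credences, new_hypothesis, new_mass):
    """Add a hypothesis that wasn't in the space at all. How much credence
    it gets, and whose credence shrinks to make room, is exactly the
    question OMU-style proposals have to answer; the proportional
    rescaling here is just one naive choice."""
    rescaled = {h: p * (1 - new_mass) for h, p in credences.items()}
    rescaled[new_hypothesis] = new_mass
    return rescaled

credences = {"hypothesis A": 0.6, "hypothesis B": 0.4}

# Logical update: same hypotheses, different numbers.
credences = logical_update(credences, "hypothesis A", "hypothesis B", 0.2)

# Awareness growth: a hypothesis the agent hadn't conceived of appears.
credences = awareness_growth(credences, "hypothesis C (newly conceived)", 0.1)

print(credences)  # three hypotheses now; roughly A: 0.36, B: 0.54, C: 0.1
```

The examples below are about which of these two moves better models a given change in your credences.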
Example 1: I realize it’s possible that I’m in a simulation.
Awareness growth. Because before realizing that, I simply hadn’t conceived of “I’m in a sim” as a way the world could be. (I don’t know how/why I could/should model myself-before-I-read-Bostrom as having had nonzero credence in “I’m in a sim”.)
Example 2: I realize that the simulation argument implies (under such-and-such assumptions) that I should have high credence in “I’m in a simulation”.
Logical update. Because I’m not discovering a new way the world could be; I’m just learning of a logical implication governing my credences over ways the world could be.
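To make “a logical implication governing my credences” concrete, here is a toy, made-up-numbers version of the kind of calculation involved (much cruder than Bostrom’s actual argument, and not from the post). No new hypothesis is introduced; the argument just says what P(I’m in a sim) my existing credences commit me to.

```python
# Toy version of the reasoning, with invented numbers. Suppose that if
# simulations are run at all, simulated observers outnumber non-simulated
# ones 1000 to 1 (an assumption for illustration only).
sims_per_real_observer = 1000

# Credence I already had in "many ancestor simulations get run":
p_many_sims_are_run = 0.2

# P(I'm a sim | many sims are run), by indifference among observers:
p_sim_given_sims_run = sims_per_real_observer / (sims_per_real_observer + 1)

# Law of total probability over hypotheses I was already aware of:
p_sim = (p_sim_given_sims_run * p_many_sims_are_run
         + 0.0 * (1 - p_many_sims_are_run))

print(round(p_sim, 3))  # ~0.2 -- the "update" is just working out an implication
```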
Example 3: Someone has conceived of the idea that they might be in a simulation (“like a video game!”) but hasn’t considered that simulations might be run for scientific investigations of the past. Considering this new possibility updates their view on how likely it is that they’re in a simulation, and also makes them think that more of the probability mass goes to the specific type of “investigation of the past” sim.
The first sentence is awareness growth “by refinement” (Steele and Stefansson 2021, Sec. 3.3.2): the more specific hypotheses you’re now aware of were covered by a coarser hypothesis you were previously aware of. The second sentence is a logical update.
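Here is a sketch of the refinement move with invented numbers (this isn’t Steele and Stefansson’s formalism, just an illustration): the newly conceived sub-hypotheses partition a hypothesis the agent already had, so they can simply inherit its probability mass; the subsequent shift in the numbers is the logical-update part.

```python
# Refinement sketch with made-up numbers.
coarse = {"I'm in a simulation": 0.05, "I'm not in a simulation": 0.95}

# Awareness growth by refinement: split the coarse "sim" hypothesis into
# sub-hypotheses. The 50/50 split here is arbitrary; the totals don't change.
refined = {
    "video-game-style sim": 0.025,
    "investigation-of-the-past sim": 0.025,
    "I'm not in a simulation": 0.95,
}

# Logical update: thinking through the new consideration shifts mass toward
# "sim" overall, and toward the past-investigation type in particular,
# without any further change in which hypotheses the agent is aware of.
updated = {
    "video-game-style sim": 0.03,
    "investigation-of-the-past sim": 0.12,
    "I'm not in a simulation": 0.85,
}

for name, cred in [("refined", refined), ("updated", updated)]:
    assert abs(sum(cred.values()) - 1.0) < 1e-9, name
```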
Example 4: I see a rainbow-colored car. I had never previously explicitly pictured a rainbow-colored car, or thought about those words.
Pretty straightforward awareness growth.
Example 5: I’m forecasting the likelihood of regime change in a country. I look up the base rates. I forecast based on them and some inside-view adjustments. Later on, the news comes in that there was a regime change due to chaos from a natural disaster. I hadn’t conceptualized that possible contribution to regime change! But fortunately I had implicitly accounted for it via base rates that (it turns out) included some examples of natural disasters.
Also pretty straightforward awareness growth. It’s just not the decision-relevant kind, to the extent that you already implicitly priced in “natural disasters lead to regime change” via the base rate. (I think in practice it will very often be ambiguous how precisely we’re pricing in hypotheses we’re unaware of, even if one might argue we’re always kinda sorta pricing them in if you squint.)
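An invented-numbers illustration of how the unconceived hypothesis can already be “priced in”: it shows up as part of the historical base rate even though the forecaster never separated it out, so becoming aware of it needn’t move the forecast at all.

```python
# Made-up reference class: 200 country-years containing 10 regime changes.
regime_changes = 10
country_years = 200
base_rate = regime_changes / country_years            # 0.05

# Unbeknownst to the forecaster, 2 of those 10 regime changes were
# triggered by natural disasters.
disaster_driven = 2
disaster_contribution = disaster_driven / country_years   # 0.01

# Becoming aware of "natural disaster -> regime change" refines how the
# 0.05 decomposes, but doesn't by itself change the total forecast:
other_contribution = (regime_changes - disaster_driven) / country_years
assert abs(base_rate - (other_contribution + disaster_contribution)) < 1e-12

print(base_rate)  # 0.05 either way; the new hypothesis was already priced in
```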
Example 6: Someone learns Newtonian physics.
A mix of awareness growth and a logical update. E.g., the exact law “F = ma” wouldn’t have occurred to most people before learning any Newtonian physics; that’s awareness growth. But some people might have both (a) conceived of an object that is pushed in a vacuum and never slows down, and (b) not assigned very high P(object never slows down | object is pushed in a vacuum) before learning this was a law; that’s a logical update.
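An invented-numbers sketch of the logical-update half: the hypothesis “the object never slows down” was already in the person’s hypothesis space, so learning the law only moves credence onto it rather than expanding the space.

```python
# Credences over what happens to an object pushed once in a vacuum,
# before learning any Newtonian physics (numbers invented for illustration):
before = {
    "it gradually slows down and stops": 0.7,
    "it keeps moving forever at constant speed": 0.1,
    "something else": 0.2,
}

# After learning the law: no new hypotheses appear, only the numbers move.
after = {
    "it gradually slows down and stops": 0.02,
    "it keeps moving forever at constant speed": 0.95,
    "something else": 0.03,
}

# Same hypothesis space before and after: a logical update, not awareness growth.
assert set(before) == set(after)
```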