Introduction To The Infra-Bayesianism Sequence
TLDR: Infra-Bayesianism is a new approach to epistemology / decision theory / reinforcement learning theory, which builds on “imprecise probability” to solve the problem of prior misspecification / grain-of-truth / nonrealizability which plagues Bayesianism and Bayesian reinforcement learning. Infra-Bayesianism also naturally leads to an implementation of UDT, and (more speculatively at this stage) has applications to multi-agent theory, embedded agency and reflection. This post is the first in a sequence which lays down the foundation of the approach.
Diffractor and Vanessa proudly present: The thing we’ve been working on for the past five months. I initially decided that Vanessa’s scattered posts about incomplete models were interesting, and could benefit from being written up in a short centralized post. But as we dug into the mathematical details, it turned out it didn’t really work, and then Vanessa ran across the true mathematical thing (which had previous ideas as special cases) and scope creep happened.
This now looks like a new, large, and unusually tractable vein of research. Accordingly, this sequence supersedes all previous posts about incomplete models, and by now we’ve managed to get quite a few interesting results, and have ideas for several new research directions.
Diffractor typed everything up and fleshed out the proof sketches; Vanessa originated almost all of the ideas and theorems. It was a true joint effort; this sequence would not exist if either of us were absent. Alex Mennen provided feedback on drafts to make it much more comprehensible than it would otherwise be, and TurnTrout and John Maxwell also helped a bit in editing.
Be aware this sequence of posts has the math textbook issue where it requires loading a tower of novel concepts that build on each other into your head, and cannot be read in a single sitting. We will be doing a group readthrough on MIRIxDiscord where we can answer questions and hopefully get collaborators, PM me to get a link.
Learning theory traditionally deals with two kinds of settings: "realizable" and "agnostic" or "non-realizable". In realizable settings, we assume that the environment can be described perfectly by a hypothesis inside our hypothesis space (AIXI is an example of this). We then expect the algorithm to converge to acting as if it already knew the correct hypothesis. In non-realizable settings, we make no such assumption. We then expect the algorithm to converge to the best approximation of the true environment within the available hypothesis space.
As long as the computational complexity of the environment is greater than the computational complexity of the learning algorithm, the algorithm cannot use an easy-to-compute hypothesis that would describe the environment perfectly, so we are in the nonrealizable setting. When we discuss AGI, this is necessarily the case, since the environment is the entire world: a world that, in particular, contains the agent itself and can support other agents that are even more complex, much like how halting oracles (which you need to run Solomonoff Induction) are nowhere in the hypotheses which Solomonoff considers. Therefore, the realizable setting is usually only a toy model. So, instead of seeking guarantees of good behavior assuming the environment is easy to compute, we’d like to get good behavior simply assuming that the environment has some easy-to-compute properties that can be exploited.
For offline and online learning there are classical results in the non-realizable setting; in particular, VC theory naturally extends to the non-realizable setting. However, for reinforcement learning there are few analogous results. Even for passive Bayesian inference, the best non-realizable result found in our literature search is Shalizi's, which relies on ergodicity assumptions about the true environment. Since reinforcement learning is the relevant setting for AGI and alignment theory, this poses a problem.
Logical inductors operate in the nonrealizable setting, and the general reformulation of them in Forecasting Using Incomplete Models is of interest for broader lessons applicable to acting in an unknown environment. In said paper, reality can be drawn from any point in the space of probability distributions over infinite sequences of observations. Almost all of the points in this space aren't computable, and because of that, we shouldn't expect convergence to the true environment, as occurs in the realizable setting where the true environment lies in your hypothesis space.
However, even if we can't hope to learn the true environment, we can at least hope to learn some property of the true environment, like "every other bit is a 0", and have our predictions reflect that if it holds. A hypothesis in this setting is a closed convex subset of this space, which can be thought of as "I don't know what the true environment is, but it lies within this set". The result obtained in the above-linked paper was: if we fix a countable family of properties that reality may satisfy, and define the inductor based on them, then for all of those which reality fulfills, the predictions of the inductor converge to that closed convex set and so fulfill the property in the limit.
What About Environments?
However, this just involves sequence prediction. Ideally, we'd want some space that corresponds to environments that you can interact with, instead of an environment that just outputs bits. And then, given a suitable set of environments... Well, we don't have a fixed environment to play against. The environment could be anything, even a worst-case one within the set. We have Knightian uncertainty over our set of environments: it is not a probability distribution over environments. So, we might as well go with the maximin policy.
The maximin policy is argmax_π min_{e∈E} E_{π⋅e}[U], where E is the set of environments, π⋅e is the distribution over histories produced by policy π interacting with environment e, and U is just some utility function.
When we refer to “Murphy”, this is referring to whatever force is picking the worst-case environment to be interacting with. Of course, if you aren’t playing against an adversary, you’ll do better than the worst-case utility that you’re guaranteed. Any provable guarantees come in the form of establishing lower bounds on expected utility if a policy is selected.
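As a concrete toy sketch (all policy names, environment names, and utilities here are hypothetical illustrations, not anything from the formalism), maximin policy selection over a finite hypothesis set of environments looks like:

```python
# Maximin policy selection: pick the policy whose worst-case expected
# utility over the hypothesis set of environments is largest.
# Toy numbers: utilities[policy][environment] = expected utility.
utilities = {
    "policy_A": {"env_1": 0.9, "env_2": 0.2},
    "policy_B": {"env_1": 0.6, "env_2": 0.5},
}

def worst_case(policy):
    # "Murphy" picks the environment minimizing our expected utility.
    return min(utilities[policy].values())

# policy_B is maximin-optimal: its worst case (0.5) beats policy_A's (0.2),
# even though policy_A does better in env_1.
best = max(utilities, key=worst_case)
```

Note that the guarantee this buys is exactly a lower bound: whichever environment in the set turns out to be real, `policy_B` gets at least 0.5 expected utility.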
The problem of generating a suitable space of environments was solved in Reinforcement Learning With Imperceptible Rewards. If two environments are indistinguishable by any policy, they are identified; a mixture of environments corresponds to picking one of the component environments with the appropriate probability at the start of time; and there was a notion of update.
However, this isn't good enough. We could find no good update rule for a set of environments, so we had to go further.
Which desiderata should be fulfilled for maximin policy selection over a set of environments (actually, we'll have to generalize further than this) to work successfully? We'll start with three desiderata.
Desideratum 1: There should be a sensible notion of what it means to update a set of environments or a set of distributions, which should also give us dynamic consistency. Let's say we've got two policies, π1 and π2, which are identical except that they differ after history h. If, after updating on history h, the continuation of π1 looks better than the continuation of π2, then it had better be the case that, viewed from the start, π1 outperforms π2.
Desideratum 2: Our notion of a hypothesis (set of environments) in this setting should collapse “secretly equivalent” sets, such that any two distinct hypotheses behave differently in some relevant aspect. This will require formalizing what it means for two sets to be “meaningfully different”, finding a canonical form for an equivalence class of sets that “behave the same in all relevant ways”, and then proving some theorem that says we got everything.
Desideratum 3: We should be able to formalize the “Nirvana trick” (elaborated below) and cram any UDT problem where the environment cares about what you would do, into this setting. The problem is that we’re just dealing with sets of environments which only depend on what you do, not what your policy is, which hampers our ability to capture policy-dependent problems in this framework. However, since Murphy looks at your policy and then picks which environment you’re in, there is an acausal channel available for the choice of policy to influence which environment you end up in.
The "Nirvana trick" is as follows. Consider a policy-dependent environment, a function of type Π×(A×O)^{<ω}×A→ΔO (i.e., the probability distribution over the next observation depends on your policy, the history so far, and the action you selected). We can encode a policy-dependent environment as a set of policy-independent environments that don't care about your policy, by hard-coding every possible deterministic policy into the policy slot, making a family of functions of type (A×O)^{<ω}×A→ΔO, which is the type of policy-independent environments. It's similar to taking a function f(x, y) and plugging in every possible value of x to get a family of functions that only depend on y.
Also, we will impose a rule that, if your action ever violates what the hard-coded policy predicts you do, you attain Nirvana (a state of high or infinite reward). Then, Murphy, when given this set of environments, will go “it’d be bad if they got high or infinite reward, thus I need to pick an environment where the hard-coded policy matches their actual policy”. When playing against Murphy, you’ll act like you’re selecting a policy for an environment that does pay attention to what policy you pick. As-stated, this doesn’t quite work, but it can be repaired.
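A minimal sketch of the trick on a one-shot, Newcomb-flavored problem (the payouts and names are hypothetical illustrations; in a one-shot problem the policy is just the action). One policy-independent environment is built per hard-coded prediction, with Nirvana on any deviation, and Murphy's minimization then forces the prediction to match the actual action:

```python
import math

ACTIONS = ["one_box", "two_box"]

# A policy-dependent payout (toy Newcomb-like numbers): reward depends on
# both your actual action and what the predictor predicted you'd do.
def payout(action, predicted):
    if predicted == "one_box":
        return 1.0 if action == "one_box" else 1.001
    else:
        return 0.0 if action == "one_box" else 0.001

NIRVANA = math.inf

# Nirvana trick: one policy-independent environment per hard-coded
# prediction. Deviating from the hard-coded prediction yields Nirvana.
def env_reward(predicted, action):
    if action != predicted:
        return NIRVANA
    return payout(action, predicted)

def murphy_value(action):
    # Murphy avoids giving us infinite reward, so only the environment whose
    # hard-coded prediction matches the actual action survives the min.
    return min(env_reward(p, action) for p in ACTIONS)

# one_box scores 1.0, two_box scores 0.001: maximin reproduces the
# policy-selection (UDT-style) answer to the policy-dependent problem.
best = max(ACTIONS, key=murphy_value)
```

The point of the sketch: the min over the hard-coded family acts as the acausal channel from your choice to the environment you face.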
There are two options. One is making Nirvana count as infinite reward. We will advance this to a point where we can capture any UDT/policy-selection problem, at the cost of some mathematical ugliness. The other option is making Nirvana count as 1 reward forever afterward, which makes things more elegant and is much more closely tied to learning theory, but that comes at the cost of only capturing a smaller (but still fairly broad) class of decision-theory problems. We will defer developing that avenue further until a later post.
A Digression on Deterministic Policies
We'll be using deterministic policies throughout. The reason for using deterministic policies instead of probabilistic policies (despite the latter being a larger class) is that the Nirvana trick (with infinite reward) doesn't work with probabilistic policies. Also, probabilistic policies don't interact well with embeddedness, because they implicitly assume that you have a source of random bits that the rest of the environment can never interact with (except via your induced action) or observe.
Deterministic policies can emulate probabilistic policies by viewing probabilistic choice as deterministically choosing a finite bitstring to enter into a random number generator (RNG) in the environment, and then you get some bits back and act accordingly.
However, we aren't assuming that the RNG is a good one. It could be insecure or biased or nonexistent. Thus, we can model cases like Death In Damascus or Absent-Minded Driver where you left your trusty coin at home and don't trust yourself to randomize effectively. Or a nanobot that's too small to have a high-bitrate RNG in it, so it uses a fast insecure PRNG (pseudorandom number generator). Or game theory against a mindreader that can't see your RNG, just the probability distribution over actions you're using the RNG to select from, like an ideal CDT opponent. It can also handle cases where plugging certain numbers into your RNG chip causes lots of heat to be released, or maybe the RNG is biased towards outputting 0′s in strong magnetic fields. Assuming you have a source of true randomness that the environment can't read isn't general enough!
Sets of probability distributions or environments aren't enough; we need to add in some extra data. This can be best motivated by thinking about how updates should work in order to get dynamic consistency.
Throughout, we’ll be using a two-step view of updating, where first, we chop down the measures accordingly (the “raw update”), and then we renormalize back up to 1.
So, let's say we have a set of two probability distributions, μ1 and μ2. We have Knightian uncertainty within this set: we genuinely don't know which one will be selected; it may even be adversarial. μ1 says observation o has 0.5 probability, μ2 says observation o has 0.01 probability. And then you see observation o! The wrong way to update would be to go "well, both probability distributions are consistent with observed data, I guess I'll update them individually and resume being completely uncertain about which one I'm in"; you don't want to ignore that one of them assigns 50x higher probability to the thing you just saw.
However, neglecting renormalization, we can do the "raw update" to each of them individually, and get m1 and m2 (finite measures, not probability distributions), where m1 has 0.5 measure and m2 has 0.01 measure.
Ok, so instead of a set of probability distributions, since that's insufficient for updates, let's consider a set of finite measures m instead. Each individual measure in that set can be viewed as λμ, where μ is a probability distribution and λ ≥ 0 is a scaling term. Note that λ is not uniform across your set; it varies depending on which point you're looking at.
However, this still isn't enough. Let's look at a toy example for how to design updating to get dynamic consistency. We'll see we need to add one more piece of data. Consider two environments where a fair coin is flipped, you see it and then say "heads" or "tails", and then you get some reward. The COPY Environment gives you 0 reward if you say something different than what the coin shows, and 1 reward if you match it. The REVERSE HEADS Environment always gives you 0.5 reward if the coin comes up tails, but if it comes up heads, saying "tails" gets you 1 reward and "heads" gets you 0 reward. We have Knightian uncertainty between the two environments.
For finding the optimal policy, we can observe that saying “tails” when the coin is tails helps out in COPY and doesn’t harm you in REVERSE HEADS, so that’s a component of an optimal policy.
Saying "tails" no matter what the coin shows means you get 0.5 utility on COPY, and 0.75 utility on REVERSE HEADS. Saying "tails" when the coin is tails and "heads" when the coin is heads means you get 1 utility on COPY and 0.25 utility on REVERSE HEADS. Saying "tails" no matter what has a better worst-case value (0.5 vs. 0.25), so it's the optimal maximin policy.
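The maximin computation above can be checked by brute force over the four deterministic policies; a sketch (the function and variable names are illustrative):

```python
import itertools

COIN = ["heads", "tails"]

def copy_reward(coin, say):
    # COPY: 1 reward for matching the coin, 0 otherwise.
    return 1.0 if say == coin else 0.0

def reverse_heads_reward(coin, say):
    # REVERSE HEADS: 0.5 on tails regardless; on heads, 1 for "tails".
    if coin == "tails":
        return 0.5
    return 1.0 if say == "tails" else 0.0

def expected_utility(policy, reward):
    # Fair coin: average the reward over both coin outcomes.
    return sum(0.5 * reward(c, policy[c]) for c in COIN)

# All four deterministic policies: coin outcome -> what you say.
policies = [dict(zip(COIN, says))
            for says in itertools.product(["heads", "tails"], repeat=2)]
envs = [copy_reward, reverse_heads_reward]

def worst_case(policy):
    return min(expected_utility(policy, env) for env in envs)

# "Always say tails" gets 0.5 on COPY and 0.75 on REVERSE HEADS, so its
# worst case (0.5) beats matching the coin (worst case 0.25).
best = max(policies, key=worst_case)
```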
Now, if we see the coin come up heads, how should we update? The wrong way to do it would be to go “well, both environments are equally likely to give this observation, so I’ve got Knightian uncertainty re: whether saying heads or tails gives me 1 or 0 utility, both options look equally good”. This is because, according to past-you, regardless of what you did upon seeing the coin come up “tails”, the maximin expected values of saying “heads” when the coin comes up heads, and saying “tails” when the coin comes up heads, are unequal. Past-you is yelling at you from the sidelines not to just shrug and view the two options as equally good.
Well, let's say you already know that you would say "tails" when the coin comes up tails and are trying to figure out what to do now that the coin came up heads. The proper way to reason through it is going "I have Knightian uncertainty between COPY, which has 0.5 expected utility assured off-history since I say "tails" on tails, and REVERSE HEADS, which has 0.25 expected utility assured off-history. Saying "heads" now that I see the coin on heads would get me 1 expected utility in COPY and 0.25 utility in REVERSE HEADS; saying "tails" would get me 0.5 utility in COPY and 0.75 utility in REVERSE HEADS; I get higher worst-case value by saying "tails"." And then you agree with your past self re: how good the various decisions are.
Huh, the proper way of doing this update to get dynamic consistency requires keeping track of the fragment of expected utility we get off-history.
Similarly, if you messed up and precommitted to saying “heads” when the coin comes up tails (a bad move), we can run through a similar analysis and show that keeping track of the expected utility off-history leads you to take the action that past-you would advise, after seeing the coin come up heads.
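The dynamically consistent update can be sketched directly: each hypothesis carries a term recording the expected utility already assured on the off-history branch, and the post-update maximin uses the total. (Names are illustrative; the numbers follow the toy example, with "say tails on tails" fixed.)

```python
# Off-history fragments after seeing "heads": the tails branch (probability
# 0.5) pays 1 in COPY and 0.5 in REVERSE HEADS under "say tails on tails".
b = {"COPY": 0.5, "REVERSE_HEADS": 0.25}

def on_history(env, say):
    # Expected-utility contribution of the heads branch (probability 0.5).
    if env == "COPY":
        return 0.5 * (1.0 if say == "heads" else 0.0)
    else:  # REVERSE HEADS pays 1 for saying "tails" on heads
        return 0.5 * (1.0 if say == "tails" else 0.0)

def maximin_value(say):
    # Value past-you would assign: off-history fragment + heads branch.
    return min(b[env] + on_history(env, say) for env in b)

# "heads" totals (1.0, 0.25); "tails" totals (0.5, 0.75). Worst cases are
# 0.25 vs. 0.5, so "tails" wins, agreeing with the ex-ante maximin policy.
best = max(["heads", "tails"], key=maximin_value)
```

Dropping the `b` terms would make the two actions look equally good, which is exactly the dynamic-consistency failure described above.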
So, with the need to keep track of that fragment of expected utility off-history to get dynamic consistency, it isn't enough to deal with finite measures m; that still isn't keeping track of the information we need. What we need is a pair (m, b), where m is a finite measure and b is a number ≥ 0. That b term keeps track of the expected value off-history so we make the right decision after updating. (We're glossing over the distinction between probability distributions and environments here, but it's inessential)
We will call such a pair an "affine measure", or "a-measure" for short. The reason for this terminology is that a measure can be thought of as a linear function from the space of continuous functions to ℝ. But then there's this b term stuck on that acts as utility, and a linear function plus a constant is an affine function. So, that's an a-measure: a pair (m, b) of a finite measure m and a term b ≥ 0.
But wait, we can go even further! Let's say our utility function of interest is bounded. Then we can do a scale-and-shift until it's in [0,1].
Since our utility function is bounded in [0,1]… what would happen if you let in measures with negative parts, but only if they're paired with a sufficiently large b term? Such a thing is called an sa-measure, for signed affine measure. It's a pair of a finite signed measure and a b term that's as-large-or-larger than the amount of negative measure present. No matter your utility function, even if it assigns 0 reward to outcomes with positive measure and 1 reward to outcomes with negative measure, you're still assured nonnegative expected value because of that b term. It turns out we actually do need to expand in this direction to keep track of equivalence between sets of a-measures, get a good tie-in with convex analysis because signed measures are dual to continuous functions, and have elegant formulations of concepts like minimal points and the upper completion.
Negative measures may be a bit odd, but as we’ll eventually see, we can ignore them and they only show up in intermediate steps, not final results, much like negative probabilities in quantum mechanics. And if negative measures ever become relevant for an application, it’s effortless to include them.
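The nonnegativity guarantee for sa-measures can be checked on a toy discrete case (the outcome names and masses are made up): with b at least the total negative mass, the value ∫f dm + b stays nonnegative for every utility f valued in [0,1].

```python
import itertools

# A toy sa-measure on three outcomes: a signed measure m plus a b term at
# least as large as the total negative mass (here 0.3).
m = {"x": 0.4, "y": -0.3, "z": 0.2}
b = 0.3

def value(f):
    # "Expected value" of utility f under the sa-measure: ∫ f dm + b.
    return sum(m[o] * f[o] for o in m) + b

# The value is affine in f, so its minimum over all f with values in [0,1]
# is attained at a vertex of the cube {0,1}^3; checking vertices suffices.
# The adversarial f puts utility 1 exactly on the negative-mass outcome,
# and even then the b term brings the value back up to 0.
worst = min(value(dict(zip(m, fs)))
            for fs in itertools.product([0.0, 1.0], repeat=3))
```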
Belief Function Motivation
Also, we'll have to drop the framework we set up at the beginning where we're considering sets of environments, because working with sets of environments has redundant information. As an example, consider two environments where you pick one of two actions, and get one of two outcomes. In environment e0, regardless of action, you get outcome 0. In environment e1, regardless of action, you get outcome 1. Then, we should be able to freely add an environment e2, where action 0 implies outcome 0, and where action 1 implies outcome 1. Why?
Well, if your policy is to take action 0, e2 and e0 behave identically. And if your policy is to take action 1, e2 and e1 behave identically. So, adding an environment like this doesn't affect anything, because it's a "chameleon environment" that will perfectly mimic some preexisting environment regardless of which policy you select. And if you consider the function mapping an action to the set of possible probability distributions over outcomes, adding e2 didn't change that at all. Put another way: if it's impossible to distinguish in any way whether an environment was added to a set of environments, because no matter what you do it mimics a preexisting environment, we might as well add it, and seek some alternate formulation instead of "set of environments" that doesn't have the unobservable degrees of freedom in it.
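This redundancy is easy to exhibit concretely; a sketch with deterministic environments (the names e0, e1, e2 and the dict encoding are illustrative):

```python
# Three deterministic environments, encoded as action -> outcome maps.
e0 = {0: 0, 1: 0}   # always outcome 0
e1 = {0: 1, 1: 1}   # always outcome 1
e2 = {0: 0, 1: 1}   # "chameleon": mimics e0 under action 0, e1 under action 1

def belief_function(envs):
    # Map each action to the set of outcomes the hypothesis deems possible.
    return {a: frozenset(env[a] for env in envs) for a in (0, 1)}

without_e2 = belief_function([e0, e1])
with_e2 = belief_function([e0, e1, e2])
# The belief function is identical with or without e2: the chameleon
# environment is observationally redundant, which is why the belief
# function, not the set of environments, is the fundamental object.
```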
To eliminate this redundancy, the true thing we should be looking at isn’t a set of environments, but the “belief function” from policies to sets of probability distributions over histories. This is the function produced by having a policy interact with your set of environments and plotting the probability distributions you could get. Given certain conditions on a belief function, it is possible to recover a set of environments from it, but belief functions are more fundamental. We’ll provide tools for taking a wide range of belief functions and turning them into sets of environments, if it is desired.
Well, actually, from our previous discussion, sets of probability distributions are insufficient; we need a function from policies to sets of sa-measures. But that's material for later.
So, our fundamental mathematical object that we’re studying to get a good link to decision theory is not sets of probability distributions, but sets of sa-measures. And instead of sets of environments, we have functions from policies to sets of sa-measures over histories. This is because probability distributions alone aren’t flexible enough for the sort of updating we need to get dynamic consistency, and in addition to this issue, sets of environments have the problem where adding a new environment to your set can be undetectable in any way.
In the next post, we build up the basic mathematical details of the setting, until we get to a duality theorem that reveals a tight parallel between sets of sa-measures fulfilling certain special properties, and probability distributions, allowing us to take the first steps towards building up a version of probability theory fit for dealing with nonrealizability. There are analogues of expectation values, updates, renormalizing back to 1, priors, Bayes’ Theorem, Markov kernels, and more. We use the “infra” prefix to refer to this setting. An infradistribution is the analogue of a probability distribution. An infrakernel is the analogue of a Markov kernel. And so on.
The post after that consists of extensive work on belief functions and the Nirvana trick to get the decision-theory tie-ins, such as UDT behavior while still having an update rule, and the update rule is dynamically consistent. Other components of that section include being able to specify your entire belief function with only part of its data, and developing the concept of Causal, Pseudocausal, and Acausal hypotheses. We show that you can encode almost any belief function as an Acausal hypothesis, and you can translate Pseudocausal and Acausal hypotheses to Causal ones by adding Nirvana appropriately (kinda). And Causal hypotheses correspond to actual sets of environments (kinda). Further, we can mix belief functions to make a prior, and there’s an analogue of Bayes for updating a mix of belief functions. We cap it off by showing that the starting concepts of learning theory work appropriately, and show our setting’s version of the Complete Class Theorem.
Later posts (not written yet) will be about the "1 reward forever" variant of Nirvana and InfraPOMDPs, developing inframeasure theory more, applications to various areas of alignment research, the internal logic which infradistributions are models of, unrealizable bandits, game theory, attempting to apply this to other areas of alignment research, and… look, we've got a lot of areas to work on, alright?
If you've got the relevant math skills, as previously mentioned, you should PM me to get a link to the MIRIxDiscord server and participate in the group readthrough; you're more likely than usual to be able to contribute to advancing research further, as there's a lot of shovel-ready work available.
Links to Further Posts:
Infra-Bayesian Physicalism: a formal theory of naturalized induction
This post is still endorsed, it still feels like a continually fruitful line of research. A notable aspect of it is that, as time goes on, I keep finding more connections and crisper ways of viewing things which means that for many of the further linked posts about inframeasure theory, I think I could explain them from scratch better than the existing work does. One striking example is that the “Nirvana trick” stated in this intro (to encode nonstandard decision-theory problems), has transitioned from “weird hack that happens to work” to “pops straight out when you make all the math as elegant as possible”. Accordingly, I’m working on a “living textbook” (like a textbook, but continually being updated with whatever cool new things we find) where I try to explain everything from scratch in the crispest way possible, to quickly catch up on the frontier of what we’re working on. That’s my current project.
I still do think that this is a large and tractable vein of research to work on, and the conclusion hasn’t changed much.
I’m feeling very excited about this agenda. Is there currently a publicly-viewable version of the living textbook? Or any more formal writeup which I can include in my curriculum? (If not I’ll include this post, but I expect many people would appreciate a more polished writeup.)
If you’re looking for curriculum materials, I believe that the most useful reference would probably be my “Infra-exercises”, a sequence of posts containing all the math exercises you need to reinvent a good chunk of the theory yourself. Basically, it’s the textbook’s exercise section, and working through interesting math problems and proofs on one’s own has a much better learning feedback loop and retention of material than slogging through the old posts. The exercises are short on motivation and philosophy compared to the posts, though, much like how a functional analysis textbook takes for granted that you want to learn functional analysis and doesn’t bother motivating it.
The primary problem is that the exercises aren’t particularly calibrated in terms of difficulty, and in order for me to get useful feedback, someone has to actually work through all of them, so feedback has been a bit sparse. So I’m stuck in a situation where I keep having to link everyone to the infra-exercises over and over and it’d be really good to just get them out and publicly available, but if they’re as important as I think, then the best move is something like “release them one at a time and have a bunch of people work through them as a group” like the fixpoint exercises, instead of “just dump them all as public documents”.
I'll ask around about speeding up the publication of the exercises and see what can be done there.
I’d strongly endorse linking this introduction even if the exercises are linked as well, because this introduction serves as the table of contents to all the other applicable posts.
I’m confused about the Nirvana trick then. (Maybe here’s not the best place, but oh well...) Shouldn’t it break the instant you do anything with your Knightian uncertainty other than taking the worst-case?
Notice that some non-worst-case decision rules are reducible to the worst-case decision rule.
Well, taking worst-case uncertainty is what infradistributions do. Did you have anything in mind that can be done with Knightian uncertainty besides taking the worst-case (or best-case)?
And if you were dealing with best-case uncertainty instead, then the corresponding analogue would be assuming that you go to hell if you’re mispredicted (and then, since best-case things happen to you, the predictor must accurately predict you).
What if you assumed the stuff you had the hypothesis about was independent of the stuff you have Knightian uncertainty about (until proven otherwise)?
E.g. if you’re making hypotheses about a multi-armed bandit and the world also contains a meteor that might smash through your ceiling and kill you at any time, you might want to just say “okay, ignore the meteor, pretend my utility has a term for gambling wins that doesn’t depend on the meteor at all.”
The reason I want to consider stuff more like this is because I don’t like having to evaluate my utility function over all possibilities to do either an argmax or an argmin—I want to be lazy.
The weird thing about this is now whether this counts as argmax or argmin (or something else) depends on what my utility function looks like when I do include the meteor. If getting hit by the meteor only makes things worse (though potentially the meteor can still depend on which arm of the bandit I pull!) then ignoring it is like being optimistic. If it only makes things better (like maybe the world I'm ignoring isn't a meteor, it's a big space full of other games I could be playing) then ignoring it is like being pessimistic.
Something analogous to what you are suggesting occurs. Specifically, let’s say you assign 95% probability to the bandit game behaving as normal, and 5% to “oh no, anything could happen, including the meteor”. As it turns out, this behaves similarly to the ordinary bandit game being guaranteed, as the “maybe meteor” hypothesis assigns all your possible actions a score of “you’re dead” so it drops out of consideration.
The important aspect which a hypothesis needs, in order for you to ignore it, is that no matter what you do you get the same outcome, whether it be good or bad. A “meteor of bliss hits the earth and everything is awesome forever” hypothesis would also drop out of consideration because it doesn’t really matter what you do in that scenario.
To be a wee bit more mathy, probabilistic mix of inframeasures works like this. We’ve got a probability distribution ζ∈ΔN, and a bunch of hypotheses ψi∈□X, things that take functions as input, and return expectation values. So, your prior, your probabilistic mixture of hypotheses according to your probability distribution, would be the function

(Eζ[ψ])(f) := ∑i ζ(i)·ψi(f)
It gets very slightly more complicated when you’re dealing with environments, instead of static probability distributions, but it’s basically the same thing. And so, if you vary your actions/vary your choice of function f, and one of the hypotheses ψi is assigning all these functions/choices of actions the same expectation value, then it can be ignored completely when you’re trying to figure out the best function/choice of actions to plug in.
So, hypotheses that are like “you’re doomed no matter what you do” drop out of consideration, an infra-Bayes agent will always focus on the remaining hypotheses that say that what it does matters.
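To make the drop-out effect concrete, here is a minimal toy sketch (not from the sequence; all names and numbers are made up for illustration). It models an inframeasure over a finite outcome set as the worst-case expectation over a finite credal set of distributions, mixes hypotheses as in the formula above, and shows that a “doomed no matter what” hypothesis shifts every action’s score equally, so it can’t change the argmax.

```python
def inframeasure(credal_set):
    """Return a functional psi: f -> worst-case expectation of f over the credal set."""
    def psi(f):
        return min(sum(p[x] * f(x) for x in p) for p in credal_set)
    return psi

def mixture(zeta, psis):
    """Probabilistic mixture of hypotheses: (sum_i zeta_i psi_i)(f) = sum_i zeta_i * psi_i(f)."""
    def prior(f):
        return sum(z * psi(f) for z, psi in zip(zeta, psis))
    return prior

# Outcomes: which bandit arm pays off. Hypothesis 1: arm "a" pays
# with probability somewhere in [0.6, 0.8] (Knightian interval).
h1 = inframeasure([{"a": 0.6, "b": 0.4}, {"a": 0.8, "b": 0.2}])

# "Doom" hypothesis: every function of outcomes gets the same value
# (utility 0 no matter what you do).
doom = lambda f: 0.0

# 95% credence in the normal bandit, 5% in doom.
prior = mixture([0.95, 0.05], [h1, doom])

# Utility of betting on an arm: 1 if that arm pays, else 0.
bet = lambda arm: (lambda x: 1.0 if x == arm else 0.0)
values = {arm: prior(bet(arm)) for arm in ("a", "b")}
best = max(values, key=values.get)
# best == "a": the doom hypothesis adds the same constant to every
# action's score, so it drops out of the decision entirely.
```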
The meteor doesn’t have to really flatten things out, there might be some actions that we think remain valuable (e.g. hedonism, saying tearful goodbyes).
And so if we have Knightian uncertainty about the meteor, maximin (as in Vanessa’s link) means we’ll spend a lot of time on tearful goodbyes.
Said actions (or lack thereof) cause a fairly low utility differential compared to the actions in other, non-doomy hypotheses. I also want to draw a critical distinction between “full Knightian uncertainty over meteor presence or absence”, where your analysis is correct, and “ordinary probabilistic uncertainty between a high-Knightian-uncertainty hypothesis and a low-Knightian-uncertainty one that says the meteor almost certainly won’t happen”. In the latter case, the meteor hypothesis will be ignored unless there’s a meteor-inspired modification to what you do that’s also very cheap in the “ordinary uncertainty” world, like calling your parents, because the meteor hypothesis is suppressed in decision-making by its low expected utility differentials, and we’re maximin-ing expected utility.
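The distinction can be illustrated with a toy calculation (hypothetical numbers, chosen purely for illustration): under full Knightian uncertainty about the meteor, maximin picks the tearful goodbyes; under a 95%/5% probabilistic mix of “no meteor” and the fully-Knightian hypothesis, the small utility differential of the goodbyes is swamped and the agent just plays the bandit.

```python
ACTIONS = ("pull_best_arm", "tearful_goodbyes")

# utility[action][world]: goodbyes are slightly better than arm-pulling
# if the meteor hits, much worse if it doesn't.
utility = {
    "pull_best_arm":    {"no_meteor": 1.00, "meteor": 0.00},
    "tearful_goodbyes": {"no_meteor": 0.10, "meteor": 0.05},
}

def full_knightian(action):
    # Full Knightian uncertainty over meteor presence: maximin scores
    # each action by its worst case across the two worlds.
    return min(utility[action].values())

def mixed(action, p_normal=0.95):
    # 95% ordinary credence in the precise "no meteor" hypothesis,
    # 5% in the fully-Knightian "anything could happen" hypothesis
    # (which is itself scored by its worst case).
    return (p_normal * utility[action]["no_meteor"]
            + (1 - p_normal) * min(utility[action].values()))

best_knightian = max(ACTIONS, key=full_knightian)  # "tearful_goodbyes"
best_mixed = max(ACTIONS, key=mixed)               # "pull_best_arm"
```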
Of the agent foundations work from 2020, I think this sequence is my favorite, and I say this without actually understanding it.
The core idea is that Bayesianism is too hard. What we ultimately want is to replace probability distributions over all possible things with simpler rules that don’t have to put a probability on every possible thing. In some ways this is the complement of logical uncertainty: logical uncertainty is about not requiring all your probability distributions to be logically possible, while this is about not having to put probability distributions on everything.
I’ve found this a highly productive metaphor for cognition. We sometimes like to think of the brain as a Bayesian engine, but of necessity the brain can’t be laying down probabilities for every single possible thing; we want a perspective that allows the brain to consider hypotheses that only specify the pattern of some small part of the world, while still retaining some sort of Bayesian seal of approval.
That said, this sequence is tricky to understand and I’m bad at it! I look forward to brave souls helping to digest it for the community at large.
Some of the ways in which this framework still relies on physical impossibilities are things like operations over all possible infra-Bayesian hypotheses, and the invocation of worst-case reasoning that relies on global evaluation. I’m super interested in what’s going to come from pushing those boundaries.
I interviewed Vanessa here in an attempt to make this more digestible: it hopefully acts as context for the sequence, rather than a replacement for reading it.