# A Semitechnical Introductory Dialogue on Solomonoff Induction

(*Originally posted in December 2015: A dialogue between Ashley, a computer scientist who’s never heard of **Solomonoff’s theory of inductive inference**, and Blaine, who thinks it is the best thing since sliced bread.*)

### i. Unbounded analysis

**ASHLEY:** Good evening, Msr. Blaine.

**BLAINE:** Good evening, Msr. Ashley.

**ASHLEY:** I’ve heard there’s this thing called “Solomonoff’s theory of inductive inference”.

**BLAINE:** The rumors have spread, then.

**ASHLEY:** Yeah, so, what the heck is that about?

**BLAINE:** Invented in the 1960s by the mathematician Ray Solomonoff, Solomonoff induction does sequence prediction by using Bayesian updating on a prior composed of a mixture of all computable probability distributions—

**ASHLEY:** Wait. Back up a lot. Before you try to explain what Solomonoff induction *is*, I’d like you to try to tell me what it *does*, or why people study it in the first place. I find that helps me organize my listening. Right now I don’t even know why I should be interested in this.

**BLAINE:** Um, okay. Let me think for a second...

**ASHLEY:** Also, while I can imagine things that “sequence prediction” might mean, I haven’t yet encountered it in a technical context, so you’d better go a bit further back and start more at the beginning. I do know what “computable” means and what a “probability distribution” is, and I remember the formula for Bayes’s Rule although it’s been a while.

**BLAINE:** Okay. So… one way of framing the usual reason why people study this general field in the first place, is that sometimes, by studying certain idealized mathematical questions, we can gain valuable intuitions about epistemology. That’s, uh, the field that studies how to reason about factual questions, how to build a map of reality that reflects the territory—

**ASHLEY:** I have some idea what ‘epistemology’ is, yes. But I think you might need to start even further back, maybe with some sort of concrete example or something.

**BLAINE:** Okay. Um. So one anecdote that I sometimes use to frame the value of computer science to the study of epistemology is Edgar Allan Poe’s argument in 1836 that chess was uncomputable.

**ASHLEY:** That doesn’t sound like a thing that actually happened.

**BLAINE:** I know, but it totally *did* happen and not in a metaphorical sense either! Edgar Allan Poe wrote an essay explaining why no automaton would ever be able to play chess, and he specifically mentioned “Mr. Babbage’s computing engine” as an example.

You see, in the nineteenth century, there was for a time this sensation known as the Mechanical Turk—supposedly a machine, an automaton, that could play chess. At the grandmaster level, no less.

Now today, when we’re accustomed to the idea that it takes a reasonably powerful computer to do that, we can know *immediately* that the Mechanical Turk must have been a fraud and that there must have been a concealed operator inside—a person with dwarfism, as it turned out. *Today* we know that this sort of thing is *hard* to build into a machine. But in the 19th century, even that much wasn’t known.

So when Edgar Allan Poe, who besides being an author was also an accomplished magician, set out to write an essay about the Mechanical Turk, he spent the *second* half of the essay dissecting what was known about the Turk’s appearance to (correctly) figure out where the human operator was hiding. But Poe spent the first half of the essay arguing that no automaton—nothing like Mr. Babbage’s computing engine—could possibly play chess, which was how he knew *a priori* that the Turk had a concealed human operator.

**ASHLEY:** And what was Poe’s argument?

**BLAINE:** Poe observed that in an algebraical problem, each step followed from the previous step of necessity, which was why the steps in solving an algebraical problem could be represented by the deterministic motions of gears in something like Mr. Babbage’s computing engine. But in a chess problem, Poe said, there are many possible chess moves, and no move follows with necessity from the position of the board; and even if you did select one move, the opponent’s move would not follow with necessity, so you couldn’t represent it with the determined motion of automatic gears. Therefore, Poe said, whatever was operating the Mechanical Turk must have the nature of Cartesian mind, rather than the nature of deterministic matter, and this was knowable *a priori*. And then he started figuring out where the required operator was hiding.

**ASHLEY:** That’s some amazingly impressive reasoning for being completely wrong.

**BLAINE:** I know! Isn’t it great?

**ASHLEY:** I mean, that sounds like Poe correctly identified the *hard* part of playing computer chess, the branching factor of moves and countermoves, which is the reason why no *simple* machine could do it. And he just didn’t realize that a deterministic machine could deterministically check many possible moves in order to figure out the game tree. So close, and yet so far.

**BLAINE:** More than a century later, in 1950, Claude Shannon published the first paper ever written on computer chess. And in passing, Shannon gave the formula for playing perfect chess if you had unlimited computing power, the algorithm you’d use to extrapolate the entire game tree. We could say that Shannon gave a short program that would solve chess if you ran it on a hypercomputer, where a hypercomputer is an ideal computer that can run any finite computation immediately. And then Shannon passed on to talking about the problem of locally guessing how good a board position was, so that you could play chess using only a *small* search.
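Shannon’s unbounded recipe can be written out in full for any small game: extrapolate the whole game tree and back up the values. Here is a sketch for a toy take-1-or-2 Nim game (the toy game stands in for chess; pointed at chess’s move generator instead, and given the hypercomputer, the same recursion would play perfectly):

```python
def perfect_value(pile):
    """Shannon's unbounded rule, applied to a toy Nim game: players
    alternately take 1 or 2 counters, and whoever takes the last one wins.
    Returns +1 if the player to move wins under perfect play, else -1,
    found by expanding the entire game tree."""
    if pile == 0:
        return -1  # the previous player just took the last counter and won
    return max(-perfect_value(pile - take) for take in (1, 2) if take <= pile)

def perfect_move(pile):
    """A move achieving the best possible value for the player to move."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: -perfect_value(pile - take))
```

With 4 counters left, taking 1 leaves the opponent facing a lost position; the recursion finds this with no cleverness at all, just exhaustive extrapolation, which is exactly why it needs the hypercomputer for anything the size of chess.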

I say all this to make a point about the value of knowing how to solve problems using hypercomputers, even though hypercomputers don’t exist. Yes, there’s often a *huge* gap between the unbounded solution and the practical solution. It wasn’t until 1997, forty-seven years after Shannon’s paper giving the unbounded solution, that Deep Blue actually won the world chess championship—

**ASHLEY:** And that wasn’t just a question of faster computing hardware running Shannon’s ideal search algorithm. There were a lot of new insights along the way, most notably the alpha-beta pruning algorithm and a lot of improvements in positional evaluation.

**BLAINE:** Right!

But I think some people overreact to that forty-seven year gap, and act like it’s *worthless* to have an unbounded understanding of a computer program, just because you might still be forty-seven years away from a practical solution. But if you don’t even have a solution that would run on a hypercomputer, you’re Poe in 1836, not Shannon in 1950.

The reason I tell the anecdote about Poe is to illustrate that Poe was *confused* about computer chess in a way that Shannon was not. When we don’t know how to solve a problem even given infinite computing power, the very work we are trying to do is in some sense murky to us. When we can state code that would solve the problem given a hypercomputer, we have become *less* confused. Once we have the unbounded solution we understand, in some basic sense, *the kind of work we are trying to perform,* and then we can try to figure out how to do it efficiently.

**ASHLEY:** Which may well require new insights into the structure of the problem, or even a conceptual revolution in how we imagine the work we’re trying to do.

**BLAINE:** Yes, but the point is that you can’t even get started on that if you’re arguing about how playing chess has the nature of Cartesian mind rather than matter. At that point you’re not 50 years away from winning the chess championship, you’re 150 years away, because it took an extra 100 years to move humanity’s understanding to the point where Claude Shannon could trivially see how to play perfect chess using a large-enough computer. I’m not trying to exalt the unbounded solution by denigrating the work required to get a bounded solution. I’m not saying that when we have an unbounded solution we’re practically there and the rest is a matter of mere lowly efficiency. I’m trying to compare having the unbounded solution to the horrific confusion of *not understanding what we’re trying to do.*

**ASHLEY:** Okay. I think I understand why, on your view, it’s important to know how to solve problems using infinitely fast computers, or hypercomputers as you call them. When we can say how to answer a question using infinite computing power, that means we crisply understand the question itself, in some sense; while if we can’t figure out how to solve a problem using unbounded computing power, that means we’re *confused* about the problem, in some sense. I mean, anyone who’s ever tried to teach the more doomed sort of undergraduate to write code knows what it means to be confused about what it takes to compute something.

**BLAINE:** Right.

**ASHLEY:** So what does this have to do with “Solomonoff induction”?

**BLAINE:** Ah! Well, suppose I asked you how to do epistemology using infinite computing power?

**ASHLEY:** My good fellow, I would at once reply, “Beep. Whirr. Problem ‘do epistemology’ not crisply specified.” At this stage of affairs, I do not think this reply indicates any fundamental confusion on my part; rather I think it is you who must be clearer.

**BLAINE:** Given unbounded computing power, how would you reason in order to construct an accurate map of reality?

**ASHLEY:** That still strikes me as rather underspecified.

**BLAINE:** Perhaps. But even there I would suggest that it’s a mark of intellectual progress to be able to take vague and underspecified ideas like ‘do good epistemology’ and turn them *into* crisply specified problems. Imagine that I went up to my friend Cecil, and said, “How would you do good epistemology given unlimited computing power and a short Python program?” and Cecil at once came back with an answer—a good and reasonable answer, once it was explained. Cecil would probably know something quite interesting that you do not presently know.

**ASHLEY:** I confess to being rather skeptical of this hypothetical. But if that actually happened—if I agreed, to my own satisfaction, that someone had stated a short Python program that would ‘do good epistemology’ if run on an unboundedly fast computer—then I agree that I’d probably have learned something *quite interesting* about epistemology.

**BLAINE:** What Cecil knows about, in this hypothetical, is Solomonoff induction. In the same way that Claude Shannon answered “Given infinite computing power, how would you play perfect chess?”, Ray Solomonoff answered “Given infinite computing power, how would you perfectly find the best hypothesis that fits the facts?”

**ASHLEY:** Suddenly, I find myself strongly suspicious of whatever you are about to say to me.

**BLAINE:** That’s understandable.

**ASHLEY:** In particular, I’ll ask at once whether “Solomonoff induction” assumes that our hypotheses are being given to us on a silver platter along with the exact data we’re supposed to explain, or whether the algorithm is organizing its own data from a big messy situation and inventing good hypotheses from scratch.

**BLAINE:** Great question! It’s the second one.

**ASHLEY:** Really? Okay, now I have to ask whether Solomonoff induction is a recognized concept in good standing in the field of academic computer science, because that does not sound like something modern-day computer science knows how to do.

**BLAINE:** I wouldn’t say it’s a widely known concept, but it’s one that’s in good academic standing. The method isn’t used in modern machine learning because it requires an infinitely fast computer and isn’t easily approximated the way that chess is.

**ASHLEY:** This really sounds very suspicious. Last time I checked, we hadn’t *begun* to formalize the creation of good new hypotheses from scratch. I’ve heard about claims to have ‘automated’ the work that, say, Newton did in inventing classical mechanics, and I’ve found them all to be incredibly dubious. Which is to say, they were rigged demos and lies.

**BLAINE:** I know, but—

**ASHLEY:** And then I’m even more suspicious of a claim that someone’s algorithm would solve this problem if only they had infinite computing power. Having some researcher claim that their Good-Old-Fashioned AI semantic network *would* be intelligent if run on a computer so large that, conveniently, nobody can ever test their theory, is not going to persuade me.

**BLAINE:** Do I really strike you as that much of a charlatan? What have I ever done to you, that you would expect me to try pulling a scam like that?

**ASHLEY:** That’s fair. I shouldn’t accuse you of planning that scam when I haven’t seen you say it. But I’m pretty sure the problem of “coming up with good new hypotheses in a world full of messy data” is AI-complete. And even Mentif-

**BLAINE:** Do not say the name, or he will appear!

**ASHLEY:** Sorry. Even the legendary first and greatest of all AI crackpots, He-Who-Googles-His-Name, could assert that his algorithms would be all-powerful on a computer large enough to make his claim unfalsifiable. So what?

**BLAINE:** That’s a very sensible reply, and this, again, is exactly the kind of mental state that reflects a problem which is *confusing* rather than merely hard to implement. It’s the sort of confusion Poe might have felt in 1836, or close to it. In other words, it’s just the sort of conceptual issue we *would* have solved at the point where we could state a short program that could run on a hypercomputer. Which Ray Solomonoff did in 1964.

**ASHLEY:** Okay, let’s hear about this supposed general solution to epistemology.

### ii. Sequences

**BLAINE:** First, try to solve the following puzzle. 1, 3, 4, 7, 11, 18, 29...?

**ASHLEY:** Let me look at those for a moment… 47.

**BLAINE:** Congratulations on engaging in, as we snooty types would call it, ‘sequence prediction’.

**ASHLEY:** I’m following you so far.

**BLAINE:** The smarter you are, the more easily you can find the hidden patterns in sequences and predict them successfully. You had to notice the resemblance to the Fibonacci rule to guess the next number. Someone who didn’t already know about Fibonacci, or who was worse at mathematical thinking, would have taken longer to understand the sequence or maybe never learned to predict it at all.
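The rule Ashley noticed is a two-line program: each term is the sum of the two before it, seeded with 1 and 3 instead of Fibonacci’s 1 and 1.

```python
def predict_next(seq):
    """Predict the next term under the Fibonacci-style rule:
    each term is the sum of the previous two."""
    return seq[-1] + seq[-2]

seq = [1, 3]
while len(seq) < 8:
    seq.append(predict_next(seq))
# seq is now [1, 3, 4, 7, 11, 18, 29, 47], reproducing Ashley's answer
```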

**ASHLEY:** Still with you.

**BLAINE:** It’s not a sequence of *numbers* per se… but can you see how the question, “The sun has risen on the last million days. What is the probability that it rises tomorrow?” could be viewed as a kind of sequence prediction problem?

**ASHLEY:** Only if some programmer neatly parses up the world into a series of “Did the Sun rise on day X starting in 4.5 billion BCE, 0 means no and 1 means yes? 1, 1, 1, 1, 1...” and so on. Which is exactly the sort of shenanigan that I see as cheating. In the real world, you go outside and see a brilliant ball of gold touching the horizon, not a giant “1″.

**BLAINE:** Suppose I have a robot running around with a webcam showing it a pixel field that refreshes 60 times a second with 32-bit colors. I could view that as a giant sequence and ask the robot to predict what it will see happen when it rolls out to watch a sunrise the next day.

**ASHLEY:** I can’t help but notice that the ‘sequence’ of webcam frames is absolutely enormous, like, the sequence is made up of 66-megabit ‘numbers’ appearing 3600 times per minute… oh, right, computers much bigger than the universe. And now you’re smiling evilly, so I guess that’s the point. I also notice that the sequence is no longer deterministically predictable, that it is no longer a purely mathematical object, and that the sequence of webcam frames observed will depend on the robot’s choices. This makes me feel a bit shaky about the analogy to predicting the mathematical sequence 1, 1, 2, 3, 5.

**BLAINE:** I’ll try to address those points in order. First, Solomonoff induction is about assigning *probabilities* to the next item in the sequence. I mean, if I showed you a box that said 1, 1, 2, 3, 5, 8 you would not be absolutely certain that the next item would be 13. There could be some more complicated rule that just looked Fibonacci-ish but then diverged. You might guess with 90% probability but not 100% probability, or something like that.

**ASHLEY:** This has stopped feeling to me like math.

**BLAINE:** There is a *large* branch of math, to say nothing of computer science, that deals in probabilities and statistical prediction. We are going to be describing absolutely lawful and deterministic ways of assigning probabilities after seeing 1, 3, 4, 7, 11, 18.

**ASHLEY:** Okay, but if you’re later going to tell me that this lawful probabilistic prediction rule underlies a generally intelligent reasoner, I’m already skeptical.

No matter how large a computer it’s run on, I find it hard to imagine that some simple set of rules for assigning probabilities is going to encompass truly and generally intelligent answers about sequence prediction, like Terence Tao would give after looking at the sequence for a while. We just have no idea how Terence Tao works, so we can’t duplicate his abilities in a formal rule, no matter how much computing power that rule gets… you’re smiling evilly again. I’ll be *quite* interested if that evil smile turns out to be justified.

**BLAINE:** Indeed.

**ASHLEY:** I also find it hard to imagine that this deterministic mathematical rule for assigning probabilities would notice if a box was outputting an encoded version of “To be or not to be” from Shakespeare by mapping A to Z onto 1 to 26, which I would notice eventually though not immediately upon seeing 20, 15, 2, 5, 15, 18… And you’re *still* smiling evilly.
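The cipher Ashley describes is purely mechanical, A through Z onto 1 through 26, so checking the decoding takes one line:

```python
def decode(numbers):
    """Map the 1-26 code back to letters, with A = 1 ... Z = 26."""
    return ''.join(chr(ord('A') + n - 1) for n in numbers)

decode([20, 15, 2, 5, 15, 18])  # -> 'TOBEOR', the start of the soliloquy
```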

**BLAINE:** Indeed. That is *exactly* what Solomonoff induction does. Furthermore, we have theorems establishing that Solomonoff induction can do it way better than you or Terence Tao.

**ASHLEY:** A *theorem* proves this. As in a necessary mathematical truth. Even though we have no idea how Terence Tao works empirically… and there’s evil smile number four. Okay. I am very skeptical, but willing to be convinced.

**BLAINE:** So if you actually did have a hypercomputer, you could cheat, right? And Solomonoff induction is the most ridiculously cheating cheat in the history of cheating.

**ASHLEY:** Go on.

**BLAINE:** We just run all possible computer programs to see which are the simplest computer programs that best predict the data seen so far, and use those programs to predict what comes next. This mixture contains, among other things, an exact copy of Terence Tao, thereby allowing us to prove theorems about their relative performance.
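The real mixture runs over *all* computer programs and needs the hypercomputer. But the shape of the computation can be sketched with a deliberately tiny stand-in hypothesis class. In the sketch below the “programs” are periodic bit patterns weighted by length; this is a toy illustration, not Solomonoff’s actual universal mixture:

```python
from fractions import Fraction

def mixture_predict(bits, max_period=4):
    """Toy Solomonoff-style mixture: the 'programs' are all periodic 0/1
    patterns with period <= max_period, and a pattern of length L gets
    prior weight 2**-L, standing in for 2**-(program length).
    Patterns that mispredicted any observed bit are eliminated; the
    survivors vote, weighted by prior, on the next bit."""
    votes = {0: Fraction(0), 1: Fraction(0)}
    for period in range(1, max_period + 1):
        for code in range(2 ** period):
            pattern = [(code >> i) & 1 for i in range(period)]
            if all(bits[t] == pattern[t % period] for t in range(len(bits))):
                votes[pattern[len(bits) % period]] += Fraction(1, 2 ** period)
    return votes[1] / (votes[0] + votes[1])  # probability the next bit is 1

mixture_predict([0, 1, 0])  # shorter consistent patterns dominate: 5/8 on '1'
```

After seeing 0, 1, 0, the simple period-2 hypothesis “01 repeating” outweighs the longer consistent patterns, so the mixture leans 5/8 toward a 1 coming next: Ockham’s Razor falling straight out of the weighting.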

**ASHLEY:** Is this an actual reputable math thing? I mean really?

**BLAINE:** I’ll deliver the formalization later, but you did ask me to first state the point of it all. The point of Solomonoff induction is that it gives us a gold-standard ideal for sequence prediction, one that errs by only a bounded amount, over infinite time, relative to the best computable sequence predictor. We can also see it as formalizing the intuition, expressed by William Ockham centuries earlier, that simpler theories are more likely to be correct, and as telling us that ‘simplicity’ should be measured in algorithmic complexity: the size of a computer program required to output a hypothesis’s predictions.

**ASHLEY:** I think I would have to read more on this subject to actually follow that. What I’m hearing is that Solomonoff induction is a reputable idea that is important because it gives us a kind of ideal for sequence prediction. This ideal also has something to do with Occam’s Razor, and stakes a claim that the simplest theory is the one that can be represented by the shortest computer program. You identify this with “doing good epistemology”.

**BLAINE:** Yes, those are legitimate takeaways. Another way of looking at it is that Solomonoff induction is an ideal but uncomputable answer to the question “What should our priors be?”, which is left open by understanding Bayesian updating.

**ASHLEY:** Can you say how Solomonoff induction answers the question of, say, the prior probability that Canada is planning to invade the United States? I once saw a crackpot website that tried to invoke Bayesian probability about it, but only after setting the prior at 10% or something like that, I don’t recall exactly. Does Solomonoff induction let me tell him that he’s making a math error, instead of just calling him silly in an informal fashion?

**BLAINE:** If you’re expecting to sit down with Leibniz and say, “Gentlemen, let us calculate” then you’re setting your expectations too high. Solomonoff gives us an idea of how we *should* compute that quantity given unlimited computing power. It doesn’t give us a firm recipe for how we can best approximate that ideal in real life using bounded computing power, or human brains. That’s like expecting to play perfect chess after you read Shannon’s 1950 paper. But knowing the ideal, we can extract some intuitive advice that might help our online crackpot if only he’d listen.

**ASHLEY:** But according to you, Solomonoff induction does say in principle what is the prior probability that Canada will invade the United States.

**BLAINE:** Yes, up to a choice of universal Turing machine.

**ASHLEY:** *(looking highly skeptical)* So I plug a universal Turing machine into the formalism, and in principle, I get out a uniquely determined probability that Canada invades the USA.

**BLAINE:** Exactly!

**ASHLEY:** Uh huh. Well, go on.

**BLAINE:** So, first, we have to transform this into a sequence prediction problem.

**ASHLEY:** Like a sequence of years in which Canada has and hasn’t invaded the US, mostly zero except around 1812—

**BLAINE:** *No!* To get a good prediction about Canada we need much more data than that, and I don’t mean a graph of Canadian GDP either. Imagine a sequence that contains all the sensory data you have ever received over your lifetime. Not just the hospital room that you saw when you opened your eyes right after your birth, but the darkness your brain received as input while you were still in your mother’s womb. Every word you’ve ever heard. Every letter you’ve ever seen on a computer screen, not as ASCII letters but as the raw pattern of neural impulses that gets sent down from your retina.

**ASHLEY:** That seems like a lot of data and some of it is redundant, like there’ll be lots of similar pixels for blue sky—

**BLAINE:** That data is what *you* got as an agent. If we want to translate the prediction problem Ashley faces into theoretical terms, we should give the sequence predictor *all* the data that you had available, including all those repeating blue pixels of the sky. Who knows? Maybe there was a Canadian warplane somewhere in there, and you didn’t notice.

**ASHLEY:** But it’s impossible for my brain to remember all that data. If we neglect for the moment how the retina actually works and suppose that I’m seeing the same 60Hz feed the robot would, that’s far more data than my brain can realistically learn per second.

**BLAINE:** So then Solomonoff induction can do better than you can, using its unlimited computing power and memory. That’s fine.

**ASHLEY:** But what if you can do better by forgetting more?

**BLAINE:** If you have limited computing power, that makes sense. With unlimited computing power, that really shouldn’t happen and that indeed is one of the lessons of Solomonoff induction. An unbounded Bayesian never expects to do worse by updating on another item of evidence—for one thing, you can always just do the same policy you would have used if you hadn’t seen that evidence. That kind of lesson *is* one of the lessons that might not be intuitively obvious, but which you can feel more deeply by walking through the math of probability theory. With unlimited computing power, nothing goes wrong as a result of trying to process 4 gigabits per second; every extra bit just produces a better expected future prediction.
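The probability-theory lesson Blaine is pointing at can be walked through numerically. One piece of it is conservation of expected evidence: before you see a piece of evidence, the expected value of your posterior equals your current prior, so updating cannot be expected to mislead you. (The numbers below are arbitrary.)

```python
def expected_posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Expected posterior probability of hypothesis H, computed before
    observing whether evidence E occurs. By conservation of expected
    evidence this always equals the prior itself."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    post_if_e = prior_h * p_e_given_h / p_e                      # Bayes's Rule
    post_if_not_e = prior_h * (1 - p_e_given_h) / (1 - p_e)
    return p_e * post_if_e + (1 - p_e) * post_if_not_e

expected_posterior(0.3, 0.9, 0.2)  # -> 0.3, the prior, up to float rounding
```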

**ASHLEY:** Okay, so we start with literally all the data I have available. That’s 4 gigabits per second if we imagine frames of 32-bit pixels repeating 60 times per second. Though I remember hearing 100 megabits per second would be a better estimate of what the retina sends out, and that it’s pared down to 1 megabit per second very quickly by further processing.

**BLAINE:** Right. We start with all of that data, going back to when you were born. Or maybe when your brain formed in the womb, though it shouldn’t make much difference.

**ASHLEY:** I note that there are some things I know that don’t come from my sensory inputs at all. Chimpanzees learn to be afraid of skulls and snakes much faster than they learn to be afraid of other arbitrary shapes. I was probably better at learning to walk in Earth gravity than I would have been at navigating in zero G. Those are heuristics I’m born with, based on how my brain was wired, which ultimately stems from my DNA specifying the way that proteins should fold to form neurons—not from any photons that entered my eyes later.

**BLAINE:** So, for purposes of following along with the argument, let’s say that your DNA is analogous to the code of a computer program that makes predictions. What you’re observing here is that humans have 750 megabytes of DNA, and even if most of that is junk and not all of what’s left is specifying brain behavior, it still leaves a pretty large computer program that could have a lot of prior information programmed into it.

Let’s say that your brain, or rather, your infant pre-brain wiring algorithm, was effectively a 7.5 megabyte program—if it’s actually 75 megabytes, that makes little difference to the argument. By exposing that 7.5 megabyte program to all the information coming in from your eyes, ears, nose, proprioceptive sensors telling you where your limbs were, and so on, your brain updated itself into forming the modern Ashley, whose hundred trillion synapses might be encoded by, say, one petabyte of information.

**ASHLEY:** The thought does occur to me that some environmental phenomena have effects on me that can’t be interpreted as “sensory information” in any simple way, like the direct effect that alcohol has on my neurons, and how that feels to me from the inside. But it would be perverse to claim that this prevents you from trying to summarize all the information that the Ashley-agent receives into a single sequence, so I won’t press the point.

(**ELIEZER:** (*whispering*) *More on this topic later.*)

**ASHLEY:** Oh, and for completeness’s sake, wouldn’t there also be further information embedded in the laws of physics themselves? Like, the way my brain executes implicitly says something about the laws of physics in the universe I’m in.

**BLAINE:** Metaphorically speaking, our laws of physics would play the role of a particular choice of Universal Turing Machine, which has some effect on which computations count as “simple” inside the Solomonoff formula. But normally, the UTM should be very simple compared to the amount of data in the sequence we’re trying to predict, just like the laws of physics are very simple compared to a human brain. In terms of algorithmic complexity, the laws of physics are very simple compared to watching a 60Hz visual field for a day.

**ASHLEY:** Part of my mind feels like the laws of physics are quite complicated compared to going outside and watching a sunset. Like, I realize that’s false, but I’m not sure how to say out loud exactly why it’s false...

**BLAINE:** Because the algorithmic complexity of a system isn’t measured by how long a human has to go to college to understand it, it’s measured by the size of the computer program required to generate it. The language of physics is differential equations, and it turns out that this is something difficult to beat into some human brains, but differential equations are simple to program into a simple Turing Machine.

**ASHLEY:** Right, like, the laws of physics actually have much fewer details to them than, say, human nature. At least on the Standard Model of Physics. I mean, in principle there could be another decillion undiscovered particle families out there.

**BLAINE:** The concept of “algorithmic complexity” isn’t about seeing something with lots of gears and details, it’s about the size of computer program required to compress all those details. The Mandelbrot set looks very complicated visually, you can keep zooming in using more and more detail, but there’s a very simple rule that generates it, so we say the algorithmic complexity is very low.
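The generating rule Blaine is gesturing at really is tiny: iterate z → z·z + c and ask whether the orbit stays bounded.

```python
def in_mandelbrot(c, max_iter=100):
    """The whole rule behind the Mandelbrot set's endless visual detail:
    starting from z = 0, iterate z -> z*z + c and see whether the orbit
    ever escapes the circle of radius 2 (if it does, c is outside the set)."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

in_mandelbrot(-1)  # True: the orbit cycles 0, -1, 0, -1, ...
in_mandelbrot(1)   # False: the orbit runs 1, 2, 5, 26, ... off to infinity
```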

**ASHLEY:** All the visual information I’ve seen is something that happens *within* the physical universe, so how can it be more complicated than the universe? I mean, I have a sense on some level that this shouldn’t be a problem, but I don’t know why it’s not a problem.

**BLAINE:** That’s because particular parts of the universe can have much higher algorithmic complexity than the entire universe!

Consider a library that contains all possible books. It’s very easy to write a computer program that generates all possible books. So any *particular* book in the library contains much more algorithmic information than the *entire* library; it contains the information required to say ‘look at this particular book here’.
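That library really is a short program: a few lines enumerate every possible book, shortest first, and all the algorithmic information lives in the index of any particular book.

```python
from itertools import count, islice, product

def all_books(alphabet="ab"):
    """Yield every finite string over `alphabet`, shortest first.
    This short program is the entire 'library of all possible books';
    singling out one book takes an index roughly as long as the book."""
    for length in count(0):
        for letters in product(alphabet, repeat=length):
            yield ''.join(letters)

first_seven = list(islice(all_books(), 7))
# first_seven == ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```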

If pi is normal, then somewhere in its digits is a copy of Shakespeare’s *Hamlet*—but the number saying which particular digit of pi to start looking at, will be just about exactly as large as *Hamlet* itself. The copy of Shakespeare’s *Hamlet* that exists in the decimal expansion of pi is more complex than pi itself.

If you zoomed way in and restricted your vision to a particular part of the Mandelbrot set, what you saw might be much more *algorithmically* complex than the entire Mandelbrot set, because the specification has to say where in the Mandelbrot set you are.

Similarly, the planet Earth is much more algorithmically complex than the laws of physics. Likewise, the visual field you see over the course of a second can easily be far more algorithmically complex than the laws of physics.

**ASHLEY:** Okay, I think I get that. And similarly, even though the ways that proteins fold up are very complicated, in principle we could get all that info using just the simple fundamental laws of physics plus the relatively simple DNA code for the protein. There are all sorts of obvious caveats about epigenetics and so on, but those caveats aren’t likely to change the numbers by a whole order of magnitude.

**BLAINE:** Right!

**ASHLEY:** So the laws of physics are, like, a few kilobytes, and my brain has say 75 megabytes of innate wiring instructions. And then I get to see a lot more information than that over my lifetime, like a megabit per second after my initial visual system finishes preprocessing it, and then most of that is forgotten. Uh… what does that have to do with Solomonoff induction again?

**BLAINE:** Solomonoff induction quickly catches up to any single computer program at sequence prediction, even if the original program is very large and contains a lot of prior information about the environment. If a program is 75 megabytes long, it can only predict 75 megabytes worth of data better than the Solomonoff inductor before the Solomonoff inductor catches up to it.

That doesn’t mean that a Solomonoff inductor knows everything a baby does after the first second of exposure to a webcam feed, but it does mean that after the first second, the Solomonoff inductor is already no more surprised than a baby by the vast majority of pixels in the next frame.

Every time the Solomonoff inductor assigns half as much probability as the baby to the next pixel it sees, that’s one bit spent permanently out of the 75 megabytes of error that can happen before the Solomonoff inductor catches up to the baby.
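That accounting is a sum of log-odds. Here is a sketch of the bookkeeping, with made-up probabilities:

```python
from math import log2

def bits_behind(inductor_probs, program_probs):
    """Total bits of prediction error the inductor pays relative to one
    particular program: at each observation, both assign a probability
    to what actually happened, and assigning half as much costs one bit.
    Solomonoff's guarantee is that this sum stays below the program's
    length in bits, no matter how long the sequence runs."""
    return sum(log2(q / p) for p, q in zip(inductor_probs, program_probs))

# Half the program's probability twice, equal once: two bits spent in total.
bits_behind([0.25, 0.5, 0.4], [0.5, 0.5, 0.8])  # -> 2.0
```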

The fact that your brain is written in the laws of physics also gives it some implicit correlation with the environment, but that’s like saying that a program is written in the same programming language as the environment. The language can contribute something to the power of the program, and the environment being written in the same programming language can be a kind of prior knowledge. But if Solomonoff induction starts from a standard Universal Turing Machine as its language, that doesn’t contribute any more bits of lifetime error than the complexity of that programming language in the UTM.

**ASHLEY:** Let me jump back a couple of steps and return to the notion of my brain wiring itself up in response to environmental information. I’d expect an important part of that process was my brain learning to *control* the environment, not just passively observing it. Like, it mattered to my brain’s wiring algorithm that my brain saw the room shift in a certain way when it sent out signals telling my eyes to move.

**BLAINE:** Indeed. But talking about the sequential *control* problem is more complicated math. AIXI is the ideal agent that uses Solomonoff induction as its epistemology and expected reward as its decision theory. That introduces extra complexity, so it makes sense to talk about just Solomonoff induction first. We can talk about AIXI later. So imagine for the moment that we were *just* looking at your sensory data, and trying to predict what would come next in that.

**ASHLEY:** Wouldn’t it make more sense to look at the brain’s inputs *and* outputs, if we wanted to predict the next input? Not just look at the series of previous inputs?

**BLAINE:** It’d make the problem easier for a Solomonoff inductor to solve, sure; but it also makes the problem more complicated. Let’s talk instead about what would happen if you took the complete sensory record of your life, gave it to an ideally smart agent, and asked the agent to predict what you would see next. Maybe the agent could do an even better job of prediction if we also told it about your brain’s outputs, but I don’t think that subtracting the outputs would leave it helpless to see patterns in the inputs.

**ASHLEY:** It sounds like a pretty hard problem to me, maybe even an unsolvable one. I’m thinking of the distinction in computer science between needing to learn from non-chosen data, versus learning when you can choose particular queries. Learning can be much faster in the second case.

**BLAINE:** In terms of what can be predicted *in principle* given the data, what facts are *actually reflected in it* that Solomonoff induction might uncover, we shouldn’t imagine a human trying to analyze the data. We should imagine an entire advanced civilization pondering it for years. If you look at it from that angle, then the alien civilization isn’t going to balk at the fact that it’s looking at the answers to the queries that Ashley’s brain chose, instead of the answers to the queries it chose itself.

Like, if the Ashley had already read Shakespeare’s *Hamlet*—if the image of those pages had already crossed the sensory stream—and then the Ashley saw a mysterious box outputting 20, 15, 2, 5, 15, 18, I think somebody eavesdropping on that sensory data would be equally able to guess that this was encoding ‘tobeor’ and guess that the next thing the Ashley saw might be the box outputting 14. You wouldn’t even need an entire alien civilization of superintelligent cryptographers to guess that. And it definitely wouldn’t be a killer problem that Ashley was controlling the eyeball’s saccades, even if you could learn even faster by controlling the eyeball yourself.

So far as the computer-science distinction goes, Ashley’s eyeball *is* being controlled to make intelligent queries and seek out useful information; it’s just Ashley controlling the eyeball instead of you—that eyeball is not a query-oracle answering *random* questions.
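The box’s apparent code can be sketched in a couple of lines of Python (assuming the letters are simply numbered A=1 through Z=26; the function name is illustrative):

```python
# Decode the box's outputs under the assumption A=1, B=2, ..., Z=26.
def decode(numbers):
    return "".join(chr(ord("a") + n - 1) for n in numbers)

print(decode([20, 15, 2, 5, 15, 18]))  # prints "tobeor"
print(decode([14]))                    # prints "n" -- "to be or n..."
```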

**ASHLEY:** Okay, I think this example is helping my understanding of what we’re doing here. In the case above, the next item in the Ashley-sequence wouldn’t actually be 14. It would be this huge visual field that showed the box flashing a little picture of ‘14’.

**BLAINE:** Sure. Otherwise it would be a rigged demo, as you say.

**ASHLEY:** I think I’m confused about the idea of *predicting* the visual field. It seems to me that what with all the dust specks in my visual field, and maybe my deciding to tilt my head using motor instructions that won’t appear in the sequence, there’s no way to *exactly* predict the 66-megabit integer representing the next visual frame. So it must be doing something other than the equivalent of guessing “14” in a simpler sequence, but I’m not sure what.

**BLAINE:** Indeed, there’d be some element of thermodynamic and quantum randomness preventing that exact prediction even in principle. So instead of predicting one particular next frame, we put a probability distribution on it.

**ASHLEY:** A probability distribution over possible 66-megabit frames? Like, a table with $2^{66,000,000}$ entries, summing to 1?

**BLAINE:** Sure. $2^{66,000,000}$ isn’t a large number when you have unlimited computing power. As Martin Gardner once observed, “Most finite numbers are very much larger.” Like I said, Solomonoff induction is an epistemic ideal that requires an unreasonably large amount of computing power.

**ASHLEY:** I don’t deny that big computations can sometimes help us understand little ones. But at the point when we’re talking about probability distributions that large, I have some trouble holding onto what the probability distribution is supposed to *mean*.

**BLAINE:** Really? Just imagine a probability distribution over $N$ possibilities, then let $N$ go to $2^{66,000,000}$. If we were talking about a letter ranging from A to Z, then putting 100 times as much probability mass on (X, Y, Z) as on the rest of the alphabet, would say that although you didn’t know *exactly* what letter would happen, you expected it would be toward the end of the alphabet. You would have used 26 probabilities, summing to 1, to precisely state that prediction.
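That 26-letter prediction can be written out exactly (a sketch with made-up weights: each of X, Y, Z gets 100 times the mass of any other letter):

```python
from fractions import Fraction
from string import ascii_uppercase

# Unnormalized weights: 100 each for X, Y, Z; 1 for every other letter.
weights = {c: (100 if c in "XYZ" else 1) for c in ascii_uppercase}
total = sum(weights.values())  # 23 * 1 + 3 * 100 = 323
dist = {c: Fraction(w, total) for c, w in weights.items()}

# 26 probabilities, summing exactly to 1, stating the prediction precisely.
assert sum(dist.values()) == 1
```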

In Solomonoff induction, since we have unlimited computing power, we express our uncertainty about a video frame the same way. All the various pixel fields you could see if your eye jumped to a plausible place, saw a plausible number of dust specks, and saw the box flash something that visually encoded ‘14’, would have high probability. Pixel fields where the box vanished and was replaced with a glow-in-the-dark unicorn would have very low, though not zero, probability.

**ASHLEY:** Can we really get away with viewing things that way?

**BLAINE:** If we could not make identifications like these *in principle*, there would be no principled way in which we could say that you had ever *expected to see something happen*—no way to say that one visual field your eyes saw had higher probability than any other sensory experience. We couldn’t justify science; we couldn’t say that, having performed Galileo’s experiment by rolling a cylinder down an inclined plane, Galileo’s theory was thereby to some degree supported by having assigned *a high relative probability* to the only actual observations our eyes ever report.

**ASHLEY:** I feel a little unsure of that jump, but I suppose I can go along with that for now. Then the question of “What probability does Solomonoff induction assign to Canada invading?” is to be identified, in principle, with the question “Given my past life experiences and all the visual information that’s entered my eyes, what is the relative probability of seeing visual information that encodes Google News with the headline ‘CANADA INVADES USA’ at some point during the next 300 million seconds?”

**BLAINE:** Right!

**ASHLEY:** And Solomonoff induction has an in-principle way of assigning this a relatively low probability, which that online crackpot could do well to learn from as a matter of principle, even if he couldn’t *begin* to carry out the exact calculations that involve assigning probabilities to exponentially vast tables.

**BLAINE:** Precisely!

**ASHLEY:** Fairness requires that I congratulate you on having come further in formalizing ‘do good epistemology’ as a sequence prediction problem than I previously thought you might.

I mean, you haven’t satisfied me yet, but I wasn’t expecting you to get even this far.

### iii. Hypotheses

**BLAINE:** Next, we consider how to represent a *hypothesis* inside this formalism.

**ASHLEY:** Hmm. You said something earlier about updating on a probabilistic mixture of computer programs, which leads me to suspect that in this formalism, a hypothesis or *way the world can be* is a computer program that outputs a sequence of integers.

**BLAINE:** There’s indeed a version of Solomonoff induction that works like that. But I prefer the version where a hypothesis assigns *probabilities* to sequences. Like, if the hypothesis is that the world is a fair coin, then we shouldn’t try to make that hypothesis predict “heads—tails—tails—tails—heads” but should let it just assign a prior probability to the sequence **HTTTH**.

**ASHLEY:** I can see that for coins, but I feel a bit iffier on what this means as a statement *about the real world*.

**BLAINE:** A single hypothesis inside the Solomonoff mixture would be a computer program that took in a series of video frames, and assigned a probability to each possible next video frame. Or for greater simplicity and elegance, imagine a program that took in a sequence of bits, ones and zeroes, and output a rational number for the probability of the next bit being ‘1’. We can readily go back and forth between a program like that, and a probability distribution over sequences.

Like, if you can answer all of the questions, “What’s the probability that the coin comes up heads on the first flip?”, “What’s the probability of the coin coming up heads on the second flip, if it came up heads on the first flip?”, and “What’s the probability that the coin comes up heads on the second flip, if it came up tails on the first flip?” then we can turn that into a probability distribution over sequences of two coinflips. Analogously, if we have a program that outputs the probability of the next bit, conditioned on a finite number of previous bits taken as input, that program corresponds to a probability distribution over infinite sequences of bits.
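That back-and-forth translation can be sketched in a few lines of Python (the function names are illustrative): chain the conditional next-bit probabilities together to get the probability of any finite sequence.

```python
from fractions import Fraction

def sequence_probability(prob_next_is_1, bits):
    """Multiply out the conditional next-bit probabilities to get
    the probability assigned to the whole finite sequence."""
    p = Fraction(1)
    for i, b in enumerate(bits):
        p1 = prob_next_is_1(bits[:i])  # prediction given the prefix so far
        p *= p1 if b == 1 else 1 - p1
    return p

# A fair coin assigns every length-5 sequence probability 1/32.
fair = lambda prefix: Fraction(1, 2)
print(sequence_probability(fair, [1, 0, 0, 0, 1]))  # prints 1/32
```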

**ASHLEY:** I think I followed along with that in theory, though it’s not a type of math I’m used to (yet). So then in what sense is a program that assigns probabilities to sequences, a way the world could be—a hypothesis about the world?

**BLAINE:** Well, I mean, for one thing, we can see the infant Ashley as a program with 75 megabytes of information about how to wire up its brain in response to sense data, that sees a bunch of sense data, and then experiences some degree of relative surprise. Like in the baby-looking-paradigm experiments where you show a baby an object disappearing behind a screen, and the baby looks longer at those cases, and so we suspect that babies have a concept of object permanence.

**ASHLEY:** That sounds like a program that’s a way Ashley could be, not a program that’s a way the world could be.

**BLAINE:** Those indeed are dual perspectives on the meaning of Solomonoff induction. Maybe we can shed some light on this by considering a simpler induction rule, Laplace’s Rule of Succession, invented by the Reverend Thomas Bayes in the 1750s, and named after Pierre-Simon Laplace, the inventor of Bayesian reasoning.

**ASHLEY:** Pardon me?

**BLAINE:** Suppose you have a biased coin with an unknown bias, and every possible bias between $0$ and $1$ is equally probable.

**ASHLEY:** Okay. Though in the real world, it’s quite likely that an unknown frequency is exactly $0$, $\frac{1}{2}$, or $1$. If you assign equal probability density to every part of the interval between $0$ and $1$, the probability of exactly $\frac{1}{2}$ is $0$. Indeed, the probability of all rational numbers put together is zero.

**BLAINE:** The original problem considered by Thomas Bayes was about an ideal billiard ball bouncing back and forth on an ideal billiard table many times and eventually slowing to a halt; and then bouncing other billiards to see if they halted to the left or the right of the first billiard. You can see why, in first considering the simplest form of this problem without any complications, we might consider every position of the first billiard to be equally probable.

**ASHLEY:** Sure. Though I note with pointless pedantry that if the billiard was really an ideal rolling sphere and the walls were perfectly reflective, it’d never halt in the first place.

**BLAINE:** Suppose we’re told that, after rolling the original billiard ball and then 5 more billiard balls, one billiard ball was to the right of the original, an **R**. The other four were to the left of the original, or **L**s. Again, that’s 1 **R** and 4 **L**s. Given only this data, what is the probability that the next billiard ball rolled will be on the left of the original, another **L**?

**ASHLEY:** Five sevenths.

**BLAINE:** Ah, you’ve heard this problem before?

**ASHLEY:** No, but it’s obvious.

**BLAINE:** Uh… really?

**ASHLEY:** Combinatorics. Consider just the orderings of the balls, instead of their exact positions. Designate the original ball with the symbol **❚**, the next five balls as **LLLLR**, and the next ball to be rolled as **✚**. Given that the current ordering of these six balls is **LLLL❚R** and that all positions and spacings of the underlying balls are equally likely, after rolling the **✚**, there will be seven equally likely orderings **✚LLLL❚R**, **L✚LLL❚R**, **LL✚LL❚R**, and so on up to **LLLL❚✚R** and **LLLL❚R✚**. In five of those seven orderings, the **✚** is on the left of the **❚**. In general, if we see $M$ of **L** and $N$ of **R**, the probability of the next item being an **L** is $\frac{M+1}{M+N+2}$.
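Ashley’s five-sevenths can be checked by brute force (a Monte Carlo sketch of Bayes’s billiard table; the function name, seed, and trial count are arbitrary):

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def estimate_next_left(trials=200_000):
    """Simulate the billiard table: a uniform position for the original
    ball, five more uniform rolls, condition on seeing exactly 4 L and
    1 R, then estimate the probability that ball six also lands left."""
    conditioned = next_left = 0
    for _ in range(trials):
        x = random.random()                      # original ball's resting spot
        rolls = [random.random() for _ in range(5)]
        if sum(r < x for r in rolls) == 4:       # exactly 4 L and 1 R
            conditioned += 1
            next_left += random.random() < x     # does the sixth ball land L?
    return next_left / conditioned

value = estimate_next_left()
print(value)  # close to 5/7, about 0.714
```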

**BLAINE:** Gosh… Well, the much more complicated proof originally devised by Thomas Bayes starts by considering every position of the original ball to be equally likely *a priori*, the additional balls as providing evidence about that position, and then integrating over the posterior probabilities of the original ball’s possible positions to arrive at the probability that the next ball lands on the left or right.

**ASHLEY:** Heh. And is all that extra work useful if you also happen to know a little combinatorics?

**BLAINE:** Well, it tells me exactly how my beliefs about the original ball change with each new piece of evidence—the new posterior probability function on the ball’s position. Suppose I instead asked you something along the lines of, “Given 4 **L** and 1 **R**, where do you think the original ball **✚** is most likely to be on the number line? How likely is it to be within 0.1 distance of there?”

**ASHLEY:** That’s fair; I don’t see a combinatoric answer for the latter part. You’d have to actually integrate over the posterior density, which is proportional to $x^4(1-x)$ for the original ball at position $x$.

**BLAINE:** Anyway, let’s just take at face value that Laplace’s Rule of Succession says that, after observing $M$ 1s and $N$ 0s, the probability of getting a 1 next is $\frac{M+1}{M+N+2}$.

**ASHLEY:** But of course.

**BLAINE:** We can consider Laplace’s Rule as a short Python program that takes in a sequence of 1s and 0s, and spits out the probability that the next bit in the sequence will be 1. We can also consider it as a probability distribution over infinite sequences, like this:

**0**: $\frac{1}{2}$  **1**: $\frac{1}{2}$  **00**: $\frac{1}{3}$  **01**: $\frac{1}{6}$  **000**: $\frac{1}{4}$  **001**: $\frac{1}{12}$  **010**: $\frac{1}{12}$

… and so on.
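Laplace’s Rule really is a short Python program, and chaining its conditional predictions reproduces probabilities like those above (the function names are illustrative):

```python
from fractions import Fraction

def laplace_next_is_1(seq):
    """Laplace's Rule: after M ones and N zeros, P(next is 1) = (M+1)/(M+N+2)."""
    return Fraction(seq.count(1) + 1, len(seq) + 2)

def laplace_sequence_prob(bits):
    """Probability Laplace's Rule assigns to a whole finite sequence,
    by multiplying out its conditional next-bit predictions."""
    p = Fraction(1)
    for i, b in enumerate(bits):
        p1 = laplace_next_is_1(bits[:i])
        p *= p1 if b == 1 else 1 - p1
    return p

print(laplace_sequence_prob([0, 1]))     # prints 1/6
print(laplace_sequence_prob([0, 0, 1]))  # prints 1/12
```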

Now, we can view this as a rule someone might espouse for *predicting* coinflips, but also view it as corresponding to a particular class of possible worlds containing randomness.

I mean, Laplace’s Rule isn’t the only rule you could use. Suppose I had a barrel containing ten white balls and ten green balls. If you already knew this about the barrel, then after seeing $W$ white balls and $G$ green balls, you’d predict the next ball being white with probability $\frac{10-W}{20-W-G}$.

If you use Laplace’s Rule, that’s like believing the world was like a billiards table with an original ball rolling to a stop at a random point and new balls ending up on the left or right. If you use $\frac{10-W}{20-W-G}$, that’s like the hypothesis that there are ten green balls and ten white balls in a barrel. There isn’t really a sharp border between rules we can use to predict the world, and rules for how the world behaves—

**ASHLEY:** Well, that sounds just plain wrong. The map is not the territory, don’cha know? If Solomonoff induction can’t tell the difference between maps and territories, maybe it doesn’t contain all epistemological goodness after all.

**BLAINE:** Maybe it’d be better to say that there’s a dualism between good ways of computing predictions and being in actual worlds where that kind of predicting works well? Like, you could also see Laplace’s Rule as implementing the rules for a world with randomness where the original billiard ball ends up in a random place, so that the first thing you see is equally likely to be 1 or 0. Then to ask what probably happens on round 2, we tell the world what happened on round 1 so that it can update what the background random events were.

**ASHLEY:** Mmmaybe.

**BLAINE:** If you go with the version where Solomonoff induction is over programs that just spit out a determined string of ones and zeroes, we could see those programs as corresponding to particular environments—ways the world *could be* that would produce our sensory input, the sequence.

We could jump ahead and consider the more sophisticated decision-problem that appears in AIXI: an environment is a program that takes your motor outputs as its input, and then returns your sensory inputs as its output. Then we can see a program that produces Bayesian-updated predictions as corresponding to a hypothetical probabilistic environment that implies those updates, although they’ll be conjugate systems rather than mirror images.

**ASHLEY:** Did you say something earlier about the deterministic and probabilistic versions of Solomonoff induction giving the same answers? Like, is it a distinction without a difference whether we ask about simple programs that reproduce the observed data versus simple programs that assign high probability to the data? I can’t see why that should be true, especially since Turing machines don’t include a randomness source.

**BLAINE:** I’m *told* the answers are the same but I confess I can’t quite see why, unless there’s some added assumption I’m missing. So let’s talk about programs that assign probabilities for now, because I think that case is clearer.

### iv. Simplicity

**BLAINE:** The next key idea is to prefer *simple* programs that assign high probability to our observations so far.

**ASHLEY:** It seems like an obvious step, especially considering that you were already talking about “simple programs” and Occam’s Razor a while back. Solomonoff induction is part of the Bayesian program of inference, right?

**BLAINE:** Indeed. Very much so.

**ASHLEY:** Okay, so let’s talk about the program, or hypothesis, for “This barrel has an unknown frequency of white and green balls”, versus the hypothesis “This barrel has 10 white and 10 green balls”, versus the hypothesis, “This barrel always puts out a green ball after a white ball and vice versa.”

Let’s say we see a green ball, then a white ball, the sequence **GW**. The first hypothesis assigns this probability $\frac{1}{2} \cdot \frac{1}{3} = \frac{1}{6}$, the second hypothesis assigns this probability $\frac{10}{20} \cdot \frac{10}{19} = \frac{5}{19}$, or roughly $\frac{1}{4}$, and the third hypothesis assigns probability $\frac{1}{2} \cdot 1 = \frac{1}{2}$.

Now it seems to me that there’s some important sense in which, even though Laplace’s Rule assigned a lower probability to the data, it’s significantly simpler than the second and third hypotheses and is the wiser answer. Does Solomonoff induction agree?
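Worked out with exact fractions (the variable names are just labels for the three hypotheses, observing green then white):

```python
from fractions import Fraction

# Probability each hypothesis assigns to the observed sequence G-then-W.
laplace_rule = Fraction(1, 2) * Fraction(1, 3)      # P(G) = 1/2, then P(W|G) = 1/3
barrel_10_10 = Fraction(10, 20) * Fraction(10, 19)  # 10 green, 10 white, drawn without replacement
alternating  = Fraction(1, 2) * Fraction(1, 1)      # 1/2 for the first ball, then the color is forced

print(laplace_rule, barrel_10_10, alternating)  # prints 1/6 5/19 1/2
```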

**BLAINE:** I think you might be taking into account some prior knowledge that isn’t in the sequence itself, there. Like, things that alternate either **101010…** or **010101…** are *objectively* simple in the sense that a short computer program simulates them or assigns probabilities to them. It’s just unlikely to be true about an actual barrel of white and green balls.

If **10** is literally the first sense data that you ever see, when you are a fresh new intelligence with only two bits to rub together, then “The universe consists of alternating bits” is no less reasonable than “The universe produces bits with an unknown random frequency anywhere between $0$ and $1$.”

**ASHLEY:** Conceded. But as I was going to say, we have three hypotheses that assigned $\frac{1}{6}$, $\frac{5}{19}$, and $\frac{1}{2}$ to the observed data; but to know the posterior probabilities of these hypotheses we need to actually say how relatively likely they were *a priori*, so we can multiply by the odds ratio. Like, if the prior odds were $a : b : c$, the posterior odds would be $\frac{a}{6} : \frac{5b}{19} : \frac{c}{2}$. Now, how would Solomonoff induction assign prior probabilities to those computer programs? Because I remember you saying, way back when, that you thought Solomonoff was the answer to “How should Bayesians assign priors?”

**BLAINE:** Well, how would you do it?

**ASHLEY:** I mean… yes, the simpler rules should be favored, but it seems to me that there’s some deep questions as to the exact relative ‘simplicity’ of the rule $\frac{M+1}{M+N+2}$, or the rule $\frac{10-W}{20-W-G}$, or the rule “alternate the bits”...

**BLAINE:** Suppose I ask you to just make up some simple rule.

**ASHLEY:** Okay, if I just say the rule I think you’re looking for, it would be: “The complexity of a computer program is the number of bits needed to specify it to some arbitrary but reasonable choice of compiler or Universal Turing Machine, and the prior probability is $2$ to the negative power of the number of bits.” E.g., there are 32 possible 5-bit programs, so each such program has probability $\frac{1}{32}$. So if it takes 16 bits to specify Laplace’s Rule of Succession, which seems a tad optimistic, then the prior probability would be $2^{-16} = \frac{1}{65536}$, which seems a tad pessimistic.

**BLAINE:** Now just apply that rule to the infinity of possible computer programs that assign probabilities to the observed data, update their posterior probabilities based on the probability they’ve assigned to the evidence so far, sum over all of them to get your next prediction, and we’re done. And yes, that requires a hypercomputer that can solve the halting problem, but we’re talking ideals here. Let $\mathcal{P}$ be the set of all programs, write $\operatorname{len}(p)$ for the length in bits of a program $p$, write $s$ for the sense data so far, $p(s)$ for the probability that $p$ assigned to $s$, and $p(1 \mid s)$ for the probability $p$ assigns to the next bit being 1; then

$$\mathbb{P}(\text{next bit is } 1 \mid s) = \frac{\sum_{p \in \mathcal{P}} 2^{-\operatorname{len}(p)} \cdot p(s) \cdot p(1 \mid s)}{\sum_{p \in \mathcal{P}} 2^{-\operatorname{len}(p)} \cdot p(s)}$$
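Over a finite toy class of hypotheses the same arithmetic is perfectly computable. Here is a sketch with three predictors and made-up code lengths (none of these lengths are real complexities under any particular UTM):

```python
from fractions import Fraction

def laplace(seq):
    return Fraction(seq.count(1) + 1, len(seq) + 2)

def faircoin(seq):
    return Fraction(1, 2)

def blind(seq):
    return Fraction(99, 100)

# (predictor, made-up code length in bits): the lengths are illustrative
# stand-ins, not real program lengths under any particular compiler.
HYPOTHESES = [(laplace, 16), (faircoin, 10), (blind, 5)]

def predict_next_is_1(data):
    """Mixture prediction: weight each program by 2^-len times the
    probability it assigned to the data so far, then average its
    next-bit prediction under those posterior weights."""
    num = den = Fraction(0)
    for prog, length in HYPOTHESES:
        w = Fraction(1, 2 ** length)
        for i, b in enumerate(data):          # likelihood of data under prog
            p1 = prog(data[:i])
            w *= p1 if b == 1 else 1 - p1
        num += w * prog(data)
        den += w
    return num / den
```

After four 0s the mixture leans below one half; after four 1s, `blind`’s high likelihood drags the prediction close to 0.99.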

**ASHLEY:** Uh.

**BLAINE:** Yes?

**ASHLEY:** Um...

**BLAINE:** What is it?

**ASHLEY:** You invoked a countably infinite set, so I’m trying to figure out if my predicted probability for the next bit must necessarily converge to a limit as I consider increasingly large finite subsets in any order.

**BLAINE:** *(sighs)* Of course you are.

**ASHLEY:** I think you might have left out some important caveats. Like, if I take the rule literally, then the program “**0**” has probability $\frac{1}{2}$, the program “**1**” has probability $\frac{1}{2}$, the program “**01**” has probability $\frac{1}{4}$, and already the total probability is $\frac{1}{2} + \frac{1}{2} + \frac{1}{4} = \frac{5}{4}$, which is *too much.* So I can’t actually normalize it, because the series sums to infinity—each program length $n$ contributes a total of $2^n \cdot 2^{-n} = 1$. Now, this just means we need to, say, decide that the probability of a program having length 1 is $\frac{1}{2}$, the probability of it having length 2 is $\frac{1}{4}$, and so on out to infinity, but it’s an added postulate.

**BLAINE:** The conventional method is to require a prefix-free code. If “**0111**” is a valid program then “**01110**” cannot be a valid program. With that constraint, assigning $2$ to the negative power of the length of the code, to all valid codes, will sum to at most $1$; and we can normalize their relative probabilities to get the actual prior.
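The prefix-free requirement and the resulting sum (the Kraft inequality) can be checked mechanically (a sketch; the example code sets are arbitrary):

```python
from fractions import Fraction
from itertools import combinations

def is_prefix_free(codes):
    """True iff no code in the set is a prefix of another."""
    return not any(a.startswith(b) or b.startswith(a)
                   for a, b in combinations(codes, 2))

def kraft_sum(codes):
    """Sum of 2^-len(code) over all codes; at most 1 for a prefix-free set."""
    return sum(Fraction(1, 2 ** len(c)) for c in codes)

good = ["0", "10", "110", "111"]
bad = ["0", "1", "01"]
print(is_prefix_free(good), kraft_sum(good))  # prints True 1
print(is_prefix_free(bad), kraft_sum(bad))    # prints False 5/4
```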

**ASHLEY:** Okay. And you’re sure that it doesn’t matter in what order we consider more and more programs as we approach the limit, because… no, I see it. Every program has positive probability mass, with the total set summing to $1$, and Bayesian updating doesn’t change that. So as I consider more and more programs, in any order, there are only so many large contributions that can be made from the mix—there’s only so often that the final probability can change.

Like, let’s say there are at most 99 programs, each with probability 1%, that assign probability $0$ to the next bit being a 1; that’s only 99 times the final answer can go down by as much as $0.01$ as the limit is approached.

**BLAINE:** This idea generalizes, and is important. List all possible computer programs, in any order you like. Use any definition of *simplicity* that you like, so long as for any given amount of simplicity, there are only a finite number of computer programs that simple. As you go on carving off chunks of prior probability mass and assigning them to programs, it *must* be the case that as programs get more and more complicated, their prior probability approaches zero!—though it’s still positive for every finite program, because of Cromwell’s Rule.

You can’t have more than 99 programs assigned 1% prior probability and still obey Cromwell’s Rule, which means there must be some *most complex* program that is assigned 1% probability, which means every more complicated program must have less than 1% probability out to the end of the infinite list.

**ASHLEY:** Huh. I don’t think I’ve ever heard that justification for Occam’s Razor before. I think I like it. I mean, I’ve heard a lot of appeals to the empirical simplicity of the world, and so on, but this is the first time I’ve seen a *logical* proof that, in the limit, more complicated hypotheses *must* be less likely than simple ones.

**BLAINE:** Behold the awesomeness that is Solomonoff Induction!

**ASHLEY:** Uh, but you didn’t actually use the notion of *computational* simplicity to get that conclusion; you just required that the supply of probability mass is finite and the supply of potential complications is infinite. Any way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears.

**BLAINE:** Well, maybe. But it so happens that Yudkowsky did invent or reinvent that argument after pondering Solomonoff induction, and if it predates him (or Solomonoff) then Yudkowsky doesn’t know the source. Concrete inspiration for simplified arguments is also a credit to a theory, especially if the simplified argument didn’t exist before that.

**ASHLEY:** Fair enough.

### v. Choice of Universal Turing Machine

**ASHLEY:** My next question is about the choice of Universal Turing Machine—the choice of compiler for our program codes. There’s an infinite number of possibilities there, and in principle, the right choice of compiler can make our probability for the next thing we’ll see be anything we like. At least I’d expect this to be the case, based on how the “problem of induction” usually goes. So with the right choice of Universal Turing Machine, our online crackpot can still make it be the case that Solomonoff induction predicts Canada invading the USA.

**BLAINE:** One way of looking at the problem of good epistemology, I’d say, is that the job of a good epistemology is not to make it *impossible* to err. You can still blow off your foot if you really insist on pointing the shotgun at your foot and pulling the trigger.

The job of good epistemology is to make it *more obvious* when you’re about to blow your own foot off with a shotgun. On this dimension, Solomonoff induction excels. If you claim that we ought to pick an enormously complicated compiler to encode our hypotheses, in order to make the ‘simplest hypothesis that fits the evidence’ be one that predicts Canada invading the USA, then it should be obvious to everyone except you that you are in the process of screwing up.

**ASHLEY:** Ah, but of course they’ll say that their code is just the simple and natural choice of Universal Turing Machine, because they’ll exhibit a meta-UTM which outputs that UTM given only a short code. And if you say the meta-UTM is complicated—

**BLAINE:** Flon’s Law says, “There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code.” You can’t make it impossible for people to screw up, but you can make it *more obvious.* And Solomonoff induction would make it even more obvious than might at first be obvious, because—

**ASHLEY:** Your Honor, I move to have the previous sentence taken out and shot.

**BLAINE:** Let’s say that the whole of your sensory information is the string **10101010…** Consider the stupid hypothesis, “This program has a 99% probability of producing a **1** on every turn”, which you jumped to after seeing the first bit. What would you need to claim your priors were like—what Universal Turing Machine would you need to endorse—in order to maintain blind faith in that hypothesis in the face of ever-mounting evidence?

**ASHLEY:** You’d need a Universal Turing Machine **blind-utm** that assigned a very high probability to the **blind** program “def ProbNextElementIsOne(previous_sequence): return 0.99”. Like, if **blind-utm** sees the code **0**, it executes the **blind** program “return 0.99”.

And to defend yourself against charges that your UTM **blind-utm** was not itself simple, you’d need a meta-UTM, **blind-meta**, which, when it sees the code **10**, executes **blind-utm**.

And to really wrap it up, you’d need to take a fixed point through all towers of meta and use diagonalization to create the UTM **blind-diag** that, when it sees the program code **0**, executes “return 0.99”, and when it sees the program code **10**, executes **blind-diag**.

I guess I can see some sense in which, even if that doesn’t resolve Hume’s problem of induction, anyone *actually advocating that* would be committing blatant shenanigans on a commonsense level, arguably more blatant than it would have been if we hadn’t made them present the UTM.

**BLAINE:** Actually, the shenanigans have to be much worse than that in order to fool Solomonoff induction. Like, Solomonoff induction using your **blind-diag** isn’t fooled for a minute, even taking **blind-diag** entirely on its own terms.

**ASHLEY:** Really?

**BLAINE:** Assuming 60 sequence items per second? Yes, absolutely, Solomonoff induction shrugs off the delusion in the first minute, unless there are further and even more blatant shenanigans.

We did require that your **blind-diag** be a *Universal* Turing Machine, meaning that it can reproduce every computable probability distribution over sequences, given some particular code to compile. Let’s say there’s a 200-bit code **laplace** for Laplace’s Rule of Succession, “lambda sequence: return (sequence.count(‘1’) + 1) / (len(sequence) + 2)”, so that its prior probability relative to the 1-bit code for **blind** is $2^{-199}$. Let’s say that the sense data is around 50/50 1s and 0s. Every time we see a 1, **blind** gains a factor of 2 over **laplace** (99% vs. 50% probability), and every time we see a 0, **blind** loses a factor of 50 over **laplace** (1% vs. 50% probability).

On average, then, every 2 elements of the sequence cost **blind** a factor of 25, which is a bit more than 4 bits of probability; i.e., on average **blind** is losing more than two bits of probability per element of the sequence observed.

So it’s only going to take around 100 elements of the sequence, or a little less than two seconds, for **laplace** to win out over **blind**.

**ASHLEY:** I see. I was focusing on a UTM that assigned lots of prior probability to **blind**, but what I really needed was a compiler that, *while still being universal* and encoding every possibility somewhere, still assigned a really tiny probability to **laplace**, to **faircoin** (the program “return 0.5”), and to every other hypothesis that does better, round by round, than **blind**. So what I really need to carry off the delusion is **obstinate-diag** that is universal, assigns high probability to **blind**, requires billions of bits to specify **laplace**, and also requires billions of bits to specify any UTM that can execute **laplace** as a shorter code than billions of bits. Because otherwise we will say, “Ah, but given the evidence, this other UTM would have done better.” I agree that those are even more blatant shenanigans than I thought.

**BLAINE:** Yes. And even *then*, even if your UTM takes two billion bits to specify **faircoin**, Solomonoff induction will lose its faith in **blind** after seeing a billion bits.

Which will happen before the first year is out, if we’re getting 60 bits per second.

And if you turn around and say, “Oh, well, I didn’t mean *that* was my UTM, I really meant *this* was my UTM, this thing over here where it takes a *trillion* bits to encode **faircoin**”, then that’s probability-theory-violating shenanigans where you’re changing your priors as you go.

**ASHLEY:** That’s actually a very interesting point—that what’s needed for a Bayesian to maintain a delusion in the face of mounting evidence is not so much a blindly high prior for the delusory hypothesis, as a blind skepticism of all its alternatives.

But what if their UTM requires a googol bits to specify **faircoin**? What if **blind** and **blind-diag**, or programs pretty much isomorphic to them, are the only programs that can be specified in less than a googol bits?

**BLAINE:** Then your desire to shoot your own foot off has been made very, very visible to anyone who understands Solomonoff induction. We’re not going to get absolutely objective prior probabilities as a matter of logical deduction, not without principles that are unknown to me and beyond the scope of Solomonoff induction. But we can make the stupidity really *blatant* and force you to construct a downright embarrassing Universal Turing Machine.

**ASHLEY:** I guess I can see that. I mean, I guess that if you’re presenting a ludicrously complicated Universal Turing Machine that just refuses to encode the program that would predict Canada not invading, that’s more *visibly* silly than a verbal appeal that says, “But you must just have faith that Canada will invade.” I guess part of me is still hoping for a more objective sense of “complicated”.

**BLAINE:** We could say that reasonable UTMs should contain a small number of wheels and gears in a material instantiation under our universe’s laws of physics, which might in some ultimate sense provide a prior over priors. Like, the human brain evolved from DNA-based specifications, and the things you can construct out of relatively small numbers of physical objects are ‘simple’ under the ‘prior’ implicitly searched by natural selection.

**ASHLEY:** Ah, but what if I think it’s likely that our physical universe or the search space of DNA won’t give us a good idea of what’s complicated?

**BLAINE:** For your alternative notion of what’s complicated to go on being believed even as other hypotheses are racking up better experimental predictions, you need to assign a *ludicrously low probability* that our universe’s space of physical systems buildable using a small number of objects, could *possibly* provide better predictions of that universe than your complicated alternative notion of prior probability.

We don't need to appeal that it's *a priori* more likely than not that "a universe can be predicted well by low-object-number machines built using that universe's physics." Instead, we appeal that it would violate Cromwell's Rule, and would constitute exceedingly special pleading, to assign the possibility of a physically learnable universe a probability of *less* than $2^{-1{,}000{,}000}$. It then takes only a megabit of exposure to notice that the universe seems to be regular.

**ASHLEY:** In other words, so long as you don't start with an absolute and blind prejudice against the universe being predictable by simple machines encoded in our universe's physics—so long as, on this planet of seven billion people, you don't assign probabilities less than $2^{-1{,}000{,}000}$ to the other person being right about what is a good Universal Turing Machine—then the pure logic of Bayesian updating will rapidly force you to the conclusion that induction works.

### vi. Why algorithmic complexity?

**ASHLEY:** Hm. I don’t know that good *pragmatic* answers to the problem of induction were ever in short supply. Still, on the margins, it’s a more forceful pragmatic answer than the last one I remember hearing.

**BLAINE:** Yay! *Now* isn’t Solomonoff induction wonderful?

**ASHLEY:** Maybe?

You didn't really use the principle of *computational* simplicity to derive that lesson. You just used that *some inductive principle* ought to have a prior probability of more than $2^{-1{,}000{,}000}$.

**BLAINE:** …

**ASHLEY:** Can you give me an example of a problem where the *computational* definition of simplicity matters and can’t be factored back out of an argument?

**BLAINE:** As it happens, yes I can. I can give you *three* examples of how it matters.

**ASHLEY:** Vun… two… three! Three examples! Ah-ah-ah!

**BLAINE:** Must you do that every—oh, never mind. Example one is that galaxies are not so improbable that no one could ever believe in them, example two is that the limits of possibility include Terence Tao, and example three is that diffraction is a simpler explanation of rainbows than divine intervention.

**ASHLEY:** These statements are all so obvious that no further explanation of any of them is required.

**BLAINE:** On the contrary! And I’ll start with example one. Back when the Andromeda Galaxy was a hazy mist seen through a telescope, and someone first suggested that maybe that hazy mist was an incredibly large number of distant stars—that many “nebulae” were actually *distant galaxies*, and our own Milky Way was only one of them—there was a time when Occam’s Razor was invoked against that hypothesis.

**ASHLEY:** What? Why?

**BLAINE:** They invoked Occam’s Razor against the galactic hypothesis, because if that were the case, then there would be *a much huger number of stars* in the universe, and the stars would be entities, and Occam’s Razor said “Entities are not to be multiplied beyond necessity.”

**ASHLEY:** That’s not how Occam’s Razor works. The “entities” of a theory are its types, not its objects. If you say that the hazy mists are distant galaxies of stars, then you’ve reduced the number of laws because you’re just postulating a previously seen type, namely stars organized into galaxies, instead of a new type of hazy astronomical mist.

**BLAINE:** Okay, but imagine that it’s the nineteenth century and somebody replies to you, “Well, I disagree! William of Ockham said not to multiply entities, this galactic hypothesis obviously creates a huge number of entities, and that’s the way I see it!”

**ASHLEY:** I think I’d give them your spiel about there being no human epistemology that can stop you from shooting off your own foot.

**BLAINE:** I *don’t* think you’d be justified in giving them that lecture.

I’ll parenthesize at this point that you ought to be very careful when you say “I can’t stop you from shooting off your own foot”, lest it become a Fully General Scornful Rejoinder. Like, if you say that to someone, you’d better be able to explain exactly why Occam’s Razor counts types as entities but not objects. In fact, you’d better explain that to someone *before* you go advising them not to shoot off their own foot. And once you’ve told them what you think is foolish and why, you might as well stop there. Except in really weird cases of people presenting us with enormously complicated and jury-rigged Universal Turing Machines, and then we say the shotgun thing.

**ASHLEY:** That’s fair. So, I’m not sure what I’d have answered before starting this conversation, which is much to your credit, friend Blaine. But now that I’ve had this conversation, it’s obvious that it’s new types and not new objects that use up the probability mass we need to distribute over all hypotheses. Like, I need to distribute my probability mass over “Hypothesis 1: there are stars” and “Hypothesis 2: there are stars plus huge distant hazy mists”. I don’t need to distribute my probability mass over all the actual stars in the galaxy!

**BLAINE:** In terms of Solomonoff induction, we penalize a program’s *lines of code* rather than its *runtime* or *RAM used*, because we need to distribute our probability mass over possible alternatives each time we add a line of code. There’s no corresponding *choice between mutually exclusive alternatives* when a program uses more runtime or RAM.

(**ELIEZER:** (*whispering*) *Unless we need a **leverage prior** to consider the hypothesis of being a particular agent inside all that RAM or runtime.*)
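Blaine's accounting can be sketched as a toy calculation. The $2^{-\text{length}}$ weighting below is the usual Solomonoff-style prior over binary program codes, with prefix-free coding details simplified away; the point is that each added bit of code forces a choice between two alternatives and so halves the prior, while extra runtime or RAM forces no such choice.

```python
from fractions import Fraction

def prior(program_bits: str) -> Fraction:
    """Prior weight of a program code: halves with every added bit,
    because the mass must split between the two possible next bits."""
    return Fraction(1, 2 ** len(program_bits))

# Adding a bit of code always costs a factor of 2 in prior probability.
assert prior("10110") == prior("1011") / 2

# The mass spread over all length-5 codes equals the mass spread over
# all length-4 codes: lengthening programs redistributes probability,
# it never creates more.
mass_4 = sum(prior(format(i, "04b")) for i in range(2 ** 4))
mass_5 = sum(prior(format(i, "05b")) for i in range(2 ** 5))
assert mass_4 == mass_5 == 1

# Nothing analogous happens when a fixed program runs longer or uses
# more memory, so no runtime or RAM penalty falls out of the prior.
```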

**ASHLEY:** Or to put it another way: any fully detailed model of the universe would require some particular arrangement of stars, and the more stars there are, the more possible arrangements there are. But when we look through the telescope and see a hazy mist, we get to sum over all arrangements of stars that would produce that hazy mist. If some galactic hypothesis required a hundred billion stars to *all* be in *particular exact places* without further explanation or cause, then that would indeed be a grave improbability.

**BLAINE:** Precisely. And if you needed all the hundred billion stars to be in particular exact places, that’s just the kind of hypothesis that would take a huge computer program to specify.

**ASHLEY:** But does it really require learning Solomonoff induction to understand that point? Maybe the bad argument against galaxies was just a motivated error somebody made in the nineteenth century, because they didn’t want to live in a big universe for emotional reasons.

**BLAINE:** The same debate is playing out today over no-collapse versions of quantum mechanics, also somewhat unfortunately known as “many-worlds interpretations”. Now, regardless of what anyone thinks of all the other parts of that debate, there’s a *particular* sub-argument where somebody says, “It’s simpler to have a collapse interpretation because all those extra quantum ‘worlds’ are extra entities that are unnecessary under Occam’s Razor since we can’t see them.” And Solomonoff induction tells us that this invocation of Occam’s Razor is flatly misguided because Occam’s Razor does not work like that.

Basically, they're trying to cut down the RAM and runtime of the universe, at the expense of adding an extra line of code, namely the code for the collapse postulate that prunes off parts of the wavefunction that are in undetectably weak causal contact with us.

**ASHLEY:** Hmm. Now that you put it that way, it's not so obvious to me that it makes sense to have *no* prejudice against *sufficiently* enormous universes. I mean, the universe we see around us is exponentially vast but not superexponentially vast—the visible atoms are $10^{80}$ in number or so, not $10^{10^{80}}$ or "bigger than Graham's Number". Maybe there's some fundamental limit on how much gets computed.

**BLAINE:** You, um, know that on the Standard Model, the universe doesn't just cut out and stop existing at the point where our telescopes stop seeing it? There isn't a giant void surrounding a little bubble of matter centered perfectly on Earth? It calls for a literally infinite amount of matter? I mean, I guess if you don't like living in a universe with more than $10^{80}$ entities, a universe where *too much gets computed,* you could try to specify *extra laws of physics* that create an abrupt spatial boundary with no further matter beyond them, somewhere out past where our telescopes can see—

**ASHLEY:** All right, point taken.

(**ELIEZER: **(*whispering*) *Though I personally suspect that the spatial multiverse and the quantum multiverse are the same multiverse, and that what lies beyond the reach of our telescopes is not entangled with us—meaning that the universe is as finitely large as the superposition of all possible quantum branches, rather than being literally infinite in space.*)

**BLAINE:** I mean, there is in fact an alternative formalism to Solomonoff induction, namely Levin search, which says that program complexities are further penalized by the logarithm of their runtime. In other words, it would say that ‘explanations’ or ‘universes’ that require a long time to run are inherently less probable.

Some people like Levin search more than Solomonoff induction because it’s more computable. I dislike Levin search because (a) it has no fundamental epistemic justification and (b) it assigns probability zero to quantum mechanics.

**ASHLEY:** Can you unpack that last part?

**BLAINE:** If, as is currently suspected, there’s no way to simulate quantum computers using classical computers without an exponential slowdown, then even in principle, this universe requires exponentially vast amounts of classical computing power to simulate.

Let’s say that with sufficiently advanced technology, you can build a quantum computer with a million qubits. On Levin’s definition of complexity, for the universe to be like that is as improbable *a priori* as *any particular* set of laws of physics that must specify on the order of one million equations.

Can you imagine how improbable it would be to see a list of one hundred thousand differential equations, without any justification or evidence attached, and be told that they were the laws of physics? That’s the kind of penalty that Levin search or Schmidhuber’s Speed Prior would attach to any laws of physics that could run a quantum computation of a million qubits, or, heck, any physics that claimed that a protein was being folded in a way that ultimately went through considering millions of quarks interacting.

If you’re *not* absolutely certain *a priori* that the universe *isn’t* like that, you don’t believe in Schmidhuber’s Speed Prior. Even with a collapse postulate, the amount of computation that goes on before a collapse would be prohibited by the Speed Prior.
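The penalty Blaine describes can be put in rough numbers. In Levin-style scoring, a hypothesis costs its code length plus the log of its runtime, so a physics whose classical simulation takes on the order of $2^n$ steps for $n$ qubits picks up roughly $n$ extra bits of improbability. In the sketch below, the 10,000-bit code length is an arbitrary placeholder, and the exponential classical slowdown is the assumption Blaine names.

```python
import math

def solomonoff_penalty_bits(code_len_bits, runtime_steps):
    """Solomonoff induction: only program length is penalized."""
    return code_len_bits

def levin_penalty_bits(code_len_bits, runtime_steps):
    """Levin search / Speed Prior (sketch): length plus log2(runtime)."""
    return code_len_bits + math.log2(runtime_steps)

qubits = 1_000_000
classical_steps = 2 ** qubits   # assumed exponential classical slowdown

print(solomonoff_penalty_bits(10_000, classical_steps))   # 10000
print(levin_penalty_bits(10_000, classical_steps))        # 1010000.0
```

A million extra bits is the same order of improbability that Blaine assigns to a long list of arbitrary, unjustified equations, which is his point against the Speed Prior.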

**ASHLEY:** Okay, yeah. If you’re phrasing it that way—that the Speed Prior assigns probability nearly zero to quantum mechanics, so we shouldn’t believe in the Speed Prior—then I can’t easily see a way to extract out the same point without making reference to ideas like penalizing algorithmic complexity but not penalizing runtime. I mean, maybe I could extract the lesson back out but it’s easier to say, or more obvious, by pointing to the idea that Occam’s Razor should penalize algorithmic complexity but not runtime.

**BLAINE:** And that isn’t just *implied by* Solomonoff induction, it’s pretty much the whole idea of Solomonoff induction, right?

**ASHLEY:** Maaaybe.

**BLAINE:** For example two, that Solomonoff induction outperforms even Terence Tao, we want to have a theorem that says Solomonoff induction catches up to every computable way of reasoning in the limit. Since we iterated through all possible computer programs, we know that somewhere in there is a simulated copy of Terence Tao in a simulated room, and if this requires a petabyte to specify, then we shouldn’t have to make more than a quadrillion bits of error relative to Terence Tao before zeroing in on the Terence Tao hypothesis.

I mean, in practice, I’d expect far less than a quadrillion bits of error before the system was behaving like it was vastly smarter than Terence Tao. It’d take a lot less than a quadrillion bits to give you some specification of a universe with simple physics that gave rise to a civilization of vastly greater than intergalactic extent. Like, Graham’s Number is a very simple number, so it’s easy to specify a universe that runs for that long before it returns an answer. It’s not obvious how you’d extract Solomonoff predictions from that civilization and incentivize them to make good ones, but I’d be surprised if there were no Turing machine of fewer than one thousand states which did that somehow.

**ASHLEY:** …

**BLAINE:** And for all I know there might be even better ways than that of getting exceptionally good predictions, somewhere in the list of the first decillion computer programs. That is, somewhere in the first 100 bits.

**ASHLEY:** So your basic argument is, “Never mind Terence Tao, Solomonoff induction dominates *God*.”

**BLAINE:** Solomonoff induction isn’t the epistemic prediction capability of a superintelligence. It’s the epistemic prediction capability of something that eats superintelligences like potato chips.

**ASHLEY:** Is there any point to contemplating an epistemology so powerful that it will never begin to fit inside the universe?

**BLAINE:** Maybe? I mean, a lot of times, you just find people *failing to respect* the notion of ordinary superintelligence, doing the equivalent of supposing that a superintelligence behaves like a bad Hollywood genius and misses obvious-seeming moves. And a lot of times you find them insisting that “there’s a limit to how much information you can get from the data” or something along those lines. “That Alien Message” is intended to convey the counterpoint, that smarter entities can extract more info than is immediately apparent on the surface of things.

Similarly, thinking about Solomonoff induction might also cause someone to realize that if, say, you simulated zillions of possible simple universes, you could look at which agents were seeing exact data like the data you got, and figure out where you were inside that range of possibilities, so long as there was literally *any* correlation to use.

And if you say that an agent *can’t* extract that data, you’re making a claim about which shortcuts to Solomonoff induction are and aren’t computable. In fact, you’re probably pointing at some *particular* shortcut and claiming nobody can ever figure that out using a reasonable amount of computing power *even though the info is there in principle.* Contemplating Solomonoff induction might help people realize that, yes, the data *is* there in principle. Like, until I ask you to imagine a civilization running for Graham’s Number of years inside a Graham-sized memory space, you might not imagine them trying all the methods of analysis that *you personally* can imagine being possible.

**ASHLEY:** If somebody is making that mistake in the first place, I’m not sure you can beat it out of them by telling them the definition of Solomonoff induction.

**BLAINE:** Maybe not. But to brute-force somebody into imagining that sufficiently advanced agents have Level 1 protagonist intelligence, that they are epistemically efficient rather than missing factual questions that are visible even to us, you might need to ask them to imagine an agent that can see *literally anything seeable in the computational limit* just so that their mental simulation of the ideal answer isn’t running up against stupidity assertions.

Like, I think there are a lot of people who could benefit from looking over the evidence they already personally have, and asking what a Solomonoff inductor could deduce from it, so that they wouldn't be running up against stupidity assertions *about themselves.* It's the same trick as asking yourself what God, Richard Feynman, or a "perfect rationalist" would believe in your shoes. You just have to pick a real or imaginary person that you respect enough for your model of that person to lack the same stupidity assertions that you believe about yourself.

**ASHLEY:** Well, let’s once again try to factor out the part about Solomonoff induction in particular. If we’re trying to imagine something epistemically smarter than ourselves, is there anything we get from imagining a complexity-weighted prior over programs in particular? That we don’t get from, say, trying to imagine the reasoning of one particular Graham-Number-sized civilization?

**BLAINE:** We get the surety that even anything we imagine *Terence Tao himself* as being able to figure out, is something that is allowed to be known after some bounded number of errors versus Terence Tao, because Terence Tao is inside the list of all computer programs and gets promoted further each time the dominant paradigm makes a prediction error relative to him.

We can’t get that dominance property without invoking “all possible ways of computing” or something like it—we can’t incorporate the power of all reasonable processes, unless we have a set such that all the reasonable processes are in it. The enumeration of all possible computer programs is one such set.

**ASHLEY:** Hm.

**BLAINE:** Example three, diffraction is a simpler explanation of rainbows than divine intervention.

I don’t think I need to belabor this point very much, even though in one way it might be the most central one. It sounds like “Jehovah placed rainbows in the sky as a sign that the Great Flood would never come again” is a ‘simple’ explanation; you can explain it to a child in nothing flat. Just the diagram of diffraction through a raindrop, to say nothing of the Principle of Least Action underlying diffraction, is something that humans don’t usually learn until undergraduate physics, and it *sounds* more alien and less intuitive than Jehovah. In what sense is this intuitive sense of simplicity wrong? What gold standard are we comparing it to, that could be a better sense of simplicity than just ‘how hard is it for me to understand’?

The answer is Solomonoff induction and the rule which says that simplicity is measured by the size of the computer program, not by how hard things are for human beings to understand. Diffraction is a small computer program; any programmer who understands diffraction can simulate it without too much trouble. Jehovah would be a much huger program—a complete mind that implements anger, vengeance, belief, memory, consequentialism, etcetera. Solomonoff induction is what tells us to retrain our intuitions so that differential equations feel like less burdensome explanations than heroic mythology.

**ASHLEY:** Now hold on just a second, if that’s actually how Solomonoff induction works then it’s not working very well. I mean, Abraham Lincoln was a great big complicated mechanism from an algorithmic standpoint—he had a hundred trillion synapses in his brain—but that doesn’t mean I should look at the historical role supposedly filled by Abraham Lincoln, and look for simple mechanical rules that would account for the things Lincoln is said to have done. If you’ve already seen humans and you’ve already learned to model human minds, it shouldn’t cost a vast amount to say there’s one *more* human, like Lincoln, or one more entity that is *cognitively humanoid*, like the Old Testament jealous-god version of Jehovah. It may be *wrong* but it shouldn’t be vastly improbable *a priori*.

If you’ve already been forced to acknowledge the existence of some humanlike minds, why not others? Shouldn’t you get to reuse the complexity that you postulated to explain humans, in postulating Jehovah?

In fact, shouldn’t that be what Solomonoff induction *does?* If you have a computer program that can model and predict humans, it should only be a slight modification of that program—only slightly longer in length and added code—to predict the modified-human entity that is Jehovah.

**BLAINE:** Hm. That’s fair. I may have to retreat from that example somewhat.

In fact, that’s yet another point to the credit of Solomonoff induction! The ability of programs to reuse code, incorporates our intuitive sense that if you’ve already postulated one kind of thing, it shouldn’t cost as much to postulate a similar kind of thing elsewhere!

**ASHLEY:** Uh huh.

**BLAINE:** Well, but even if I was wrong that Solomonoff induction should make Jehovah seem very improbable, it’s still Solomonoff induction that says that the alternative hypothesis of ‘diffraction’ shouldn’t itself be seen as burdensome—even though diffraction might require a longer time to explain to a human, it’s still at heart a simple program.

**ASHLEY:** Hmm.

I’m trying to think if there’s some notion of ‘simplicity’ that I can abstract away from ‘simple program’ as the nice property that diffraction has as an explanation for rainbows, but I guess anything I try to say is going to come down to some way of counting the wheels and gears inside the explanation, and justify the complexity penalty on probability by the increased space of possible configurations each time we add a new gear. And I can’t make it be about surface details because that will make whole humans seem way too improbable.

If I have to use simply specified systems and I can’t use surface details or runtime, that’s probably going to end up basically equivalent to Solomonoff induction. So in that case we might as well use Solomonoff induction, which is probably simpler than whatever I’ll think up and will give us the same advice. Okay, you’ve mostly convinced me.

**BLAINE:** *Mostly?* What’s left?

### vii. Limitations

**ASHLEY:** Well, several things. Most of all, I think of how the ‘language of thought’ or ‘language of epistemology’ seems to be different in some sense from the ‘language of computer programs’.

Like, when I think about the laws of Newtonian gravity, or when I think about my Mom, it’s not just one more line of code tacked onto a big black-box computer program. It’s more like I’m crafting an explanation with modular parts—if it contains a part that looks like Newtonian mechanics, I step back and reason that it might contain other parts with differential equations. If it has a line of code for a Mom, it might have a line of code for a Dad.

I’m worried that if I understood how humans think like that, maybe I’d look at Solomonoff induction and see how it doesn’t incorporate some further key insight that’s needed to do good epistemology.

**BLAINE:** Solomonoff induction literally incorporates a copy of you thinking about whatever you’re thinking right now.

**ASHLEY:** Okay, great, but that’s *inside* the system. If Solomonoff learns to promote computer programs containing good epistemology, but is not itself good epistemology, then it’s not the best possible answer to “How do you compute epistemology?”

Like, natural selection produced humans but population genetics is not an answer to “How does intelligence work?” because the intelligence is in the inner content rather than the outer system. In that sense, it seems like a reasonable worry that Solomonoff induction might incorporate only *some* principles of good epistemology rather than *all* the principles, even if the *internal content* rather than the *outer system* might bootstrap the rest of the way.

**BLAINE:** Hm. If you put it *that* way...

(*long pause*)

… then I guess I have to agree. I mean, Solomonoff induction doesn’t explicitly say anything about, say, the distinction between analytic propositions and empirical propositions, and knowing that is part of good epistemology on my view. So if you want to say that Solomonoff induction is something that bootstraps to good epistemology rather than being all of good epistemology by itself, I guess I have no choice but to agree.

I do think the outer system already contains a *lot* of good epistemology and inspires a lot of good advice all on its own. Especially if you give it credit for formally reproducing principles that are “common sense”, because correctly formalizing common sense is no small feat.

**ASHLEY:** Got a list of the good advice you think is derivable?

**BLAINE:** Um. Not really, but off the top of my head:

1. The best explanation is the one with the best mixture of simplicity and matching the evidence.

2. "Simplicity" and "matching the evidence" can both be measured in bits, so they're commensurable.

3. The simplicity of a hypothesis is the number of bits required to formally specify it—for example, as a computer program.

4. When a hypothesis assigns twice as much probability to the exact observations seen so far as some other hypothesis, that's one bit's worth of relatively better matching the evidence.

5. You should actually be making your predictions using all the explanations, not just the single best one, but explanations that poorly match the evidence will drop down to tiny contributions very quickly.

6. Good explanations let you compress lots of data into compact reasons which strongly predict seeing just that data and no other data.

7. Logic can't dictate prior probabilities absolutely, but if you assign probability less than $2^{-1{,}000{,}000}$ to the prior that mechanisms constructed using a small number of objects from your universe might be able to well predict that universe, you're being unreasonable.

8. So long as you don't assign infinitesimal prior probability to hypotheses that let you do induction, they will very rapidly overtake hypotheses that don't.

9. It is a logical truth, not a contingent one, that more complex hypotheses must in the limit be less probable than simple ones.

10. Epistemic rationality is a precise art with no user-controlled degrees of freedom in how much probability you ideally ought to assign to a belief. If you think you can tweak the probability depending on what you want the answer to be, you're doing something wrong.

11. Things that you've seen in one place might reappear somewhere else.

12. Once you've learned a new language for your explanations, like differential equations, you can use it to describe other things, because your best hypotheses will now already encode that language.

13. We can learn meta-reasoning procedures as well as object-level facts by looking at which meta-reasoning rules are simple and have done well on the evidence so far.

14. So far, we seem to have no *a priori* reason to believe that universes which are more expensive to compute are less probable.

15. People were wrong about galaxies being *a priori* improbable because that's not how Occam's Razor works. Today, other people are equally wrong about other parts of a continuous wavefunction counting as extra entities for the purpose of evaluating hypotheses' complexity.

16. If something seems "weird" to you but would be a consequence of simple rules that fit the evidence so far, well, there's nothing in these explicit laws of epistemology that adds an extra penalty term for weirdness.

17. Your epistemology shouldn't have extra rules in it that aren't needed to do Solomonoff induction or something like it, including rules like "science is not allowed to examine this particular part of reality"—
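Several of the items above, that simplicity and evidence-matching are both measured in bits and that a factor of two in probability is worth one bit, combine into a single toy scoring rule. The `total_bits` function below is a hypothetical illustration of the bookkeeping, not part of Solomonoff's formalism:

```python
import math

def total_bits(code_length_bits, probs_assigned):
    """Cost of a hypothesis in bits: its complexity plus its log-loss,
    i.e. -log2 of the probability it gave each observation seen so far."""
    log_loss = -sum(math.log2(p) for p in probs_assigned)
    return code_length_bits + log_loss

# A 5-bit hypothesis that gave the data probability 2^-10 ties with a
# 10-bit hypothesis that gave the same data probability 2^-5:
assert total_bits(5, [2 ** -10]) == total_bits(10, [2 ** -5]) == 15.0

# Doubling the probability assigned to the data buys back exactly one
# bit of complexity:
assert total_bits(6, [0.5]) == total_bits(5, [0.25]) == 7.0
```

Lower total cost corresponds to higher posterior weight in the mixture, which is why a poorly matching explanation drops to a tiny contribution so quickly.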

**ASHLEY:** This list isn’t finite, is it.

**BLAINE:** Well, there’s a *lot* of outstanding debate about epistemology where you can view that debate through the lens of Solomonoff induction and see what Solomonoff suggests.

**ASHLEY:** But if you don’t mind my stopping to look at your last item, #17 above—again, it’s attempts to add *completeness* clauses to Solomonoff induction that make me the most nervous.

I guess you could say that a good rule of epistemology ought to be one that’s promoted by Solomonoff induction—that it should arise, in some sense, from the simple ways of reasoning that are good at predicting observations. But that doesn’t mean a good rule of epistemology ought to explicitly be in Solomonoff induction or it’s out.

**BLAINE:** Can you think of good epistemology that doesn’t seem to be contained in Solomonoff induction? Besides the example I already gave of distinguishing logical propositions from empirical ones.

**ASHLEY:** I’ve been trying to. First, it seems to me that when I reason about laws of physics and how those laws of physics might give rise to higher levels of organization like molecules, cells, human beings, the Earth, and so on, I’m not constructing in my mind a great big chunk of code that reproduces my observations. I feel like this difference might be important and it might have something to do with ‘good epistemology’.

**BLAINE:** I guess it could be? I think if you’re saying that there might be this unknown other thing and therefore Solomonoff induction is terrible, then that would be the nirvana fallacy. Solomonoff induction is the best formalized epistemology we have *right now*—

**ASHLEY:** I’m not saying that Solomonoff induction is terrible. I’m trying to look in the direction of things that might point to some future formalism that’s better than Solomonoff induction. Here’s another thing: I feel like I didn’t have to learn how to model the human beings around me from scratch based on environmental observations. I got a jump-start on modeling other humans by observing *myself*, and by recruiting my brain areas to run in a sandbox mode that models other people’s brain areas—empathy, in a word.

I guess I feel like Solomonoff induction doesn’t incorporate that idea. Like, maybe *inside* the mixture there are programs which do that, but there’s no explicit support in the outer formalism.

**BLAINE:** This doesn’t feel to me like much of a disadvantage of Solomonoff induction—

**ASHLEY:** I’m not *saying* it would be a disadvantage if we actually had a hypercomputer to run Solomonoff induction. I’m saying it might point in the direction of “good epistemology” that isn’t explicitly included in Solomonoff induction.

I mean, now that I think about it, a generalization of what I just said is that Solomonoff induction assumes I’m separated from the environment by a hard, Cartesian wall that occasionally hands me observations. Shouldn’t a more realistic view of the universe be about a simple program that *contains me somewhere inside it,* rather than a simple program that hands observations to some other program?

**BLAINE:** Hm. Maybe. How would you formalize *that?* It seems to open up a big can of worms—

**ASHLEY:** But that’s what my actual epistemology actually says. My world-model is not about a big computer program that provides inputs to my soul, it’s about an enormous mathematically simple physical universe that instantiates Ashley as one piece of it. And I think it’s good and important to have epistemology that works that way. It wasn’t *obvious* that we needed to think about a simple universe that embeds us. Descartes *did* think in terms of an impervious soul that had the universe projecting sensory information onto its screen, and we had to get *away* from that kind of epistemology.

**BLAINE:** You understand that Solomonoff induction makes only a bounded number of errors relative to any computer program which does reason the way you prefer, right? If thinking of yourself as a contiguous piece of the universe lets you make better experimental predictions, programs which reason that way will rapidly be promoted.

**ASHLEY:** It’s still unnerving to see a formalism that seems, in its own structure, to harken back to the Cartesian days of a separate soul watching a separate universe projecting sensory information on a screen. Who knows, maybe that would somehow come back to bite you?

**BLAINE:** Well, it wouldn’t bite you in the form of repeatedly making wrong experimental predictions.

**ASHLEY:** But it might bite you in the form of having no way to represent the observation of, “I drank this ‘wine’ liquid and then my emotions changed; could my emotions themselves be instantiated in stuff that can interact with some component of this liquid? Can alcohol touch neurons and influence them, meaning that I’m not a separate soul?” If we interrogated the Solomonoff inductor, would it be able to understand that reasoning?

Which brings up that dangling question from before about modeling the effect that my actions and choices have on the environment, and whether, say, an agent that used Solomonoff induction would be able to correctly predict “If I drop an anvil on my head, my sequence of sensory observations will *end*.”

**ELIEZER:** And that’s my cue to step in!

The natural next place for this dialogue to go, if I ever write a continuation, is the question of actions and choices, and the agent that uses Solomonoff induction for beliefs and expected reward maximization for selecting actions—the perfect rolling sphere of advanced agent theory, AIXI.

Meanwhile: For more about the issues Ashley raised with agents being a contiguous part of the universe, see “Embedded Agency.”


I think there’s something a bit fishy about the Poe/Shannon analogy.

In order to understand what’s wrong with Poe’s argument, what you need (or at least what would have been helpful) is an understanding of how having *an absurdly large amount of computing power* would enable you to solve the problem. Solomonoff induction assumes not merely an *absurdly large* amount, but an *infinite* amount. And simple approximations to Solomonoff assume *much more absurdly large* amounts of compute than e.g. Shannon.

Poe says “no mechanical device can possibly play a decent game of chess, because it would need to look at a variety of different possible sequences of moves, and that’s not a thing that can be done mechanically”. Shannon says two relevant things: first, “here is how, with an unreasonably large amount of computation, we could play a perfect game of chess”; second, “here is how, with a large but not so unreasonably large amount of computation, we could play a pretty decent game of chess”. Neither of these requires a hypercomputer; even *computing the whole search tree* is a finite amount of work. And the second was something that could be done to some extent using hardware Shannon could have built. To play grandmaster-level chess using exactly Shannon’s method would require an infeasibly large amount of computing power (which we have since dealt with (1) by discovering alpha-beta and other sorts of pruning and (2) by making then-infeasibly powerful computers). But Shannon did show that machinery *of a kind that can exist within the universe* can play a not-so-awful game of chess; he genuinely refuted Poe’s argument. And he didn’t appeal to hypercomputers to do it.

Whereas someone skeptical of the very idea of mechanized epistemology can respond to Solomonoff by saying “yeah, but your algorithm requires a literally infinite amount of computation. It’s not just that present technology falls short of what it needs, but that no possible technology could do it”. A Poe-alike couldn’t make such an argument for chess.

You might reply: yeah, sure, but we can “truncate” a Solomonoff inductor so that it only considers programs of, say, some finite maximum size; then it’s no longer infinite, and so long as reality isn’t *too* complicated it’ll still give good results. Except that unless that finite maximum is so tiny that Solomonoff does nothing useful, your truncated Solomonoff inductor is still too resource-hungry to do anything useful even if we turn the whole observable universe into computronium and let it run for the entire lifetime of that universe so far. Again, a Poe-alike couldn’t have made such an argument for chess; if you really had to, you could build a Babbage-like contraption that plays chess using some sort of brute-force search to a couple of ply. It would play *badly* but still better than some beginners, and that’s already enough to refute Poe.

That doesn’t mean that Solomonoff induction isn’t a useful notion, or that it can’t provide insight into what’s going on when we interpret the evidence of our senses, or that it can’t be a useful inspiration for more practical attempts at mechanized epistemology. But I think the situation is sufficiently different from that of Poe contemplating the Mechanical Turk to make the analogy not very helpful.

Extending OP’s argument, I would put it like this. Suppose you start off having no idea whatsoever how you might do a thing, and end up having a practical algorithm you can implement on real hardware. You can (in principle) split that up into the following stages: (0) no idea whatsoever, (1) can do it with literally infinite compute, (2) can do it with superexponentially-huge compute, (3) can do it with tile-the-universe-with-computronium compute, (4) can do it with several orders of magnitude more than we have, (5) can do it with something like realistic resources. I suggest that *each* of those stages may correspond to a comparable advance in understanding. Solomonoff induction takes us from 0 to 1, but *pace* Peter Thiel this is not always the most important step, and I don’t think it’s as big a step as from Poe to Shannon.

Not the case!!! The OEIS can be viewed as an abridged Solomonoff inductor, and it is useful.

I think your hierarchy is useful, and it’s helped clarify my own thinking. I’ve never really liked the Poe/Shannon argument, because it seemed so historically contingent: if Shannon’s culture had played go instead of chess, then his completely correct insight about the theoretical tractability of game trees wouldn’t have helped us. In your model, he’d have taken us from 0 to 3 instead of 0 to 4, which wouldn’t have been enough to inspire a playable go program (pedantically, you need game trees before you can invent MCTS, but the connection from Shannon to AlphaGo is clearly much looser than from Shannon to Deep Blue).

I think the point is even stronger than that—Solomonoff induction requires not just infinite compute/time but doing something literally logically impossible—the prior is straight-up uncomputable, not in any real-world tractability sense but as uncomputable as the Halting problem is. There’s a huge qualitative gulf between “we can’t solve this problem without idealised computers with unbounded time” and “we can’t solve this on a computer by definition”. Makes a huge difference to how much use the approach is for “crispening” ideas IMO.

Yup, actual Solomonoff induction is uncomputable. I’m not sure what you mean by “not just infinite compute/time”, though; given truly infinite computation you absolutely could do it. (Though in a world where that was possible, you’d really want your Solomonoff inductor to consider possible explanations that likewise require an infinite amount of computation, and then you’d be back with the same problem again.) I guess the distinction you’re making is between “requires a finite but absurdly large amount of computation” and “requires literally infinite computation”, and I agree that the latter is what Solomonoff induction requires.

I think that reduces the credibility of the claim “Solomonoff induction is the One True Way to do inference, at least ideally speaking”. But I think the following weaker proposal is intact:

Given two rival explanations of our observations, in the happy (and pretty much unheard-of) case where they have both been expressed precisely in terms of computer programs, all else equal we should consider the shorter program “simpler”, “better”, and “more likely right”.

One way for all else not to be equal is if one program is shorter than the other just because it’s been more highly optimized in some way that doesn’t have much to do with the actual theory it embodies. So rather than “shorter program better” we should really say something more like “shorter program, after making it as small as possible, better”.

Obviously, coming up with an actual program that perfectly explains all our observations is unrealistic; those observations include reading scientific papers, so it seems like the program would need to include a complete Theory Of Everything in physics; those observations include interactions with other humans, so it seems like the program would need to include at least as much intelligence as any person we encounter; these are both famously hard problems that the human race has not yet cracked.

But given two proposals for explaining *the same subset* of our experience, if we are credibly able to reason about which corresponds (after optimization) to the shorter program, we should prefer the proposal that seems like it has the shorter program.

In many real cases, it will be *completely unclear* which of two proposals corresponds to the shorter program after optimization, and in that case we’ll have to use some other heuristic. And even in cases where it seems pretty clear which of two proposals corresponds to the shorter program, we need to be aware that we could be very wrong, because what I’ve blithely called “optimization” is uncomputable and we can never be sure there isn’t a shorter program. And it’s not like the gods have handed down to us conclusive reason to think that simpler programs really are more likely to be right; it’s just that heuristics along those lines have worked out pretty well for us, and concepts of “simpler” nearer to “shorter program” and further from “seems simpler to a naive human brain” generally seem to work better. So (unless there’s some clever point I’m missing, which there might be) some of EY’s more dramatic claims about how anything other than Solomonoff induction is Wrong and Biased and Irrational seem overblown. But if you treat it as a *very promising heuristic* rather than a *definitely known truth* I’m still on board.

Also, you totally *can* do something that seems to me sufficiently along the lines of Solomonoff induction with an amount of computation that’s merely absurdly large, rather than literally infinite. At step N you try out all programs of length up to N running for up to N steps, and see whether they produce output consistent with your observations to date. Once N gets very large you have discovered things like “among programs shorter than a gigabyte running for less than 10^100 cycles and producing output consistent with these observations, *these* are the shortest ones, and if we weight them SI-style then the probability distribution for our next observation is *this*”. And if (as it seems to me kinda plausible that you actually should) you also somehow regard *more expensive* computations as less probable, then once N gets large enough that you have any candidate programs that produce the right predictions, you can start putting upper bounds on the errors in your predictions compared with an ideal no-longer-exactly-Solomonoff inductor, and those errors tend to 0 as the amount of computation you’re willing to do increases.

To be clear, the amount of computation required for any version of this is absurdly impractical in the real world, and if your ideal is actual Solomonoff induction then you don’t get any error bounds, because you can’t rule out the possibility that some shorter program that hasn’t generated any output yet might do a good job eventually. But it’s not as if you literally can’t do anything Solomonoff-induction-shaped without literally infinite amounts of computation.
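A minimal sketch of that SI-style weighting, for concreteness. The names here (`toy_bounded_si`) are my own, and to make the whole thing actually terminate I’ve restricted the hypothesis class to “repeat this bit pattern forever” programs rather than all programs; the Solomonoff-style part is weighting each consistent program of length L by 2^-L and reading a prediction off the surviving posterior mass:

```python
from fractions import Fraction

def toy_bounded_si(observed, max_len=8):
    """Toy bounded Solomonoff-style induction over a tiny program class.

    Each 'program' is a bit pattern of length <= max_len whose output is
    that pattern repeated forever.  A program of length L gets prior
    weight 2^-L; programs whose output disagrees with `observed` are
    discarded; the surviving weights give P(next bit = 1).
    """
    weight_next_1 = Fraction(0)
    weight_total = Fraction(0)
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            pattern = [(code >> i) & 1 for i in range(length)]
            # This program's output, up to one bit past the observations.
            output = [pattern[i % length] for i in range(len(observed) + 1)]
            if output[:len(observed)] == observed:
                w = Fraction(1, 2 ** length)
                weight_total += w
                if output[-1] == 1:
                    weight_next_1 += w
    return weight_next_1 / weight_total

print(toy_bounded_si([0, 1, 0, 1, 0, 1]))  # -> 1/23
```

On the observations 0,1,0,1,0,1 almost all of the posterior mass lands on the short “01” pattern, so the toy inductor predicts the next bit is 0 with probability 22/23. The real proposal above differs in enumerating *all* programs up to length N for N steps; this sketch dodges the halting issue entirely by choosing a program class that trivially halts.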

I think there’s a sense in which some problems can be uncomputable even with infinite compute, no? For example, if the Halting problem were computable even with literally infinite time, then we could construct a machine that halted when given its own description iff it ran forever when given its own description. I do think there’s a distinction beyond just “arbitrarily large finite compute vs. infinite compute”. It seems like either some problems have to be uncomputable even by a hypercomputer, or else the concept of infinite compute time is less straightforward than it seems.

I totally agree on your other points though, I think the concept of bounded Solomonoff induction could be interesting in itself, although I presume with it you lose all the theoretical guarantees around bounded error. Would definitely be interested to see if there’s literature on this

The halting problem *is* computable with literally-infinite time. But, to be precise, what this means is that a hypercomputer could determine whether a (nonhyper)computer halts; in a universe containing hypercomputers, we would not be very interested in that, and we’d be asking for something that determines whether a given hypercomputer halts (or something like that; I haven’t given much thought to what corresponds to “halting” for any given model of hypercomputation...), which would be impossible for the same sort of reasons as the ordinary halting problem is impossible for ordinary computers.

But I think it’s only fair to describe this by saying “the halting problem is impossible even with infinite computational resources” if you acknowledge that then “the halting problem” isn’t a single problem; it’s a thing that varies according to what computational resources you’ve got, getting harder when you have more resources to throw at it.

Does anyone know of third-party libraries that implement something like this?

Was this actually cross-posted by EY, or by Rob or Ben? I’d prefer that to be mentioned, in the latter case.

I posted this, and I’ll make a note that I did so for any future Eliezer content where I hit the ‘submit’ button.

The causal process for this article looked like this:

Eliezer wrote a lot of AI alignment content for Arbital. (This continues to be a good resource and I encourage you to browse it as-is, but it’s pretty disorganized and a lot of it has notes like ‘todo: fix X’.)

Eliezer delegated to me the task of cleaning up and cross-posting his Arbital stuff. This isn’t at the top of my priority list yet (partly because the content is already available on the public Internet, albeit unpolished and out-of-the-way), so I’ve only ported over stuff in dribs and drabs so far.

Richard Ngo posted about the Solomonoff induction dialogue, which reminded me of how much I wanted to cross-post that one (and made it more timely to do so). So I decided to cross-post now (after running the timing by Eliezer and Nate for approval), and I pushed the ‘submit’ button.

I’m also the one who proposed organizing these particular posts into a sequence (“Concepts in Formal Epistemology”), and who decided to cross-post the rest of the sequence when I did (rather than in a different order, or six months later, etc.)

Thanks!

Previously linked here: https://www.alignmentforum.org/posts/wsBpJn7HWEPCJxYai/excerpt-from-arbital-solomonoff-induction-dialogue

I think Ashley deserves an answer to the objection “[a]ny way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears”—not a claim about who invented what first!

Curated. Solomonoff Induction is idealized induction, and as the post asserts, sometimes we learn about the non-idealized cases (get much less confused) by studying the idealized case. For that reason, I think this accessible albeit incredibly long dialogue is worth reading. Heck, it helps ground out Occam’s razor.

I wonder where this goes with questions about infinite domains. It seems to me that I understand what it means to argmax a generic bounded function on a generic domain, but I don’t know an algorithm for it and, as far as I know, there can’t be one. So it seems taking this very seriously would lead us to some form of constructivism.

Hmm. If we’re trying to argmax some function f over the real numbers, then the simplest algorithm would be something like “iterate over all mathematical expressions e; for each one, check whether the program ‘iterate over all provable theorems, halting when you find one that says e = argmax f’ halts; if it does, return e.”

...but I guess that’s not guaranteed to ever halt, since there could conceivably be an infinite procession of ever-more-complex expressions, eking out ever-smaller gains on f. It seems *possible* that no matter what (reasonably powerful) mathematical language you choose, there are function-expressions with finite maxima at values not expressible in your language. Which is maybe what you meant by “as far as I know there can’t be [an algorithm for it].”

(I’m assuming our mathematical language doesn’t have the word argmax, since in that case we’d pretty quickly stumble on the expression “argmax f”, verify that argmax f = argmax f, and return it, which is obviously a cop-out.)

Surely, ‘over finite sequences’?

I had a related question I’m still looking for a good answer to: https://www.lesswrong.com/posts/QCSEFxtNPXr5vsZyf/what-tools-exist-to-compute-all-possible-programs

I’m not convinced your chess example, where the practical solution resembles the hypercomputer one, is representative. One way to sort a list using a hypercomputer is to try every possible permutation of the list until we discover one which is sorted. I tend to see Solomonoff induction as being cartoonishly wasteful in a similar way.
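For concreteness, that cartoonishly wasteful sorter can be written in a few lines of Python (a sketch; `permutation_sort` is my own name for it). It provably sorts any list while sharing no structure at all with a feasible algorithm like mergesort:

```python
from itertools import permutations

def permutation_sort(xs):
    """Sort by brute force: try every permutation of the input and
    return the first one that happens to be in order.  Correct, and
    cartoonishly wasteful: O(n! * n) time in the worst case."""
    for perm in permutations(xs):
        if all(a <= b for a, b in zip(perm, perm[1:])):
            return list(perm)

print(permutation_sort([3, 1, 2]))  # -> [1, 2, 3]
```

Like the hypercomputer list-sorter, this demonstrates understanding of *what* sorting is without any understanding of how to do it efficiently.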

The understanding I came away with: there are (at least) three stages of understanding a problem:

You can’t write a program to solve it.

You can write a cartoonishly wasteful program to solve it.

You can write a computationally feasible program to solve it.

“Shuffle-sort” achieves the second level of knowledge re: sorting lists. Yeah, it’s cartoonishly wasteful, and it doesn’t even *resemble* any computationally feasible sorting algorithm (that I’m aware of)—but, y’know, viewed through this lens, it’s still a *huge* step up from not even understanding “sorting” well enough to sort a list *at all*.

(Hmm, only marginally related but entertaining: if you reframe the problem of epistemology not as sequence prediction, but as “deduce what program is running your environment,” then a Solomonoff inductor can be pretty fairly described as “consider every possible object of type EnvironmentProgram; update its probability based on the sensory input; return the posterior PDF over EnvironmentProgram-space.” The equivalent program for list-sorting is “consider every possible object of type List&lt;Int&gt;; check if (a) it’s sorted, and (b) it matches the element-counts of the input-list; if so, return it.” Which is even more cartoonishly wasteful than shuffle-sort. Ooh, and if you want to generalize to cases where the list-elements are real numbers, I think you get/have to include something that looks a lot like Solomonoff induction, forcing countability on the reals by iterating over all possible programs that evaluate to real numbers (and hoping to God that whatever process generated the input list, your mathematical-expression-language is powerful enough to describe all the elements).)

Solomonoff induction does have a blind spot: it assigns probability zero to the existence of halting oracles or other uncomputable sequences. Of course, every other computable prediction algorithm is just as incapable of predicting the output of a halting oracle, and there don’t seem to be any uncomputable functions in the *actual* laws of physics, but it’s still a blind spot!

Is this something that the infra-bayesianism idea could address? So, would an infra-bayesian version of AIXI be able to handle worlds that include halting oracles, even though they aren’t exactly in its hypothesis class?

Could someone explain why this doesn’t degenerate into an entirely circular concept when we postulate a *stronger* compiler; *or* why it doesn’t become *entirely* dependent on the choice of the compiler?

There are many programs that output identical sequences. That’s a waste. Make it so that no two different programs have the same output.

There are many sequences that when fed into the compiler don’t result in valid programs. That’s a waste. Make it so that every binary sequence represents a valid program.

Now we have a set of sequences that we’d like to encode: S = {ε, 0, 1, 00, 01, …}, a set of sequences that are interpreted by the compiler as programs: P = {ε, 0, 1, 00, 01, …}, and the compiler, which is a bijection from P to S. It better not turn out to be the identity function... And that’s with the best possible compiler. If we postulate a reasonable but much weaker compiler, then the programs that encode the sequences become on average longer than the sequences themselves!

The only way out of this that I see is to weight elements of S by their frequencies in our universe and/or by how much we care about them, and then let the compiler be a function that minimizes this frequency-importance score. In fact, this compiler starts looking more and more like an encoder (?!). The difficult part then seems to me to be the choice of the optimal encoder, and not the Solomonoff induction itself.

Edit: Of course, when there’s a 1-to-1 mapping, then selecting the *shortest* program is trivial. So in a way, if we make the Solomonoff induction trivial, then the only thing that’s left is the choice of the compiler. But why isn’t this still a problem with weaker, traditional compilers?

Contents

1. What credit is Poe due? (Without doing lots of research.)

2. “Rationality” and Neural networks

3. Poe’s revenge

4. Where does “Rationality” lie?

5. The Way

6. A basic statistic question

7. Not-learned knowns, and bodies versus heuristics

8. Is physics hard, if we’re good at moving?

9. Is Occam’s Razor a retcon? (And far too much else.)

10. A problem with Solomonoff Induction

11. Does anyone have the code for Solomonoff Induction or AIXI?

1. What credit is Poe due? (Without doing lots of research.)

On Poe’s argument (as presented here):

Poe may be right about *solving* chess. His opinions concerning randomness are interesting—while those issues may have been better worked out since, to show that deterministic algorithms may do as well as algorithms using randomness, I think this was not known at the time.

Arguably his failure is conflating ‘finding the/an optimal solution’ (which proceeds from the rules) with ‘being good’. (Saying ‘an automaton can never do this’ seems obviously accurate if you note that the necessary computer would be too big to be an automaton. Shannon wrote in a time with better computers. While holding that ‘people follow deterministic rules, therefore the necessary computation can fit inside a human body’—which he might not have**—Poe might have maintained that your ‘Frankenstein monster’ (perhaps not the words of the time) is clearly a different type of thing than an automaton. And today, we are presently interested in neural networks—even the idea of simulating a person on ‘Babbage’s machine’ might not have occurred to Poe.**)

2. “Rationality” and Neural networks

EY once argued against neural networks (possibly in the context of friendly AI?) - the disagreement may be concerning ‘solving problems magically’. (And as a means for beating people at chess, they did come later.) Today it would appear you might not need to possess:

Correct knowledge of how brains (and minds) of people actually work

Complete knowledge of how to play chess (find a chess engine)

in order to come up with a solution, if you have enough resources and the formalism/algorithms of neural networks. Just train something that’s good enough. Interestingly, this might mean a ‘rational approach’ i.e. one with a good theory might not be necessary for technically well-specified problems (like chess), though it may be important for friendly AI (which remains to be seen).

3. Poe’s revenge

So Solomonoff Induction includes randomness?

4. Where does “Rationality” lie?

Not consciously.

5. The Way

Simulate Tao’s brain. (Did the OP really resist this pun, or just not see it? It doesn’t fit with ‘figure out how to solve the problem’...)

Yes, as a real world solution, there would be issue—ethics, how do you even do that, are computers powerful enough, etc.

6. A basic statistic question

So you don’t overfit?

This is great. (It also seems wrong for people, to some extent.)

A little handwaving, but it’s clear it’s handwaving. (If I told you processing that amount of info at that speed would destroy the world (it probably doesn’t)***, you might disagree with ‘just produces a better prediction’. This is nitpicking at the level of ‘watch your wishes’, but unlimited computing power might be very destructive.)

7. Not-learned knowns, and bodies versus heuristics

Swimming without having learned how is also an example, until it goes away. Learning to navigate better on Earth than in zero G (is an empirical claim), which might have more to do with the shape of the body, and the environment. That’s not ‘”heuristics” in thinking’ - that’s body design, etc.

8. Is physics hard, if we’re good at moving?

Perhaps our brains run a useful approximation? Neural networks may be more adapted to conditions than well, running such general formulas.

Then how are we ‘good’ at moving? Like, at a level that seems hard to train/program ‘robots’ to do?

It seems like the number would be longer. Like n^2 at least. (Unless you have a way of compressing it, which seems like it’d be hard to do.)

Because it also includes constants?

Which is implicitly a model of the entire universe. (Sort of.)

9. Is Occam’s Razor a retcon? (And far too much else.)

Did Occam mean that, or is this a retcon?

This is a circular argument. That’s like saying ‘The world is round hypothesis tells us the world is round’, when that is part of (in fact the whole) of the hypothesis itself.

People want the world to be simple. (The Simple World Fallacy, or The world I can understand (easily) fallacy?)

I’m glad this was eventually addressed, although I feel like this has the interpretability problem, except worse.

More like “*Is* God.” There might be some work on the flaws of this approach (SI/AIXI), even in theory, which seem immaterial prior to a switch being made to an approximation.

smarter, better calibrated, experts in the domain.

…Arguably, Solomonoff Induction is rather stupid/low-information. It generates all hypotheses (which tells you nothing), then it does work on those hypotheses (most of the information/‘smart’ in it), and it learns the rest. Shipping something with information about this universe seems more efficient. SI is supposed to get the most out of that information—that’s why it’s ‘an ideal’—but it costs an infinite amount of energy, takes forever, etc.

Takes some work to find those agents.

an unreasonable amount of computing power. Infinitely unreasonable.

There’s no reason in principle, that all types of minds will agree with you. What reason do you have to suppose humans will? (What reasonability guarantee is there?)

Because we don’t have SI/AIXI’s ‘flaw’ - it can never imagine a being such as itself.

Might just be a matter of the childhood. Would it be more intuitive to adults if it was explained to them as kids? (I’m going to call this ‘The Rainbow Religion’.)

a) Evolution

b) No, you should look for simple mechanical rules that would generate the story—why do you believe the person telling you the story? It’s the P(Observing A) not P(A).

The supposed infinities involved might do the job. If AIXI/SI cannot imagine itself, then that’s probably handled. (I could be wrong about this, but maybe ‘Machines don’t believe in infinities.’)

Because the thing hasn’t been postulated in isolation, in SI it showed up in a universe. With a cause (ultimately the beginning of the universe). Easy reuse just requires the right sort of cause—an engineer, evolution, duplication, etc.

So this is a time spent computing problem? We spend too much time thinking about humans, not enough time thinking about rainbows? (Insufficient Rainbow Contemplation.) Arguably this is rational—which is more likely to kill you, a rainbow or a human?

Sounds like stuff learned from experience.

Or they assumed the universe was small.

I think noticing confusion is important. (Retraining your intuitions might be useful though.)

I’ve considered ‘infinities are impossible’ myself. The only problem is ‘What happens if you find an infinity?’ (That being said, I still think it might be a useful tool—if you figure out the limit of a function as the input goes to infinity, and nothing short of infinity will reach the limit, then you’ve got bounds on that function even when you don’t know the input.)
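The bounding trick described in that comment can be illustrated with a made-up function (my own example, not from the thread): f is increasing toward its limit 1, and no finite input reaches it, so “f(n) < 1” is a bound that holds even when you don’t know n.

```python
from fractions import Fraction

def f(n):
    # Increases monotonically toward its limit 1; exact rational
    # arithmetic so no finite n ever rounds up to the limit.
    return 1 - Fraction(1, 2 ** n)

# The limit gives a bound valid for every finite input, known or not:
assert all(f(n) < 1 for n in range(1, 200))
```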

10. A problem with Solomonoff Induction

But it requires infinite resources to run, and can only simulate finite programs, whereas a universe where it could be run would be a universe it couldn’t simulate.

In theory that’s an empirical question, but without a hypercomputer it seems untestable.

11. Does anyone have the code for Solomonoff Induction or AIXI?

Does anyone have the code for Solomonoff Induction or AIXI? One with bounds, that actually runs on computers?
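Not the real thing, of course—full Solomonoff induction is uncomputable. But a finite, runnable stand-in is easy to sketch. The hypothesis class and prior below are my own toy choices (eventually-periodic bit sequences, with a pattern of length L given prior weight 4^-L as a crude stand-in for 2^-(program length)); the only thing it shares with SI is the shape of the procedure: enumerate hypotheses, weight by simplicity, keep the ones consistent with the data, and predict with the posterior mixture.

```python
from fractions import Fraction

def hypotheses(max_len):
    # Toy hypothesis class: deterministic sequences made by repeating a
    # bit pattern of length <= max_len. Prior weight 4**-L stands in for
    # 2**-(program length). This is a finite caricature of SI's mixture.
    for L in range(1, max_len + 1):
        for n in range(2 ** L):
            pattern = tuple((n >> i) & 1 for i in range(L))
            yield pattern, Fraction(1, 4 ** L)

def predict_next(observed, max_len=8):
    """Posterior-weighted probability that the next bit is 1."""
    total = Fraction(0)
    mass_one = Fraction(0)
    for pattern, prior in hypotheses(max_len):
        # A deterministic hypothesis assigns probability 1 to the one
        # sequence it generates, so the likelihood step is just a
        # consistency check against the observed prefix.
        if all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed))):
            total += prior
            if pattern[len(observed) % len(pattern)] == 1:
                mass_one += prior
    return mass_one / total if total else Fraction(1, 2)

print(predict_next([0, 1, 0, 1, 0, 1]))
# The short pattern 01 dominates the posterior, so P(next bit = 1) is tiny.
```

Real resource-bounded proposals along these lines (Levin search, AIXI-tl, Monte-Carlo AIXI approximations) do something similar but over actual programs with time cutoffs.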

Footnotes (from 1.)

*From the original paper:

** Believing in souls might have tripped Poe up.

***Because eyeballs and brains seem to do fine. But how do they work?

You rang?

I’m not sure why, but I find these dialogues easier to learn from than an article expressing the same ideas in the same order, even in an explicitly Q&A format.

Nice to have you back, Eliezer.

Comments further up the thread suggest this is old content, from Arbital, posted by Rob.

Is this a typo? I’ve never heard of “Msr.”; however, it is used twice as if it is not a typo.

https://en.wikipedia.org/wiki/Gender-neutral_title#Other_titles

I read it as “Messr” as in

Thanks for the reply! But I still want to know more; I’m confused because the Internet says it is the plural form of “Mr.”, which isn’t the case here.

My guess is that “Msr.” was intended as an abbreviation for

monsieur (French for mister), in ignorance of the fact that the standard abbreviation is actually just M.

Should the text read “unlikely” instead of “likely”?