Even if they had almost destroyed the world, the story would still not properly be about their guilt or their regret, it would be about almost destroying the world. This is why, in a much more real and also famous case, President Truman was validly angered and told “that son of a bitch”, Oppenheimer, to fuck off, after Oppenheimer decided to be a drama queen at Truman. Oppenheimer was trying to have nuclear weapons be about Oppenheimer’s remorse at having helped create nuclear weapons. This feels obviously icky to me; I would not be surprised if Truman felt very nearly the same.
Fascinating, I always interpreted this as Truman being an asshole, but I guess that makes sense now that you explain it that way. I suppose a meeting with the president is precisely the wrong time to focus on your own guilt as opposed to trying to do what you can to steer the world towards positive outcomes.
One of the ways you can get up in the morning, if you are me, is by looking in the internal direction of your motor plans, and writing into your pending motor plan the image of you getting out of bed in a few moments, and then letting that image get sent to motor output and happen.
Was this inspired by active inference?

The technique is older than the “active inference” malarky, but the way I wrote about it is influenced by my annoyance with “active inference” malarky.
I wondered the same thing. I’m not a fan of the idea that we do not act, merely predict what our actions will be and then observe the act happening of itself while our minds float epiphenomenally above, and I would be disappointed to discover that the meme has found a place for itself in Eliezer’s mind.
Oh, absolutely not. Our incredibly badly designed bodies do insane shit like repurposing superoxide as a metabolic signaling molecule. Our incredibly badly designed brains have some subprocesses that take a bit of predictive machinery lying around and repurpose it to send a control signal, which is even crazier than the superoxide thing, which is pretty crazy. Prediction and planning remain incredibly distinct as structures of cognitive work, and the people who try to deeply tie them together by writing wacky equations that sum them both together plus throwing in an entropy term, are nuts. It’s like the town which showed a sign with its elevation, population, and year founded, plus the total of those numbers. But one reason why the malarky rings true to the knowlessones is that the incredibly badly designed human brain actually is grabbing some bits of predictive machinery and repurposing them for control signals, just like the human metabolism has decided to treat insanely reactive molecular byproducts as control signals. The other reason of course is the general class of malarky which consists of telling a susceptible person that two different things are the same.
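(For readers who haven’t seen the equations being gestured at: as I understand the active-inference literature, and hedging that this is my paraphrase from memory rather than anything stated above, the central object is an “expected free energy” per policy, which adds a goal-divergence term to an entropy term and has the agent act by minimizing the sum:)

$$G(\pi) \;\approx\; D_{\mathrm{KL}}\!\big[\,q(o \mid \pi)\,\big\|\,p(o)\,\big] \;+\; \mathbb{E}_{q(s \mid \pi)}\big[\,H[\,p(o \mid s)\,]\,\big]$$

The first term scores how far the outcomes predicted under policy $\pi$ fall from the “preferred” outcomes $p(o)$, which is where the goal-seeking gets written in prediction notation, and the second is the entropy (“ambiguity”) term.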
Prediction and planning remain incredibly distinct as structures of cognitive work,
I disagree. (Partially.) For a unitary agent who is working with a small number of possible hypotheses (e.g., 3), and a small number of possible actions, I agree with your quoted sentence.
But let’s say you’re dealing with a space of possible actions that’s much too large to let you consider each exhaustively, e.g. what blog post to write (considered concretely, as a long string of characters).
It’d be nice to have some way to consider recombinable pieces, e.g. “my blog post could include idea X”, “my blog post could open with joke J”, “my blog post could be aimed at a reader similar to Alice”.
Now consider the situation as seen by the line of thinking that is determining: “should my blog post be aimed mostly at readers similar to Alice, or at readers similar to Bob?”. For this line of thinking to make a good estimate of ExpectedUtility(post is aimed at Alice), it needs predictions about whether the post will contain idea X. However, for the line of thinking that is determining whether to include idea X (or the unified agent, at those moments when it is actively considering this), it’ll of course need good plans (not predictions) about whether to include X, and how exactly to include X.
I don’t fully know what a good structure is for navigating this sort of recombinable plan space, but it might involve a lot of toggling between “this is a planning question, from the inside: shall I include X?” and “this is a prediction question, from the outside: is it likely that I’m going to end up including X, such that I should plan other things around that assumption?”.
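A minimal sketch of that toggling, purely as an illustration (the plan pieces, the option lists, and the utility numbers below are invented for this example): treat each recombinable piece as a coordinate, decide one piece “from the inside” while the others are frozen at their currently-predicted values “from the outside”, and sweep until the plan stops changing.

```python
# Toy model of toggling between planning-from-the-inside and
# predicting-from-the-outside over recombinable plan pieces.
# Everything here (pieces, options, utility numbers) is invented for illustration.

CHOICES = {
    "audience": ["alice", "bob"],
    "include_idea_x": [True, False],
    "opening_joke": ["joke_j", None],
}

def utility(plan):
    """Toy stand-in for ExpectedUtility of a *complete* plan."""
    score = 0.0
    if plan["audience"] == "alice" and plan["include_idea_x"]:
        score += 2.0   # readers like Alice care about idea X
    if plan["audience"] == "bob" and plan["opening_joke"] == "joke_j":
        score += 1.5   # readers like Bob enjoy joke J
    if plan["include_idea_x"] and plan["opening_joke"] == "joke_j":
        score -= 0.5   # the joke clashes with the serious idea
    return score

def settle_plan(choices, utility, max_sweeps=10):
    # Start from an arbitrary outside-view "prediction" of the finished plan.
    plan = {piece: options[0] for piece, options in choices.items()}
    for _ in range(max_sweeps):
        changed = False
        for piece, options in choices.items():
            # Planning-from-the-inside for `piece`: everything else is held
            # fixed as a prediction-from-the-outside at its current value.
            best = max(options, key=lambda option: utility({**plan, piece: option}))
            if best != plan[piece]:
                plan[piece] = best
                changed = True
        if not changed:
            break  # fixed point: inside-view decisions match outside-view predictions
    return plan

print(settle_plan(CHOICES, utility))
# -> {'audience': 'alice', 'include_idea_x': True, 'opening_joke': None}
```

The stopping condition is exactly the state where the outside-view prediction about each piece and the inside-view decision about it agree; a real plan space is of course vastly larger, and the “predictions” would be lossy summaries rather than the literal current plan.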
My own cognition seems to me to toggle many combinatorial pieces back and forth between planning-from-the-inside and predicting-from-the-outside, like this. I agree with your point that human brains and bodies have all kinds of silly entanglements. But this part seems to me like a plausible way for other intelligences to evolve/grow too, not a purely one-off human idiosyncrasy like having childbirth through the hips.
In this example, you’re trying to make various planning decisions; those planning decisions call on predictions; and the predictions are about (other) planning decisions; and these form a loopy network. This is plausibly an intrinsic / essential problem for intelligences, because it involves the intelligence making predictions about its own actions—and those actions are currently under consideration—and those actions kinda depend on those same predictions. The difficulty of predicting “what will I do” grows in tandem with the intelligence, so any sort of problem that makes a call to the whole intelligence might unavoidably make it hard to separate predictions from decisions.
A further wrinkle / another example is that a question like “what should I think about (in particular, what to gather information about / update about)”, during the design process, wants these predictions. For example, I run into problems like:
I’m doing some project X.
I could do a more ambitious version of X, or a less ambitious version of X.
If I’m doing the more ambitious version of X, I want to work on pretty different stuff right now, at the beginning, compared to if I’m doing the less ambitious version. Example 1: a programming project; should I put in the work ASAP to redo the basic ontology (datatypes, architecture), or should I just try to iterate a bit on the MVP and add epicycles? Example 2: an investigatory blog post; should I put in a bunch of work to get a deeper grounding in the domain I’m talking about, or should I just learn enough to check that the specific point I’m making probably makes sense?
The question of whether to do ambitious X vs. non-ambitious X also depends on / gets updated by those computations that I’m considering how to prioritize.
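A toy way to write that loop down (the functional forms and symbols here are invented for illustration): let $p$ be my prediction that I end up doing the ambitious version of X, and let $g$ be how much groundwork I do now. Then, roughly,

$$\mathrm{EV}(g) \;=\; p\,V_{\text{ambitious}}(g) \;+\; (1-p)\,V_{\text{lean}}(g), \qquad p \;=\; f(g),$$

so the prediction $p$ is an input to choosing $g$ while $g$ is an input to $p$; there is no clean pass where the predictions get computed first and the decision second, and what you actually want is a self-consistent pair $(g, p)$.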
Another kind of example is common knowledge. What people actually do seems to be some sort of “conjecture / leap of faith”, where at some point they kinda just assume / act-as-though there is common knowledge. Even in theory, how is this supposed to work, for agents of comparable complexity* to each other? Notably, Lobian handshake stuff doesn’t AFAICT especially look like it has predictions / decisions separated out.
*(Not sure what complexity should mean in this context.)
A further wrinkle / another example is that a question like “what should I think about (in particular, what to gather information about / update about)”, during the design process, wants these predictions.
Yes; this (or something similar) is why I suspect that “‘believing in’ atoms” may involve the same cognitive structure as “‘believing in’ this bakery I am helping to create” or “‘believing in’ honesty” (and a different cognitive structure, at least for ideal minds, from predictions about outside events). The question of whether to “believe in” atoms can be a question of whether to invest in building out and maintaining/tuning an ontology that includes atoms.
Our incredibly badly designed brains have some subprocesses that take a bit of predictive machinery lying around and repurpose it to send a control signal
I like this, and will show it to some of my colleagues who are also sceptical of the FEP/ActInf paradigm.