> Prediction and planning remain incredibly distinct as structures of cognitive work,
I disagree. (Partially.) For a unitary agent who is working with a small number of possible hypotheses (e.g., 3), and a small number of possible actions, I agree with your quoted sentence.
But let’s say you’re dealing with a space of possible actions that’s much too large to let you consider each exhaustively, e.g. what blog post to write (considered concretely, as a long string of characters).
It’d be nice to have some way to consider recombinable pieces, e.g. “my blog post could include idea X”, “my blog post could open with joke J”, “my blog post could be aimed at a reader similar to Alice”.
Now consider the situation as seen by the line of thinking that is determining: "should my blog post be aimed mostly at readers similar to Alice, or at readers similar to Bob?". For this line of thinking to produce a good estimate of ExpectedUtility(post is aimed at Alice), it needs predictions about whether the post will contain idea X. However, the line of thinking that is determining whether to include idea X (or the unified agent, at those moments when it is actively considering this) will of course need good plans (not predictions) about whether to include X, and how exactly to include X.
I don’t fully know what a good structure is for navigating this sort of recombinable plan space, but it might involve a lot of toggling between “this is a planning question, from the inside: shall I include X?” and “this is a prediction question, from the outside: is it likely that I’m going to end up including X, such that I should plan other things around that assumption?”.
My own cognition seems to me to toggle many combinatorial pieces back and forth between planning-from-the-inside and predicting-from-the-outside, like this. I agree with your point that human brains and bodies have all kinds of silly entanglements. But this part seems to me like a plausible way for other intelligences to evolve/grow too, not a purely one-off human idiosyncrasy like childbirth through the hips.
In this example, you’re trying to make various planning decisions; those planning decisions call on predictions; and the predictions are about (other) planning decisions; and these form a loopy network. This is plausibly an intrinsic / essential problem for intelligences, because it involves the intelligence making predictions about its own actions—and those actions are currently under consideration—and those actions kinda depend on those same predictions. The difficulty of predicting “what will I do” grows in tandem with the intelligence, so any sort of problem that makes a call to the whole intelligence might unavoidably make it hard to separate predictions from decisions.
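To make the toggling concrete, here is a minimal sketch in Python of one possible structure (my own toy construction, not something from the post): plan components like "audience" and "include_X" are re-decided one at a time, while the current values of the other components stand in as predictions about what the post will end up being. All the component names, options, and payoffs below are invented for illustration.

```python
# A made-up sketch of the "toggle": each plan component is optimized in turn
# (planning from the inside) while the current values of the other components
# serve as predictions about what I'll end up doing (predicting from the outside).

OPTIONS = {
    "audience":  ["Alice-like", "Bob-like"],
    "include_X": [True, False],
    "open_joke": [True, False],
}

def score(plan: dict) -> float:
    """Toy stand-in for ExpectedUtility(plan); the payoffs are arbitrary."""
    s = 0.0
    if plan["audience"] == "Alice-like":
        s += 1.0 if plan["include_X"] else 0.2   # Alice-like readers want idea X
    else:
        s += 0.3 if plan["include_X"] else 0.6   # Bob-like readers mostly don't
    if plan["open_joke"] and plan["audience"] == "Bob-like":
        s += 0.4                                  # joke J only lands with Bob
    return s

def plan_by_toggling(plan: dict, max_sweeps: int = 10) -> dict:
    """Coordinate-ascent caricature: sweep over components, re-deciding each one
    against the predicted (i.e. current) values of the rest, until the decisions
    and the predictions agree, or we give up."""
    plan = dict(plan)
    for _ in range(max_sweeps):
        changed = False
        for name, options in OPTIONS.items():
            best = max(options, key=lambda option: score({**plan, name: option}))
            if best != plan[name]:
                plan[name] = best
                changed = True
        if not changed:   # fixed point: predictions about myself match my decisions
            return plan
    return plan           # the loopy network is not guaranteed to settle

print(plan_by_toggling({"audience": "Bob-like", "include_X": False, "open_joke": True}))
```

With these toy numbers, different starting drafts settle into different self-consistent plans (an Alice-plus-idea-X post vs. a Bob-plus-joke post), which is one concrete sense in which the predictions and the decisions refuse to fully separate.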
A further wrinkle / another example is that a question like “what should I think about (in particular, what to gather information about / update about)”, during the design process, wants these predictions. For example, I run into problems like:
- I’m doing some project X.
- I could do a more ambitious version of X, or a less ambitious version of X.
- If I’m doing the more ambitious version of X, I want to work on pretty different stuff right now, at the beginning, compared to if I’m doing the less ambitious version. Example 1: a programming project; should I put in the work ASAP to redo the basic ontology (datatypes, architecture), or should I just try to iterate a bit on the MVP and add epicycles? Example 2: an investigatory blog post; should I put in a bunch of work to get a deeper grounding in the domain I’m talking about, or should I just learn enough to check that the specific point I’m making probably makes sense?
- The question of whether to do ambitious X vs. non-ambitious X also depends on / gets updated by those computations that I’m considering how to prioritize (a toy version of this loop is sketched just below).
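As a toy version of that circularity (again my own construction, with made-up numbers and function names): whether redoing the architecture is worth it depends on a prediction of how likely the ambitious version is, and that prediction in turn depends on whether the architecture gets redone.

```python
def p_ambitious(redo_architecture: bool) -> float:
    """Made-up model: a solid architecture makes the ambitious version
    of project X more likely to actually happen."""
    return 0.7 if redo_architecture else 0.3

def value_of_redoing(p_amb: float) -> float:
    """Expected value of redoing the architecture now, given a prediction
    of how likely the ambitious version is. Payoffs are arbitrary."""
    benefit_if_ambitious = 10.0   # the rework only pays off in the ambitious branch
    upfront_cost = 3.0
    return p_amb * benefit_if_ambitious - upfront_cost

def decide(initial_prediction: float = 0.5, iters: int = 10) -> bool:
    """Alternate between the planning step (decide whether to redo the
    architecture, given the current prediction) and the prediction step
    (update the prediction, given that decision)."""
    p = initial_prediction
    redo = False
    for _ in range(iters):
        redo = value_of_redoing(p) > 0.0   # planning, from the inside
        p = p_ambitious(redo)              # predicting, from the outside
    return redo

print(decide(0.5))   # settles on redo=True, with the prediction stuck at 0.7
print(decide(0.2))   # settles on redo=False, with the prediction stuck at 0.3
```

Both outcomes are self-consistent; which one you land in depends on the initial leap, which is roughly the sense in which "what should I work on right now" keeps calling back into the decision it is supposed to be serving.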
Another kind of example is common knowledge. What people actually do seems to be some sort of “conjecture / leap of faith”, where at some point they kinda just assume / act-as-though there is common knowledge. Even in theory, how is this supposed to work, for agents of comparable complexity* to each other? Notably, Löbian handshake stuff doesn’t AFAICT especially look like it has predictions / decisions separated out.
*(Not sure what complexity should mean in this context.)
> A further wrinkle / another example is that a question like “what should I think about (in particular, what to gather information about / update about)”, during the design process, wants these predictions.
Yes; this (or something similar) is why I suspect that “‘believing in’ atoms” may involve the same cognitive structure as “‘believing in’ this bakery I am helping to create” or “‘believing in’ honesty” (and a different cognitive structure, at least for ideal minds, from predictions about outside events). The question of whether to “believe in” atoms can be a question of whether to invest in building out and maintaining/tuning an ontology that includes atoms.
(FYI, I initially failed to parse this because I interpreted “‘believing in’ atoms” as something like “atoms of ‘believing in’”, presumably because the idea of “believing in” I got from your post was not something that you typically apply to atoms.)