Prediction and planning remain incredibly distinct as structures of cognitive work,
I disagree. (Partially.) For a unitary agent who is working with a small number of possible hypotheses (e.g., 3), and a small number of possible actions, I agree with your quoted sentence.
But let’s say you’re dealing with a space of possible actions that’s much too large to let you consider each exhaustively, e.g. what blog post to write (considered concretely, as a long string of characters).
It’d be nice to have some way to consider recombinable pieces, e.g. “my blog post could include idea X”, “my blog post could open with joke J”, “my blog post could be aimed at a reader similar to Alice”.
Now consider the situation as seen by the line of thinking that is determining: “should my blog post be aimed mostly at readers similar to Alice, or at readers similar to Bob?”. For this line of thinking to make a good estimate of ExpectedUtility(post is aimed at Alice), it needs predictions about whether the post will contain idea X. However, the line of thinking that is determining whether to include idea X (or the unified agent, at those moments when it is actively considering this) will of course need good plans (not predictions) about whether to include X, and how exactly to include X.
I don’t fully know what a good structure is for navigating this sort of recombinable plan space, but it might involve a lot of toggling between “this is a planning question, from the inside: shall I include X?” and “this is a prediction question, from the outside: is it likely that I’m going to end up including X, such that I should plan other things around that assumption?”.
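To make the toggle concrete, here is a toy sketch (all names and structure hypothetical, not a claim about how minds actually work): each recombinable piece of a plan can be queried from the outside (“how likely am I to end up including X?”) or settled from the inside (“I hereby decide about X”), and one line of thinking can compute an expected utility by conditioning on outside-view predictions about the pieces it is *not* currently planning.

```python
# Toy model (hypothetical names): plan pieces that can be viewed either as
# planning questions (decide from the inside) or prediction questions
# (estimate from the outside).
import itertools
from dataclasses import dataclass
from typing import Optional

@dataclass
class Piece:
    name: str
    decided: Optional[bool] = None  # None = still an open planning question
    prior: float = 0.5              # outside-view estimate while undecided

    def predict(self) -> float:
        """Outside view: probability this piece ends up in the final plan."""
        if self.decided is not None:
            return 1.0 if self.decided else 0.0
        return self.prior

    def decide(self, choice: bool) -> None:
        """Inside view: actually settle the planning question."""
        self.decided = choice

def expected_utility(choice: bool, others: list, utility) -> float:
    """Score one piece's option, treating the *other* pieces as predictions
    (independent outside-view probabilities) rather than re-planning them.
    Crude and illustrative, not a serious planner."""
    total = 0.0
    for combo in itertools.product([True, False], repeat=len(others)):
        p = 1.0
        for piece, included in zip(others, combo):
            q = piece.predict()
            p *= q if included else (1.0 - q)
        total += p * utility(choice, combo)
    return total

# The "aim at Alice or Bob?" line of thinking conditions on a prediction
# about idea X, without deciding X itself:
idea_x = Piece("idea X", prior=0.8)

def u(aim_at_alice: bool, combo) -> float:
    # Hypothetical payoffs: aiming at Alice only pays off if X is included.
    if aim_at_alice:
        return 1.0 if combo[0] else 0.0
    return 0.5

eu_alice = expected_utility(True, [idea_x], u)   # uses predict(), not decide()
eu_bob = expected_utility(False, [idea_x], u)
```

The point of the sketch is just the interface: `predict()` lets other lines of thinking plan around a piece, while `decide()` is the same piece toggled back into planning-from-the-inside, after which everyone else's predictions about it collapse to certainty.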
My own cognition seems to me to toggle many combinatorial pieces back and forth between planning-from-the-inside and predicting-from-the-outside, like this. I agree with your point that human brains and bodies have all kinds of silly entanglements. But this part seems to me like a plausible way for other intelligences to evolve/grow too, not a purely one-off human idiosyncrasy like having childbirth through the hips.
Yes; this (or something similar) is why I suspect that “‘believing in’ atoms” may involve the same cognitive structure as “‘believing in’ this bakery I am helping to create” or “‘believing in’ honesty” (and a different cognitive structure, at least for ideal minds, from predictions about outside events). The question of whether to “believe in” atoms can be a question of whether to invest in building out and maintaining/tuning an ontology that includes atoms.