Ok, that’s what I was figuring. My general position is that the problems of agents embedded in their environment reduce to problems of abstraction, i.e. world-models embedded in computations which do not themselves obviously resemble world-models. At some point I’ll probably write that up in more detail, although the argument remains informal for now.

The immediately important point is that, while the OP makes sense in a Cartesian model, it also makes sense without a Cartesian model. We can just have some big computation, and pick a little chunk of it at random, and say “does this part here embed a Naive Bayes model?” In other words, it’s the sort of thing you could use to detect agenty subsystems, without having a Cartesian boundary drawn in advance.
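To make the idea concrete: one very toy version of "does this chunk embed a Naive Bayes model?" is to take a small discrete joint distribution computed by that chunk and test whether it has the Naive Bayes factorization, i.e. whether the features are conditionally independent given the class. This is just an illustrative sketch of the structural check, not the OP's actual method; the function name and the two-feature setup are my own.

```python
import itertools

def embeds_naive_bayes(joint, classes, feat_vals, tol=1e-9):
    """Toy check: does P(c, x1, x2) factor as P(c) * P(x1|c) * P(x2|c)?

    joint: dict mapping (c, x1, x2) -> probability (missing keys treated as 0).
    Returns True iff the two features are conditionally independent given c.
    """
    for c in classes:
        # Marginal probability of the class.
        pc = sum(joint.get((c, x1, x2), 0.0)
                 for x1 in feat_vals for x2 in feat_vals)
        if pc == 0:
            continue
        for x1, x2 in itertools.product(feat_vals, feat_vals):
            # Conditional joint of the two features given the class...
            p_joint = joint.get((c, x1, x2), 0.0) / pc
            # ...versus the product of their conditional marginals.
            p1 = sum(joint.get((c, x1, y), 0.0) for y in feat_vals) / pc
            p2 = sum(joint.get((c, y, x2), 0.0) for y in feat_vals) / pc
            if abs(p_joint - p1 * p2) > tol:
                return False
    return True
```

A distribution built as P(c)·P(x1|c)·P(x2|c) passes; one where the features are perfectly correlated given the class fails. The real problem is of course much harder — finding which variables in an arbitrary computation play the class/feature roles at all — but this is the shape of the structural question.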
