Cool post! I like to think that I influenced your choice of subject.
I was slightly confused by your names: it seems that “head-state” is what would usually be called “state” in TMs.
So from this perspective, the central problem of self-embedded world models is not representation or interpretation of the model, but rather the algorithmic problem of expanding the set of queries we can answer “without any weirdness”.
I like to think that I influenced your choice of subject.
Yup, you did.
it seems that “head-state” is what would usually be called “state” in TMs.
Correct. Really, the “state” of a TM (as the word is used most often in other math/engineering contexts) is both the head-state and whatever’s on the tape.
In a technical sense, the “state” of a system is usually whatever information forms a Markov blanket between future and past—i.e. the interaction between everything in the future and everything in the past should be completely mediated by the system state. There are lots of exceptions to this, and the word isn’t used consistently everywhere, but that’s probably the most useful heuristic.
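To make this concrete, here is a minimal sketch (not from the post; the `step` function and transition-table format are my own illustration) of a Turing machine step. The point is that the *full* state is the triple (head-state, tape contents, head position): that triple alone determines everything downstream, so it is what mediates between past and future in the Markov-blanket sense, while the head-state by itself does not.

```python
def step(head_state, tape, pos, delta):
    """One TM step. `delta` maps (head_state, symbol) ->
    (new_head_state, written_symbol, move). The tape is a dict
    from cell index to symbol; unwritten cells read as blank (0)."""
    symbol = tape.get(pos, 0)
    new_hs, write, move = delta[(head_state, symbol)]
    tape = {**tape, pos: write}  # write without mutating the input tape
    # The returned triple is the system's complete state: given it,
    # the past trajectory is irrelevant to the machine's future.
    return new_hs, tape, pos + move

# Toy example: a one-head-state machine that writes 1s while moving right.
delta = {("A", 0): ("A", 1, +1)}
state = ("A", {}, 0)
for _ in range(3):
    state = step(*state, delta)
# After 3 steps: head-state is still "A" (it never changed), but the
# tape and position did -- so head-state alone clearly can't be "the state".
```

Note that after the loop the head-state is unchanged while the tape holds three 1s, which is exactly the point: two runs with the same head-state but different tapes behave differently from then on.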
Neat summary.