So far, I have assumed the variables X_i were given; but what if all we have is the agent's algorithm (or the agent itself), and we need to infer its internal variables? And what about biased or incorrect beliefs? I'll look at those in a subsequent post.
In the interest of keeping myself honest re my pre-registered suspicion that we will have a fundamental disagreement over this line of reasoning: I have no specific complaints with this particular post as it stands on its own. My one complaint is that you assumed the existence of an ontology to do this reasoning within, which I don't think you can assume given where you expect to go. But you're going to consider that in the next post, so whenever I get to that one we'll have to see what comes up!
The "finding the variables" post is now up: https://www.lesswrong.com/posts/pHHhyZX5zwvwNqDXm/finding-the-variables
The “subsequent post” has been delayed for a long time because of other research avenues I need to catch up with :-(