Not… really? “How can I maximize accuracy?” is a very liberal agentification of a process that might more drily be thought of as asking “what is accurate?” Your standard sequence predictor isn’t searching through epistemic pseudo-actions to find which ones best maximize its expected accuracy; it’s just following a pre-made plan of epistemic action that happens to increase accuracy.
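To make the contrast concrete, here is a toy sketch (Python; everything in it is invented for illustration, not drawn from any real system): a predictor whose epistemic “plan” is just a fixed update rule, applied mechanically with no search over alternative updates:

```python
# Toy "pre-made plan of epistemic action": an exponential-moving-average
# predictor. It never deliberates over how to update; it applies the same
# fixed rule every step, which happens to increase accuracy on average.
def ema_predictor(stream, alpha=0.5):
    estimate = 0.0
    for x in stream:
        yield estimate                       # predict before seeing x
        estimate += alpha * (x - estimate)   # fixed, non-deliberative update

print(list(ema_predictor([1, 1, 1, 0])))  # [0.0, 0.5, 0.75, 0.875]
```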
Yeah, I absolutely agree with this. My description that you quoted was over-dramatizing the issue.
Really, what you have is an agent sitting on top of non-agentic infrastructure. The non-agentic infrastructure is “optimizing” in a broad sense because it follows a gradient toward predictive accuracy, but it is utterly myopic (doesn’t plan ahead to cleverly maximize accuracy).
The point I was making, stated more accurately, is that you (seemingly) need this myopic optimization as a ‘protected’ sub-part of the agent, which the overall agent cannot freely manipulate (since if it could, it would just corrupt the policy-learning process by wireheading).
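A minimal sketch of that architecture (PyTorch-style; the module shapes and the `reward_fn` hook are my own illustrative assumptions): the predictor takes myopic gradient steps toward predictive accuracy, while a stop-gradient keeps the policy’s objective from reaching back into it:

```python
import torch
import torch.nn as nn

predictor = nn.Linear(8, 8)  # epistemic part: predicts the next observation
policy = nn.Linear(8, 4)     # instrumental part: scores actions

pred_opt = torch.optim.SGD(predictor.parameters(), lr=0.01)
policy_opt = torch.optim.SGD(policy.parameters(), lr=0.01)

def step(obs, next_obs, reward_fn):
    # Myopic epistemic update: one local gradient step toward accuracy,
    # with no lookahead and no search over "epistemic actions".
    pred_loss = ((predictor(obs) - next_obs) ** 2).mean()
    pred_opt.zero_grad()
    pred_loss.backward()
    pred_opt.step()

    # Instrumental update: the policy consumes the prediction, but
    # .detach() blocks its gradient from flowing into the predictor,
    # so the agent cannot wirehead by reshaping its own beliefs.
    action_scores = policy(predictor(obs).detach())
    policy_loss = -reward_fn(action_scores)
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()

step(torch.randn(8), torch.randn(8), lambda scores: scores.sum())
```

The `.detach()` is the whole point: the epistemic sub-part is “protected” in the sense that the instrumental objective simply has no gradient path into it.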
Though this does lead to the thought: if you want to put things on equal footing, does this mean you want to describe a reasoner that searches through epistemic steps/rules like an agent searching through actions/plans?
This is more or less how humans already conceive of difficult abstract reasoning.
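As a toy version of that picture (the rule set and every name here are invented for the example), one can literally cast inference rules as the action space of an ordinary search problem: states are sets of derived facts, an action applies one rule, and the goal is to derive a target fact:

```python
from collections import deque

rules = [
    ({"A"}, "B"),          # from A, infer B
    ({"B"}, "C"),          # from B, infer C
    ({"A", "C"}, "goal"),  # from A and C, infer goal
]

def prove(facts, target):
    """Breadth-first search through epistemic 'actions' (rule applications)."""
    start = frozenset(facts)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if target in state:
            return plan  # the sequence of epistemic steps that reached the goal
        for i, (premises, conclusion) in enumerate(rules):
            if premises <= state and conclusion not in state:
                new_state = frozenset(state | {conclusion})
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, plan + [i]))
    return None

print(prove({"A"}, "goal"))  # [0, 1, 2]: the rule applications, in order
```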
Yeah, my observation is that it intuitively seems like highly capable agents need to be able to do that. To that end, it seems like one needs a framework where agents at least have that option, without the instrumental part corrupting the overall learning process by strategically biasing the epistemic part to make itself look good.
(Possibly humans just use a messy solution where the strategic biasing does occur, but the damage is lessened by limiting the extent to which the instrumental system can bias the epistemics; e.g., you can’t fully choose what to believe.)
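One crude toy model of that (the log-odds framing and the cap are entirely assumptions for illustration): a Bayesian belief update plus a “motivated bias” term that is clipped before it applies, so wishful thinking can nudge a belief but never overwhelm the evidence:

```python
import math

MAX_BIAS = 0.2  # assumed cap on motivated reasoning, in log-odds units

def update_belief(prior, log_likelihood_ratio, desired_bias):
    """Bayesian update in log-odds, plus a bounded motivated-bias term."""
    log_odds = math.log(prior / (1 - prior))
    bias = max(-MAX_BIAS, min(MAX_BIAS, desired_bias))  # can't fully choose
    log_odds += log_likelihood_ratio + bias
    return 1 / (1 + math.exp(-log_odds))

# Strong evidence against the belief: even extreme wishful thinking
# (desired_bias=5.0) only moves the posterior from ~0.119 to ~0.142.
print(update_belief(0.5, -2.0, 5.0))
```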