One thing that would make this more symmetrical is if some errors in your world model are worse than others. This makes inference more like a utility function.
Yup. I think this might route through utility as well, though. Observations are useful because they unlock bits of optimization, and bits related to different variables could unlock both different amounts of optimization capacity and different amounts of goal-related optimization capacity. (It’s not so bad to forget a single digit of someone’s phone number; it’s much worse to forget a single letter of the password to your password manager.)