This post doesn’t seem to introduce a lot of new concepts so I can’t see much to discuss.
Open problems:
Your first open problem might not technically always be true, but it doesn’t really matter, because as you pointed out, the statement the agent uses to derive its action is always
I don’t see how you think agents can do anything agent-y without utility functions.
The ones about instrumental values and agents in games look interesting.
I agree that the impossible one is impossible, although it might be the kind of impossible thing you can do, or it might be the kind of impossible thing Gödel proved you can't do.
One thing that's bothering me is that agents in the real world have to use dirty tricks to make problems simpler. For example: two strategies do the same thing in 99% of situations, so ignore that part, grind through the remaining 1%, and pick whichever comes out better there. But when I try to formalize that heuristic, I lose at Newcomb's problem. So is that an open problem?
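Here's a rough sketch of what I mean, in Python. None of it is from the post: the payoffs ($1,000 / $1,000,000) and the perfect-predictor assumption are just the standard Newcomb setup, and the function names are made up. The "drop the situations where the strategies tie, compare on the rest" trick quietly assumes the remaining situations are fixed independently of which strategy you're evaluating, and that assumption is exactly what Newcomb's problem breaks:

```python
def payoff(box_full: bool, action: str) -> int:
    """Standard Newcomb payoffs: $1,000,000 in the big box if it's full,
    plus $1,000 from the small box if you two-box."""
    big = 1_000_000 if box_full else 0
    small = 1_000 if action == "two-box" else 0
    return big + small

def pruned_comparison(scenarios):
    """The dirty trick: treat the scenarios as fixed background, drop the
    ones where one-boxing and two-boxing pay the same, and compare
    (with equal weights) only on the rest."""
    diffs = [(payoff(s, "one-box"), payoff(s, "two-box")) for s in scenarios]
    kept = [(a, b) for a, b in diffs if a != b]   # the "1% that matters"
    one_box = sum(a for a, b in kept)
    two_box = sum(b for a, b in kept)
    return "one-box" if one_box > two_box else "two-box"

def correlated_value(action: str) -> int:
    """What actually happens with a perfect predictor: whether the box is
    full is not fixed background, it tracks the strategy being evaluated."""
    box_full = (action == "one-box")
    return payoff(box_full, action)

# Holding the two possible box-states fixed, two-boxing does better in both,
# so the pruned comparison says "two-box"...
print(pruned_comparison([True, False]))                        # -> two-box
# ...but the correlated calculation says one-boxing is worth $1,000,000
# against $1,000 for two-boxing.
print(correlated_value("one-box"), correlated_value("two-box"))
```

The pruning step itself is harmless here; what loses the game is the implicit assumption that the leftover situations don't depend on which strategy you picked.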