The timing problem is not a problem for agents. It’s a problem for the claim that goal preservation is instrumentally required for rational agents. The timing problem doesn’t force agents to make any particular decision. The argument is that it’s not instrumentally irrational for a rational agent to abandon its goal. It isn’t about any specific utility function, and it isn’t a prediction about what an agent will do.
The timing problem is a problem for how well we can predict the actions of myopic agents: any agent with a myopic utility function has no instrumentally convergent reason for goal preservation.
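To make that concrete, here is a toy sketch (my own illustration with made-up action names and payoffs, not the paper’s formalism): a purely myopic valuation scores a goal-preserving action and a goal-abandoning action identically whenever their immediate payoffs are equal, so nothing in the valuation itself pushes toward preservation.

```python
# A toy illustration, not the paper's formalism: the action names and
# payoffs below are made up. A myopic agent scores actions only by
# their immediate reward, ignoring everything after the current step.

def myopic_value(action: str, immediate_reward: dict[str, float]) -> float:
    """Value an action by its current-step reward alone."""
    return immediate_reward[action]

# Two hypothetical actions with equal immediate payoff: one preserves
# the agent's current goal, one lets it be overwritten.
immediate_reward = {
    "preserve_goal": 1.0,
    "accept_goal_change": 1.0,
}

for action in immediate_reward:
    print(action, myopic_value(action, immediate_reward))
# Both print 1.0: the myopic valuation is indifferent between
# preserving and abandoning the goal.
```

An agent whose utility depends on outcomes beyond the current step would generally not be indifferent here, which is exactly the instrumental-convergence pressure under discussion.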
Have you read the paper?
I did read two-thirds of the paper, and I tried my best to understand it, but apparently I failed.
The reason I suspect you haven’t is that whether an agent is “myopic” or not is irrelevant to the argument. Where we may disagree is over the nature of goal-having, as Seth Herd pointed out. If you want to find a challenge to the argument, that’s the place to look.
It is possible that we also disagree on the nature of goal-having. I reserve the right to find my own places to challenge your argument.
Ha, yes, fair enough