You’re making your utility function path-dependent on the detailed cognition of the Friendly AI trying to help you!
Wouldn’t it be a lot clearer to say that it’s dependent on, not the FAI’s algorithm, but the FAI’s actions in the counterfactual cases where you worked more or less hard?