The assignment of probabilities to actions doesn’t influence the final decision here. We just need to assign probabilities to everything. They could be anything, and the decision would come out the same.
Aren’t there meaningful constraints here? If I think it’s equally likely that I’m in L-world or R-world, and that this is independent of my action, then I have the constraint that P(Left, L-world) = P(Left, R-world), another constraint that P(Right, L-world) = P(Right, R-world), and, if I haven’t decided yet, the constraint that each of these joint probabilities is positive (since at my present state of knowledge I could still take either action). But beyond that, positive linear scalings are irrelevant.
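A minimal sketch of why positive scalings don’t matter, using made-up utilities (the payoff numbers and the `decision` helper are my own illustration, not anything from the discussion): under the independence constraint, P(action, world) = P(action) × 0.5, and the conditional expected utility E[U | action] divides P(action) back out, so any positive assignment of action probabilities yields the same choice.

```python
# Hypothetical payoffs for illustration only.
U = {("Left", "L-world"): 1.0, ("Left", "R-world"): 0.0,
     ("Right", "L-world"): 0.0, ("Right", "R-world"): 2.0}

def decision(p_left):
    """Pick the action with higher conditional expected utility, given
    P(Left) = p_left and worlds independent of action (each world 0.5)."""
    assert 0 < p_left < 1  # undecided: every action still possible
    exp_u = {}
    for action in ("Left", "Right"):
        # Independence gives P(action, L-world) = P(action, R-world)
        # = P(action) * 0.5, matching the constraints above.  Conditioning
        # on the action cancels P(action), so the scaling is irrelevant.
        exp_u[action] = sum(0.5 * U[(action, w)]
                            for w in ("L-world", "R-world"))
    return max(exp_u, key=exp_u.get)

# The decision comes out the same for any positive action probabilities.
assert {decision(p) for p in (0.1, 0.5, 0.9)} == {"Right"}
```

The only role the P > 0 constraint plays is to keep the conditional expectations well-defined for every action.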