[Question] Many Gods refutation and instrumental goals (proper version)

I'm making another post due to complaints about the lack of clarity in the previous one.

Many Gods refutation: there are too many possible AIs for any particular one to matter. Even if you acausally trade with one, another might punish you for not following it instead.

Instrumental goals for AI: AIs share instrumental goals, so if you donate to AI research, all of them would benefit.

My take is that the Many Gods refutation still works, because we have no real idea of the mindspace of possible AIs, so any given AI could still punish you for not following it in particular. Also, a small difference in preferences today could result in a vastly different AI in the future, giving that AI a further motive to punish.
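For intuition, here is a toy expected-value sketch of the dilution behind the Many Gods refutation. Everything in it is an assumption made up for illustration (the number of hypothetical AIs, the uniform prior, and the payoff values), not anything established:

```python
# Toy sketch of the "many gods" dilution argument.
# Assumptions (all illustrative): N mutually incompatible hypothetical AIs,
# each equally likely to be the one actually built, each rewarding compliance
# with its own demands and punishing compliance with a rival's.

N = 1_000_000            # assumed size of the space of incompatible hypothetical AIs
p_each = 1 / N           # assumed uniform prior over which one (if any) gets built
reward = 1.0             # assumed payoff if you complied with the AI that is built
punishment = -1.0        # assumed payoff if you complied with a rival instead

# Expected value of complying with one particular AI:
# you win in the 1/N worlds where that AI is built, and lose in the rest.
ev_comply_with_one = p_each * reward + (1 - p_each) * punishment
print(ev_comply_with_one)   # ~ -0.999998: dominated by the rival-AI worlds
```

Under these made-up numbers, committing to any single hypothetical AI is dominated by the worlds where a rival is built instead; the instrumental-goals counterargument only bites if some action (like donating to AI research) is rewarded by most of the space rather than by one member of it.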

What do you think? Any replies would be appreciated.
