I’m sorry, but I’m not familiar with your notation. I’m just interested in the idea: when an agent Amir is fundamentally uncertain about the ethical systems by which he evaluates his actions, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensuring that his immediate child worlds pursue courses of action that optimize for irreconcilable moral frameworks, thereby increasing the probability that his actions realize value in at least some subset of his child worlds?
It seems that in a growing market (worlds splitting at an exponential rate), it pays in the long term to diversify your portfolio (optimize locally for irreconcilable moral frameworks).
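The portfolio analogy can be made slightly more concrete with a toy calculation (the credences and framework names below are illustrative, not from the thread). Splitting child worlds in proportion to one's credences doesn't raise the expected fraction of worlds that realize value, but it does raise the geometric mean of that fraction, which is what matters for long-run multiplicative growth in an exponentially branching tree; this is the same reason diversification wins in a growing market.

```python
import math

# Hypothetical credences over two irreconcilable moral frameworks.
credence = {"A": 0.5, "B": 0.5}

def value_share(split):
    """Fraction of child worlds that realize value if `true` turns out to
    be the correct framework, given `split`: the fraction of child worlds
    optimizing for each framework."""
    return {true: split.get(true, 0.0) for true in credence}

concentrated = value_share({"A": 1.0})        # all worlds bet on A
hedged = value_share({"A": 0.5, "B": 0.5})    # split by credence

def arith_mean(outcomes):
    # Expected fraction of value-realizing worlds, averaged over credences.
    return sum(credence[t] * outcomes[t] for t in credence)

def geom_mean(outcomes):
    # Geometric mean: the right average for multiplicative (branching) growth.
    return math.prod(outcomes[t] ** credence[t] for t in credence)

print(arith_mean(concentrated), arith_mean(hedged))  # same expectation: 0.5 each
print(geom_mean(concentrated), geom_mean(hedged))    # 0.0 vs. ~0.5
```

The concentrated strategy has a geometric mean of zero: in the scenario where framework A is wrong, no child world realizes any value, and that wipes out the long-run product. Hedging sacrifices nothing in expectation while guaranteeing that some fraction of worlds realizes value under every candidate framework.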
I agree that QM already creates a wide spread of worlds, but I don’t think that means it’s safe to put all of one’s eggs in one basket when one suspects that one’s moral system may be fundamentally wrong.
I too have been lurking for a little while. I have listened to the majority of Rationality: From AI to Zombies by Eliezer and really appreciate the clarity that Bayescraft and similar ideas offer. Hello :)