On further reflection, this post highlights an important omission in TDT (aka the categorical imperative): how do you judge the similarity of other agents to you? If each of your actions establishes a universal law, exactly how wide and universal does it become? You may think of yourself as optimizing your little corner of the world, because you feel uncomfortable around some people; or you may think of yourself as bringing about a brave new world where the accursed papists must starve because nobody hires them; or maybe a not-so-brave new world where people are routinely denied jobs for ideological reasons. Right now I see no rational argument for choosing between these different perspectives.
This is one of the central open problems in our branch of decision theory. TDT is actually even weaker: it allows one to express acausal dependencies, but figuring out what acausally depends on what is not part of the theory. Thus, in Newcomb's problem, TDT doesn't really insist on constructing the correct causal graph with the platonic agent in control, even though it informally lays out guidelines suggesting that that particular graph is a good idea (for example, two identical computations are not independent, hence the causal decision theorist's graph is in error).
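To make the two causal graphs concrete, here is a minimal sketch of Newcomb's problem, assuming a perfectly reliable predictor and the standard payoffs; the function names are illustrative, not drawn from any formal statement of TDT:

```python
def payoff(action, box_b_full):
    """Box A always holds $1,000; box B holds $1,000,000 if full."""
    total = 1_000_000 if box_b_full else 0
    if action == "two-box":
        total += 1_000
    return total

# Graph with the platonic agent in control: the predictor's fill
# decision is the same logical computation as the agent's choice,
# so the box contents acausally depend on the action.
def dependent_payoff(action):
    return payoff(action, box_b_full=(action == "one-box"))

# Causal decision theorist's graph: box contents are treated as fixed
# independently of the action, so conditional on either state of the
# box, two-boxing dominates.
def two_boxing_dominates(box_b_full):
    return payoff("two-box", box_b_full) > payoff("one-box", box_b_full)

print(dependent_payoff("one-box"))   # 1000000
print(dependent_payoff("two-box"))   # 1000
print(two_boxing_dominates(True), two_boxing_dominates(False))  # True True
```

The point of the contrast: under the dependency graph, one-boxing wins outright, while the independence graph makes two-boxing dominant in every state; the disagreement is entirely about which graph correctly represents the two identical computations.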
I suspect that ADT's take on things allows inferring dependencies between "similar" agents, just as it allows inferring acausal dependencies in Newcomb variants, but I don't understand this question, and maybe ADT needs modification to account for it. For example, there could be logical uncertainty about the outcome that is irreducible in practice, and that uncertainty happens to be the main factor in bargaining power, or in the extent to which one should consider slightly different agents controllable by your decisions.