In a simple special case where everything is symmetric, they will cooperate if the problem is formalized in the spirit of TDT, but this is basically good old superrationality, not something TDT-specific. The doubt I expressed is about the case where the TDT agents are not exactly symmetric, so that each of them can’t automagically assume that the other will do exactly the same thing. In the context of this post, this assumption may be necessary.
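To make the symmetric special case concrete, here is a minimal toy sketch (all names here are invented for illustration, not part of any published TDT specification): an agent that cooperates exactly when the opponent is an exact copy of itself, which is superrationality and nothing more.

```python
# Toy one-shot Prisoner's Dilemma where each agent sees the other's
# "source" (modeled as a plain name string). clique_bot cooperates
# only with a bit-for-bit copy of itself: Hofstadter-style
# superrationality, with no TDT-specific machinery.

def clique_bot(my_source, opponent_source):
    return 'C' if opponent_source == my_source else 'D'

def defect_bot(my_source, opponent_source):
    return 'D'  # unconditional defection

AGENTS = {"clique_bot": clique_bot, "defect_bot": defect_bot}

def play(name_a, name_b):
    """Run one game, handing each agent both sources."""
    a, b = AGENTS[name_a], AGENTS[name_b]
    return a(name_a, name_b), b(name_b, name_a)
```

Two clique_bots produce (C, C); against defect_bot the result is (D, D), since nothing short of exact identity licenses cooperation here.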
I think it is unfair to TDT to say that it is just Hofstadter’s superrationality. If TDT is an actual algorithm to which Hofstadter’s argument applies, even just in the purely symmetric version, that is a great advance. I would definitely say that about UDT.
Yes, TDT is underspecified. But is it a class of fully specified algorithms, all of which cooperate with pure clones? Or is it unclear whether there is any way of specifying which logical counterfactuals it can consider?
Two relevant links: Gary Drescher on a problem with (a specification of?) TDT; you on underspecification.
The doubt I expressed is about the case where the TDT agents are not exactly symmetric, so that each of them can’t automagically assume that the other will do exactly the same thing. In the context of this post, this assumption may be necessary.
The assumption of symmetry is not necessary in the context of this post. All that is necessary is the ability to read the other agent’s code, and to know that if they read your code they will find that you would cooperate if they do (and so on). Being the same as you isn’t privileged at all; it’s just convenient.
Code access is simply stronger than knowledge that they will do exactly the same thing as you.
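A minimal sketch of that point (the function names and the recursion budget are my own contrivances): here agents receive the opponent as callable code and may simulate it, with a depth limit so that mutual simulation terminates rather than recursing forever.

```python
# Agents take (opponent, depth) and return 'C' or 'D'. The depth
# budget bounds mutual simulation; the optimistic default at depth 0
# is what lets conditional cooperators reach mutual cooperation.
# All names are illustrative, not any published TDT algorithm.

def fair_bot(opponent, depth=3):
    if depth == 0:
        return 'C'  # optimistic base case when the budget runs out
    # "Read their code and see whether they'd cooperate with us."
    return 'C' if opponent(fair_bot, depth - 1) == 'C' else 'D'

def fair_bot_variant(opponent, depth=3):
    # Different source code, same conditional-cooperation policy.
    if depth == 0:
        return 'C'
    return 'D' if opponent(fair_bot_variant, depth - 1) == 'D' else 'C'

def defect_bot(opponent, depth=3):
    return 'D'
```

fair_bot and fair_bot_variant have different source code yet cooperate with each other, while both defect against defect_bot — a toy illustration that exact sameness isn’t what does the work; code access is.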
I think Vladimir is saying that TDT agents with a superior bargaining position might extract further concessions from TDT agents with an inferior bargaining position, or rather, that we can’t yet rigorously show that they wouldn’t do such things. In the world of one-shot PDs, numerical superiority of one kind of TDT agent over another might be such a bargaining advantage.
In the world of one-shot PDs, numerical superiority of one kind of TDT agent over another might be such a bargaining advantage.
I had been considering a whole population of agents playing lots of prisoner’s dilemmas among themselves not to be a one-shot prisoner’s dilemma. It does make sense for all sorts of other plays to be made when the situation becomes political.
Omega can wipe their memories of past interactions with other particular agents, as in the example I made up. That would make each interaction a one-shot, and it wouldn’t prevent the sort of leverage we’re talking about.
Omega can wipe their memories of past interactions with other particular agents, as in the example I made up. That would make each interaction a one-shot
I wouldn’t call a game one-shot just because memory constraints are applied. What matters is that the game being played is so much bigger than one prisoner’s dilemma. Again, I don’t dispute that there are all sorts of potential considerations to be made, even if very little evidence about the external political environment is available to the agents, as in this case. Given this, it seems likely that I don’t disagree with Vlad significantly.
If you doubt it, then I doubt it as well. I thought I saw a formal specification a while back, but perhaps that was UDT.

I thought I saw a formal specification a while back, but perhaps that was UDT.

You’re probably thinking of cousin_it’s proof sketch of cooperation in the PD. That was ADT/UDT. Reasoning about formal proofs is not part of TDT’s theory as discussed anywhere that I know of.
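For readers who haven’t seen it, the shape of that Löbian cooperation argument is roughly as follows (my loose paraphrase, assuming both players run the same proof-searching program; the agent name is illustrative):

```latex
% Let FairBot be the program: "cooperate with X iff PA proves X(FairBot) = C",
% and let S abbreviate the sentence "FairBot(FairBot) = C".
\text{By construction: } \vdash \Box S \rightarrow S
  \quad \text{(a proof that the opponent cooperates triggers cooperation, making } S \text{ true)}
\text{L\"ob's theorem: from } \vdash \Box S \rightarrow S \text{ infer } \vdash S
\text{Hence } \vdash S \text{: mutual cooperation is provable.}
```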