“Great minds think alike” is exactly the dictum you’d predict to arise socially if Reason Is Universal and the culture generating such dictums contains many valid reasoners <3
(The original source was actually quite subtle, and points out that fools also often agree.)
Complexity theory says that finding proofs is very hard while validating them is nearly trivial, and Socrates demonstrated the validating half in the Meno: with leading questions he got a young illiterate slave to generatively validate a geometry proof.
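To make the find/verify asymmetry concrete, here is a minimal Python sketch (the subset-sum example and every name in it are mine, not from the comment above): producing a certificate takes exponential search, while checking one is a single pass.

```python
from itertools import combinations

def find_subset(nums, target):
    """Search side: try every subset until one sums to target (exponential)."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset(target, subset):
    """Validation side: checking a proposed subset is one sum (membership
    checking omitted for brevity)."""
    return subset is not None and sum(subset) == target

nums = [3, 34, 4, 12, 5, 2]
proof = find_subset(nums, 9)       # the hard direction: search
print(proof)                       # (4, 5)
print(verify_subset(9, proof))     # the easy direction: check -> True
```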
Granting that such capacities are widely distributed, almost anyone reasoning carefully about a question is likely to reach conclusions that many others, reasoning similarly, will also reach.
If they notice this explicitly, they can hope that others, reasoning similarly, will notice it explicitly too. Everyone who has done this, and who acts on the thought, is then in some sense deciding once for the entire collective, and, rationally speaking, they should act in the way that “if everyone in the same mental posture acted the same” would conduce to the best possible result for them all.
This tactic of “noticing that my rationality is the rationality of all, and should endorse what would be good for all of us” was named “superrationality” by Hofstadter, and it is one of the standard solutions to the one-shot prisoner’s dilemma that lets one generate, and mostly inhabit, the good timelines.
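A minimal Python sketch of the difference, using an illustrative payoff matrix I’ve assumed (the classic 3/0/5/1 numbers, not anything from the comment): classical reasoning holds the other player’s move fixed and finds that defection dominates, while superrational reasoning treats identical reasoners as making identical choices, so only the symmetric diagonal is live, and cooperation wins.

```python
# Illustrative one-shot prisoner's dilemma; the payoff numbers are assumed
# for this sketch. PAYOFF[(my_move, their_move)] is my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def nash_choice():
    """Classical reasoning: hold the other's move fixed. D strictly
    dominates C here (5 > 3 against C, 1 > 0 against D), so defect."""
    d_dominates = all(PAYOFF[("D", t)] > PAYOFF[("C", t)] for t in "CD")
    return "D" if d_dominates else "C"

def superrational_choice():
    """Hofstadter's move: identical reasoners make identical choices, so
    only the symmetric outcomes (C,C) and (D,D) are reachable."""
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print(nash_choice())           # D
print(superrational_choice())  # C
```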
Presumably the SaaS people aren’t superrational? Or they are, and I’ve missed a lemma in the proofs they are using in their practical reasoning engines? Or something? My naive tendency is to assume that “adults” (the grownups who are good, and ensuring good outcomes for the 7th generation?) are more likely to be superrational than immature children rather than less likely… but I grant that I could be miscalibrated here.
A failure mode might also be that the SaaS people are assuming the other players are not superrational. In that case a superrational player should also defect.
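This seems right, and it can be made quantitative with a hedged sketch (the mirroring model and the credence parameter p are my assumptions, not the commenter’s): if you only assign credence p to the other player being superrational, cooperation is worth it only above a threshold.

```python
# Assumed model: with probability p the other player is a superrational
# "twin" whose move mirrors mine; otherwise they defect no matter what.
# Same illustrative payoffs as in the sketch above.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def expected(my_move, p):
    """Expected payoff given credence p that the other player mirrors me."""
    return p * PAYOFF[(my_move, my_move)] + (1 - p) * PAYOFF[(my_move, "D")]

for p in (0.1, 0.3, 0.5, 0.9):
    best = max("CD", key=lambda m: expected(m, p))
    print(f"p={p}: E[C]={expected('C', p):.2f}  E[D]={expected('D', p):.2f}  -> {best}")
# With these payoffs, cooperating only wins once p > 1/3: low credence that
# the other player is superrational makes defection the superrational move.
```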
Without having put much thought into it, I believe (adult) humans cooperating via this mechanism is in general very unlikely. Cooperation of this kind relies on all agents reaching the same (or sufficiently similar?) conclusions about the payoff matrix and about the nature of the other agents. In human terms, that requires everyone’s ability to reason correctly about the problem and about everyone else’s behavior, AND everyone having the right information. I don’t think that happens very often, if at all. The “everyone predicting each other’s behavior correctly” part seems especially unlikely to me. Also, slightly different (predicted) information (e.g. AGI timelines in our case) can yield very different payoff matrices? A sketch of that last point follows below.
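To illustrate (the two payoff tables here are invented for the sketch, with “short vs. long AGI timelines” only as a stand-in for differing beliefs): feed two agents slightly different perceived payoffs and the superrational diagonal comparison can flip, so even two perfectly superrational players end up mismatched.

```python
# Assumed illustration: two agents translate different beliefs into
# different perceived payoffs; the same superrational rule then
# prescribes different moves, so the symmetry argument breaks down.
def superrational_move(payoff):
    """Pick the best symmetric outcome, as in the sketches above."""
    return max("CD", key=lambda m: payoff[(m, m)])

short_timelines = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
long_timelines  = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 4}

print(superrational_move(short_timelines))  # C
print(superrational_move(long_timelines))   # D: the diagonal flipped
```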