I was once at a meetup, and there were some people there new to LessWrong. After listening to a philosophical argument between two long-time meetup group members, where they agreed on a conclusion that was somewhere between their original positions, a newcomer said “sounds like a good compromise,” to which one of the old-comers (?) said “but that has nothing to do with whether it’s true… in fact now that you point that out I’m suspicious of it.”
Later in the meetup, an argument ended with another conclusion that sounded like a compromise. I pointed it out. One of the arguers was horrified to agree with me that compromising was exactly what he was doing.
Is this actually a failure mode though, if you only “compromise” with people you respect intellectually? In retrospect, this sounds kind of like an approximation to Aumann agreement.
Each side should update on the other’s arguments and data, and on the fact that the other side believes what it does (insofar as we can’t perfectly trust our own reasoning processes). This often means each updates toward the other’s position. But it certainly doesn’t mean they’re going to update so far as to agree on a common position.
You don’t need to try to approximate Aumann agreement, because you don’t believe that either you or the other party is perfectly rational, so you can’t treat your own or the other’s beliefs as carrying that kind of weight.
Also, people who start out looking for a compromise might be led to compromise in a bad way: A’s theory predicts the ball will fall down, B’s theory predicts it will fall up, and the compromise theory predicts it will stay in place, even though both A and B have evidence against that.
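The ball example can be made concrete with a toy numeric sketch (all probabilities made up for illustration): even a proper mixture of the two opposed models still assigns almost no probability to the "middle" outcome, which is exactly what splitting the difference on the conclusion would pick.

```python
# Toy illustration (hypothetical numbers): mixing two opposed models is not
# the same as adopting the "middle" conclusion.
outcomes = ["falls down", "stays in place", "falls up"]

p_a = [0.90, 0.05, 0.05]  # A's theory: the ball falls down
p_b = [0.05, 0.05, 0.90]  # B's theory: the ball falls up

# An equal-weight mixture of the two models:
mixture = [(a + b) / 2 for a, b in zip(p_a, p_b)]

# "Stays in place" is still the least likely outcome under the mixture,
# yet it is the conclusion a split-the-difference compromise would select.
print(dict(zip(outcomes, mixture)))
print("Least likely outcome:", outcomes[mixture.index(min(mixture))])
```

The point of the sketch: honest aggregation of the two sides' beliefs spreads probability over both original predictions; it does not concentrate it on the compromise position that both sides' evidence disfavors.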
Part of intellectual debate is that you judge arguments on their merits instead of negotiating what’s true. Compromising suggests that you are engaged in a negotiation over what’s true rather than a search for the actual truth.
It doesn’t matter what it sounds like you are doing or what you think you are doing. One thing matters: how good is your actual answer?
Yes, but if you followed a crappy reasoning process, you’re less likely to end up with a high-quality answer than if you had followed a good one.
Theoretically, if you treat your own previous position as a prior and the other guy’s arguments as evidence, standard Bayesian updating will lead you to a new position somewhere in between, which will look like a compromise.
Obviously there are a lot of caveats, e.g. the assumption that an intermediate position even makes sense (that is, that the two positions lie on some kind of continuous axis), etc.
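When the positions do lie on a continuous axis, the in-between result falls out directly. A minimal sketch, with made-up numbers: treat your current position as a normal prior and the other side’s case as one noisy observation; the conjugate-normal posterior mean is a precision-weighted average of the two positions.

```python
# Minimal conjugate-normal sketch (all numbers hypothetical).
# Updating a normal prior on one normal-likelihood observation yields a
# posterior mean that is a precision-weighted average: it lands between
# your position and theirs, i.e. it looks like a compromise.

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Posterior (mean, variance) after one observation with known noise."""
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# My position is 0; the other side argues for 10, and I give their case a
# quarter of the weight I give my own reasoning (obs_var = 4 * prior_var).
post_mean, post_var = normal_update(prior_mean=0.0, prior_var=1.0,
                                    obs=10.0, obs_var=4.0)
print(post_mean)  # 2.0: strictly between the positions, nearer the stronger prior
```

Note how the result is not a 50/50 split: how far you move depends on how much weight the other side’s case carries relative to your prior, not on a norm of meeting in the middle.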
Only to the extent that your current position doesn’t already take those arguments into account (if it does, the arguments fail to address any disagreement). More than that: by conservation of expected evidence, some arguments should change your mind in the opposite direction from the one they are intended to support.
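The conservation claim can be checked numerically (probabilities made up for illustration): since P(H) = P(H|E)P(E) + P(H|¬E)P(¬E), your expected posterior before hearing an argument out must equal your prior. So if a compelling argument would move you toward H, then finding the argument uncompelling must move you away from H.

```python
# Sketch of conservation of expected evidence (hypothetical numbers).
# E = "the arguer produces a compelling case for H".
p_h = 0.4            # prior on the hypothesis
p_e_given_h = 0.9    # chance of a compelling case if H is true
p_e_given_not = 0.5  # chance of a compelling case anyway if H is false

p_e = p_e_given_h * p_h + p_e_given_not * (1 - p_h)

# Posterior after a compelling case, and after the case falls flat:
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Averaging the two posteriors by their probabilities recovers the prior,
# so updates in one direction must be balanced by updates in the other.
expected_posterior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(round(p_h_given_e, 3), round(p_h_given_not_e, 3))
print(round(expected_posterior, 3))  # equals the prior, 0.4
```

In words: if every possible argument you might hear could only ever push you toward the arguer’s conclusion, your prior was wrong to begin with.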