“if you make five accurate maps of the same city, then the maps will necessarily be consistent with each other; but if you draw one map by fantasy and then make four copies, the five will be consistent but not accurate.”
This reminds me of one of my major points about Aumann Agreement: in actuality, if two people have been trying for any substantial amount of time to reach true beliefs, they won’t merely agree after encountering one another and exchanging information; in most cases they will, to a very close approximation, agree BEFORE encountering one another. When you find someone who disagrees with you, this is very strong evidence that either you, or that other person, or both, HAVE NOT BEEN TRYING to reach true beliefs in the relevant domain. If you have not been trying, why should you start now by changing your belief? If they have not been trying and you are trying, you should NOT change your beliefs in a manner that prevents you from being able to predict your disagreement with them.
For example, I not only don’t persist in disagreement with people about whether the sun is hot and ice is cold; I don’t even enter into disagreements with people about these questions. When I think that gravity is due to a “force of attraction” and someone else thinks it’s due to “curvature of space-time”, it turns out, predictably, that upon reflection we agreed to a very close approximation before exchanging information. When I was in high school and believed that the singularity was centuries away, and that I knew cryonics wouldn’t work, it turned out, upon reflection, that I had not been trying to reach a realistic model of the future. Rather, I had been trying to reach a model that justified the behaviors of the people around me under a model of them as rational agents, a model I had arrived at not by trying to predict their behavior or statements but by trying to justify my beliefs that
a) I should ‘respect’ the people I encountered unless I observed on an individual level that a person wasn’t ‘worthy’ of ‘respect’.
and
b) I should only ‘respect’ people who I believed to be rational moral agents in something like a Kantian sense.
Those beliefs had been absorbed on the basis of argument from authority in the moral domain, which I accepted because I had been told to be skeptical of factual claims but not of moral claims (though I examined both my model of the world and my model of morality for internal consistency to a fairly high degree).