I would rather see politics at LW done in a way that playfully respects the complications that are obvious, and ends up doing something surprising and hopefully awesome. Let me see if I can develop this a bit...
Imagine starting with a pool of people who think their brains are turbo-charged and who “enjoy the possibility of discussing political topics in a ‘rational’ atmosphere (truth-searching, not us-vs-them, aware of biases and fallibility, etc.)”. If they’re really actually rational, you’d kind of expect them to be able to do things like Aumann update with people with similar capacities and inclinations but different conclusions on a given hot-button political subject. So the trick would be to find two people whose starting points on a political topic were radically different and have them be each other’s chavruta, discussing the subject with each other until they either agree on the relevant and substantive facts or agree to disagree.
Now, maybe this is just me, but it seems to me that having chavruta discussions in a public forum would introduce all kinds of signalling complications and could impose negative externalities on people who run into them without adequate background. To avoid both problems, it seems like it would be better to do this by phone, IM, or email. IM and email would be easier to log, but voice would probably be helpful for issues of tone and for hashing out a common vocabulary fast.
I would expect this to be quite educational. Also, I think it would be neat to read a joint write-up of how it went afterwards, where the reader finds out about the initial dramatically different opinions and hears about the sorts of higher-level surprises that came up in the process itself: how long it took, what was helpful, what was learned in general.
I’d personally prefer not to hear the details of the final conclusion other than the single yes/no bit about whether agreement was reached or not, because I would expect it would re-introduce signaling issues into the discussion itself, make future updates harder, and sort of implicitly suggest to the community and the wider world that these two people’s conclusion is “endorsed by all reasonable people in general”. (Which suggests a second order thing to try: have two pairs of people update on the same subject and then compare each pair’s agreements...)
It would be pretty awesome if LW had a thread every so often where people broadcast interest in finding an “Aumann chavruta” for specific topics, including political topics. This might help with people’s specific “dialogue cravings”. It might eventually start clogging up the site itself (the way meetup posts clogged things up for a while), but that seems like it would be a good problem to have to solve :-)
The reason I am not optimistic about this sort of thing is that many people know someone clever who has radically different political opinions from them, and they often talk about politics quite a bit. So those sorts of Aumann updates happen often, but they frequently end at a stance like “we both understand each other’s opinions of the facts, but have different value systems, and so disagree” or something like “we both assign the same likelihood ratio to the evidence, but have very different priors.”
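For what it’s worth, the “same likelihood ratio, very different priors” ending can be made concrete with a toy Bayesian sketch (the numbers here are invented for illustration): even when two people weigh each new piece of evidence identically, their posteriors stay apart until enough shared evidence accumulates.

```python
# Toy sketch: two agents agree on the likelihood ratio of each piece
# of evidence but start from different priors. In odds form, Bayes'
# rule after n shared observations is:
#   posterior_odds = prior_odds * likelihood_ratio**n

def posterior(prior, likelihood_ratio, n):
    """Posterior probability after n observations, each carrying the
    given likelihood ratio in favor of the hypothesis."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n
    return odds / (1 + odds)

# Agent A starts at 90% confidence, agent B at 10%; each shared
# observation favors the hypothesis by a modest factor of 2.
n = 0
while abs(posterior(0.9, 2, n) - posterior(0.1, 2, n)) > 0.05:
    n += 1
print(n)  # -> 8 shared observations before they roughly agree
```

The point of the sketch is just that the gap closes only as a function of accumulated common evidence, so “very different priors” is really a claim about how much shared evidence the conversation would need to surface.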
I guess my thought was that LWers are likely to think that it’s possible to implement values incoherently (i.e., correctably), and so might have much more to say (and learn) than your average “clever person”. Scope neglect, cognitive dissonance, etc., etc.
My guess would be that really solid rationalists might turn out to disagree with each other over really deep values, like one being primarily selfish and sadistic while another has lots of empathy, with each able to see that the other has built a personal narrative around such tendencies. But I wouldn’t expect them to disagree, for example, over whether someone was really experiencing pain or not. I wouldn’t expect them to get bogged down in a hairsplitting semantic claim about whether a particular physical entity “counts as a person” for the sake of a given moral code.
And “we just have different priors” usually actually means “that would take too long to explain”, from what I can tell. Pretty much all of us started out as babies, and most of us have more or less the same sensory apparatus, went through Piaget’s stages, and so on and so forth. Taking that common starting point and “all of life” as the evidence, it seems likely that differences in opinion could take days or weeks or months of discussion to resolve, rather than 10 minutes of rhetorical hand-waving. I once saw an evangelical creationist argued into a direct admission that creationism is formally irrational, but it took the rationalist about 15 hours over the course of several days to do it (and that topic is basically a slam dunk). I wouldn’t expect issues that are legitimately fuzzy and emotionally fraught to be dramatically easier than that was.
...spelling this out, it seems likely to me that being someone’s Aumann chavruta could involve substantially more intellectual intimacy than most people are up for. Perhaps it would be good to have some kind of formal non-disclosure contract or something like that first, as with a therapist, confessor, or lawyer?
> Taking that common starting point and “all of life” as the evidence, it seems likely that differences in opinion could take days or weeks or months of discussion to resolve, rather than 10 minutes of rhetorical hand waving.
All of our lives, or even a month of it, probably imparted to us far more evidence than we could explain to each other in a month of discussion. The trouble is that much of the learning got lodged in memory regions that are practically inaccessible to the verbal parts of our brains. I can’t define Xs and you can’t define Ys, but we know them when we see them.
“We just have different priors” is probably not the best way to describe these cognitive differences—I agree with you there. But we could still be at a loss to verbally reason our way through them.
I don’t think people have any sort of capacity to fully describe their entire audio/video experience in full resolution. But if you think about the real barriers to more limited communication, I predict you’ll be able to imagine plausible ways to circumvent those barriers for a specific purpose: developing a common model of a particular real-world domain, precise enough that both parties derive similar strategic conclusions in limited domains.
> I can’t define Xs and you can’t define Ys, but we know them when we see them.
Maybe I’m misunderstanding you, but my impression is that this is what extensional definitions and rationalist taboo are for: the first to ground words in shared examples, and the second to trim away the confusing connotations that already adhere to the words people have started to use. The procedure for handling the apparently incommensurable “know it when I see it” concepts of each party is thus to coin new words in private for the sake of the conversation, master the common vocabulary, and then communicate using these new terms and see whether the reasonable predictions of the novel common understanding square with observable reality.
A lot of times I expect that each person will turn out to have been somewhat confused, perhaps by committing a kind of fallacy of equivocation due to lumping genuinely distinct things under the same “know it when I see it” concept, which (in the course of the conversation) could be converted to a single word and explored thoroughly enough to detect the confusion, perhaps suggesting the need for more refined sub-concepts that “cut reality at the joints”.
When I think of having a conversation with a skilled rationalist, I expect them to be able to deploy these sorts of skills on the most important-seeming source of disagreement, rather than having to fall back on “agreeing to disagree”. They might still do so if the estimated cost of the time in conversation is higher than the expected benefit of agreement, but they wouldn’t be forced into it by raw incapacity. That is, it wouldn’t be a matter of incapacity, but of a pragmatically reasonable lack of interest. In some sense, one or both of us would be too materially, intellectually, or relationally impoverished to be able to afford thinking clearly together on that subject.
However, notice how far the proposal has come from “talking about politics in a web forum”. It starts to appear as though it would be a feat of communication for two relatively richly endowed people, in private, to rationally update with each other on a single conceptually tricky and politically contentious point. If that conversational accomplishment seems difficult for many people here, does it seem easier or more likely to work for many people at different levels of skill, to individually spend fewer hours, in public, writing for a wide and heterogeneously knowledgeable audience, who can provide no meaningful feedback, on that same conceptually tricky and politically contentious point?