I think there are disincentives to doing it on the internet: even if you expect good faith from your partner, you don’t expect good faith from all the other viewers.
If you change your mind for all the world to see, people arguing in bad faith can use it as evidence that you can be wrong, and so are likely to be wrong about other things you say as well. A real-world example is politicians being accused of flip-flopping on issues.
You touch on this with:
instead of continuing to argue in public where there’s a lot more pressure to not lose face, or steer social norms, they continue the discussion privately, in whatever the most human-centric way is practical.
How will this norm spread?
We need public examples for people to have an idea of what good looks like.
Unless we can hide it away in a culture where it is okay to be wrong about things, or somehow anonymise it so you can’t tell who is being wrong, it doesn’t seem like it would scale.
We need public examples, agreed. I think this undersells the difficulty here.
In an argument or discourse worth having, a lot of the beliefs feeding in are going to be things that:
A) Are hard to state with precision, or require the sum of a lot of different claims.
B) Involve beliefs or implications that risk getting a very negative reaction on the internet. There are a lot of important facts about the world you do not want to be seen endorsing in public, as much as we wish it were not so.
C) Involve claims that you do not have a social right to make.
D) Involve claims you can’t provide well-articulated evidence for, or can’t without running into some of A-C.
In my experience, advanced actually-changing-minds discussions are very hard to follow and very easy to misconstrue. They involve saying things that make sense in context to the particular person you’re talking to, but that on the surface often sound like absurd, immoral, or taboo claims.
I still think trying to do this is Worth It. I would start by thinking harder about which topics we can do this on in public that dodge these problems while still being non-trivial enough to be worthwhile.
There’d likely be a multi-step plan, which depends on whether your goals are more “raise the sanity waterline” or “build an intellectual hub that makes rapid progress on important issues.”
Step 1: Practice it in the rationality community. Generally get people on board with the notion that if there’s an actually-important disagreement, people try to resolve it. This would require a few public examples of productive disagreement and Double Crux (I agree that the lack of those is a major issue).
Then, when people have a private dispute, they come back saying “Hey, this is what we talked about, this is what we agreed on, and here are any meta-issues we stumbled upon that we think others should know about re: productive disagreement.”
Step 2: Do that in semi-public places (Facebook, other communities we’re part of, etc.), in a way that lets nearby intellectual communities get a sense of it. (Maybe if we can come up with clear examples and better introduction articles, it’d be good to share those.) The next time you get into a political argument with your uncle, rather than angrily yelling at each other, try meeting privately to talk it through, and then share it with your family. (Note: I have some uncles for whom I think this would work and some for whom it definitely wouldn’t.)
(This will require effort and emotional labor that may be uncomfortable)
Step 3: After getting some practice doing productive disagreement (and Double Crux in particular) with random people, do it in a somewhat higher-stakes environment. Try it when a dispute comes up at your company. (This may only work if you have the sort of company that already at least nominally values truthseeking/transparency/etc., so that it feels like a natural extension of the company culture rather than a totally weird thing you’re shoving into it.)
Step 4: A lot of things could go wrong in between steps 1-3, but afterwards, make deliberate efforts to expand it into wider circles. (I would not leap to “try to get politicians to do it” and the like. Instead, try to invoke it in places where there isn’t so much social penalty for changing minds. In the world where this works, I think it works by raising the sanity waterline so high that politicians fall underneath it, not by trying to get politicians to jump on board.)