In complex or contentious discussions, the central or top-level topic is often altered or replaced. We’re all familiar from experience with this phenomenon. Topologically this is sort of like a wormhole:
Imagine two copies of R³ minus the open unit ball, glued together along their unit spheres. Imagine enclosing the origin with a sphere of radius 2. This is a topological separation: the origin is separated from the rest of your space, the copy of R³ that you’re standing in. But what’s contained in the enclosure is an entire world just as large; therefore, the origin is not really contained, merely separated. One could walk into the enclosure, pass through the unit sphere where the two copies are glued, and come out the other side into the alternative copy of R³.
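To spell the construction out, here is a minimal formalization in my own notation (not from the original post): write $\mathring{B}_1$ for the open unit ball and index the two copies by $\{0,1\}$, identifying the boundary spheres pointwise:

$$
M \;=\; \Bigl(\bigl(\mathbb{R}^3 \setminus \mathring{B}_1\bigr)\times\{0,1\}\Bigr)\,\Big/\,\bigl[\,(x,0)\sim(x,1)\ \text{whenever}\ \lVert x\rVert = 1\,\bigr].
$$

The radius-2 sphere in copy 0, $\{(x,0):\lVert x\rVert = 2\}$, separates $M$ into the unbounded outside $\{(x,0):\lVert x\rVert > 2\}$ and an “enclosed” piece that contains all of copy 1.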
You come to a crux of the issue, or you come to a clash of discourse norms or background assumptions; and then you bloop, where now that is the primary motive or top-level criterion for the conversation.
This has pluses and minuses. You are finding out what the conversation really wanted to be, finding what you most care about here, finding out what the two of you most ought to fight about / where you can best learn from each other / the highest-leverage ideas to mix. On the other hand, you lose some coherence; there is disorientation; it’s harder to build up a case or to integrate information into single nodes for comparison; and it’s harder to follow. [More theory could be done here.]
How to orient to this? Is there a way to use software to get more of the pluses and fewer of the minuses, e.g. in order to have better debates? E.g. by providing orienting structure with signposts and reminders, but without clumsy, artificially rigid restrictions on the transference of salience?
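One hedged sketch of what such software might look like (my own illustrative design, not an existing tool): keep an explicit stack of top-level topics, record each wormhole transition along with its stated reason, and offer a neutral signpost on demand. Roughly, in Python:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Topic:
    """A top-level claim or question currently driving the conversation."""
    statement: str
    opened_at: datetime = field(default_factory=datetime.now)
    reason_for_shift: Optional[str] = None  # why the conversation wormholed here

@dataclass
class Conversation:
    """Keeps the 'stack trace' of topic shifts so participants can reorient."""
    topics: List[Topic] = field(default_factory=list)

    def open_topic(self, statement: str, reason_for_shift: Optional[str] = None) -> None:
        """Record a new top-level topic (a wormhole transition if one is already open)."""
        self.topics.append(Topic(statement, reason_for_shift=reason_for_shift))

    def signpost(self) -> str:
        """A neutral reminder of where the conversation has been and where it is now."""
        lines = []
        for i, topic in enumerate(self.topics):
            marker = "now:" if i == len(self.topics) - 1 else f"{i + 1}."
            shift = f"  [shifted because: {topic.reason_for_shift}]" if topic.reason_for_shift else ""
            lines.append(f"{marker} {topic.statement}{shift}")
        return "\n".join(lines)

# Usage: the signpost orients without blocking the shift itself.
convo = Conversation()
convo.open_topic("Are there feasible plans for safe AGI on short timelines?")
convo.open_topic("Do we even share background assumptions about what counts as a 'plan'?",
                 reason_for_shift="clash of background assumptions surfaced as the real crux")
print(convo.signpost())
```

The design choice here is that the tool only remembers and reflects; the only “restriction” is the visible trace, so salience can still move freely.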
A particularly annoying-to-me kind of discourse wormhole:
Alice starts arguing and the natural interpretation of her argument is that she’s arguing for claim X. As the discussion continues and evidence/arguments[1] amass against X, she nimbly switches to arguing for an adjacent claim Y, pretending that Y is what she’s been arguing for all along (which might even go unnoticed by her interlocutors).
Mhm. Yeah, that’s annoying. Though in her probabilistic defense:
In fact her salience might have changed; she might not have noticed either; it might not even be a genuinely adversarial process (even subconsciously).
She might reasonably not know exactly what position she wants to defend, while still being able to partially and partially-truthfully defend it. For example, she might have a deep intuition that incest is morally wrong; and then give an argument against incest that’s sort of true, like “there’s power differences” or “it could make a diseased baby”; and then you argue / construct a hypothetical where those things aren’t relevant; and then she switches to “no but like, the family environment in general has to be decision-theoretically protected from this sort of possibility in order to prevent pressures”, and claims that’s what she’s been arguing all along. Where from your perspective, the topic was the claim “disease babies mean incest is bad”, but from hers it was “something inchoate which I can’t quite express yet means incest is bad”. And her behavior can be cooperative, at least as described so far: she’s working out her true rejection by trying out some rejections she knows how to voice.
Sometimes I’m talking to someone (e.g. about AGI timelines) and they’ll start listing facts. And the facts don’t seem immediately relevant, or like, I don’t know how to take them on board or to respond, because I don’t know what argument is being made. And if I try to clarify what argument is being made, they just keep listing more facts, which disorients me; and I often imagine that they have a background assumption like “A implies B”, and so they have started giving supporting facts to explain and give evidence for A. And then I’m confused because I don’t know what B even is. So I try to ask; but what I get back is more stuff about A; and they are hoping that if they just say enough stuff and convince me of A, then of course, since A implies B and B is obviously relevant, I will conclude B. But in fact I might not understand B, or not think A implies B, or not think B is very relevant.

Now, suppose that B actually is something that I understand and that is importantly on-topic, but I don’t agree with or understand that A implies B. From my perspective, they are saying a bunch of off-topic stuff; and when I say “no but B is false because X”, they keep saying stuff about A, and trying different approaches to convincing me of A; which looks to me like jumping around off-topically without acknowledging the irrelevance. This is frustrating, but it’s not only their fault (for not explicating A implies B); it’s the me-them system’s fault for not clarifying the disconnect.

From their perspective, I’m not updating on A appropriately. And I might actually accede to their claim about A, but not about B; and they could be annoyed at that, and think I’m not acknowledging the success of their argument; but from my perspective A wasn’t the topic, but rather B was; but from their perspective it’s the same topic because A implies B, and they may not even clearly distinguish the two in their head; and in fact, even on making A vs. B vs. “A implies B” explicit, they may not agree that the propositions are very distinct, e.g. because of skew ontologies.
(Also though, of course what you describe can just be annoying anyway, and in some cases is genuinely bad or adversarial behavior. I do feel there should be more social tech here. A possible example is simply the skill of stack traces in general; and in particular, being able to, without too much cogload, neutrally point out “previously you said X, and we went down this thread, and now you’re saying not-X and also Y; does that seem right? what happened?” and such.)
[1] Or even, eh, social pressures, etc.