i recently ran into a vegan advocate tabling in a public space, and spoke briefly to them for the explicit purpose of better understanding what it feels like to be the target of advocacy on something i feel moderately sympathetic towards but not fully bought in on. (i find this kind of thing very valuable for noticing flaws in myself and improving; it's much harder to be perceptive of one's own actions otherwise.) the part where i am genuinely quite plausibly persuadable of his position in theory is important; i think if i had talked to e.g. flat earthers, one might say my reaction is just because i'd already decided not to be persuaded. several interesting things i noticed (none of which should be surprising or novel, especially for someone less autistic than me, but as they say, intellectually knowing things is not the same as actual experience):
this guy certainly knew more about e.g. the health impacts of veganism than i did, and i would not have been able to hold my own in an actual debate.
in particular, it’s really easy for actually-good-in-practice heuristics to come out as logical fallacies, especially when arguing with someone much more familiar with the object level details than you are.
interestingly, since i was pushing the conversation in a pretty meta direction, he actually explicitly said something to the effect that he's had thousands of conversations like this and has a response to basically every argument i could make: did i really think i had something he hadn't heard before? in that moment i realized this was probably true, and that this nonetheless did not necessarily mean that his claim was correct. and it certainly didn't make me feel any more emotionally willing to accept his argument.
in the past, i've personally had the exact experience of arguing for something where i had a deep enough dialogue tree that other people couldn't easily find any holes, yet the other people remained unconvinced. i felt really confused about why they weren't seeing the very straightforward argument, and then later it turned out i was actually just wrong and the other people were applying correct heuristics.
my guess is that, at the extreme, with sufficient prep and motivation, you can get into this position for arbitrarily wrong beliefs. probably if i talked to flat earthers for a while, i'd get deep enough into their dialogue tree that i'd stop being able to refute them on the object level and would (for the purposes of my own epistemics, not to convince an external audience) have to appeal to cognitive heuristics that are isomorphic to some cognitive fallacies.
of course, we shouldn't always appeal to the cognitive heuristics: doing so is almost always reasonable, and yet you will miss out on the one thing that actually does matter. to do anything interesting, you do have to eventually dig into some particular spicy claims and truly resolve things at the object level. but there are so many things in the world, and resolving them takes so much time, that you need some heuristics to reject a whole bunch of things out of hand and focus your energy on the things that matter.
like, i could invest energy until i can actually refute flat earthers completely on the object level, and i’d almost certainly succeed. but this would be a huge waste of time. on the other hand, i could also just never look into anything and say “nothing ever happens”. but every important thing to ever happen did, in fact, happen at some point [citation needed].
it's really, really irritating to be cut off mid-sentence. this is hard to admit because i also have an unconscious tendency to do this (which i'm currently working on fixing), and my guess is other people get very annoyed when i do it to them.
sometimes i do enjoy being cut off in conversations, but on reflection this is only when i feel like (a) the conversation is cooperative enough that we're trying to discover the truth together, and (b) the other person actually understands what i'm saying before i finish saying it. but since these conditions are much rarer and require high levels of social awareness to detect, it's a good first-order heuristic that interrupting people is bad.
i found it completely unhelpful to be told that he was also in my shoes X years ago with similar uncertainties when he was deciding to become vegan; or to be told that he had successfully convinced Y other people to become vegan; or to be subjected to what i want to call "therapy speak". i only want to therapyspeak with people i feel relatively close to, and otherwise it comes off as very patronizing.
i think there’s a closely related thing, which is genuine curiosity about people’s views. it uses similar phrases like “what makes you believe that?” but has a very different tone and vibe.
his achievements mean a lot more to him than to me. i don't really care that much what he's accomplished for the purposes of deciding whether his argument is correct. any credibility points conferred are more than cancelled out by it being kind of annoying. even if it is true, there's nothing more annoying than hearing someone say "i've thought about this more than you / accomplished more than you have because of my phd/experience/etc, so you should listen to me", unless you really, really, really trust this person.
the calculus changes when there is an audience.
therapyspeak is still probably better than nothing, and can be a useful stepping stone for the socially incompetent.
one possible take is that i'm just really weird, and these modes of interaction work well for normal people because they think less independently, or need to be argued out of having poorly-thought-out bad takes, or something like that, idk. i can't rule this out, but my guess is normal people are probably even more like this than i am. also, for the purposes of analogy to the AI safety movement, presumably we want to select for people who are independent thinkers with especially well-thought-out takes, more than just normal people.
also, my guess is this particular interaction was probably extremely out of distribution from the perspective of those tabling. my guess is activists generally have a pretty polished pitch for most common situations, which includes a bunch of concrete ways of talking they've empirically found to cause people to engage, learned through years of RL against a general audience. but this polish doesn't generalize out of distribution when poked at in weird ways. my interlocutor even noted at some point that his conversations when tabling generally don't go the way ours went.
like, i could invest energy until i can actually refute flat earthers completely on the object level, and I’d almost certainly succeed. but this would be a huge waste of time.
I don’t think it would be that hard to refute flat earthers. One or two facts about how the sun travels, that the atmosphere bends light, and the fact that there are commercial flights crossing the poles seem like they would be sufficient to me. This probably won’t convince a flat earther, but I think you could fairly easily convince 95% of smart, unbiased third-party listeners (not that they exist).
You don’t have to go down every option in their argument tree; finding one argument they are completely unable to refute can be enough.
you mentioned sometimes people are just wrong in their arguments but think they are correct because they’ve repeated it many times. do you have examples of this from what they said?
This was really interesting, thanks for putting yourself in that situation and for writing it up.
I was curious what examples were of therapy speak in the conversation, if you’re down to elaborate