I’m going to try to summarize your perspective before giving mine. It seems to me you suggest the following:
- I should acknowledge that it sounds crazy. Something like “I know it sounds crazy.”
- I should affirm my sincere belief. Not necessarily trying to convince, but just telling her “this is what I believe” and letting the sincerity of the belief be convincing on its own, rather than trying to persuade.
- I should be open to her perspective, investigating where any points of disagreement arise, what experiences she’s had that would make this believable or not believable, etc., without pressure.
Overall, you’re suggesting that rather than trying to use or improve the mechanistic, sci-fi-style persuasive arguments that convince people on LessWrong, I should take a more human and individual approach, investigating what sounds crazy to her and examining her perspective.
I agree, of course, with a lot of that. I suspect if you’d been on the line when I was actually talking on the phone to my mom about AI extinction risk, you’d have approved.
I wrote this article because I found that some of the things I brought up, like bioweapons, were met with initial skepticism, and that these parts of the AI extinction argument are not load-bearing. We don’t know how an AI would kill everyone, and some specifics (boiling the oceans) sound outlandish without actually adding anything to the argument.
I was able to manage her skepticism, but I think if I’d skipped talking about bioweapons, I would have triggered less skepticism in the first place. In fact, I think there’s probably some way I could have presented the AI extinction argument so that it didn’t sound crazy to her at all. If so, the amount of exploring her perspective I’d need to do would be dramatically reduced.
Rather than starting with something that sounds crazy, then assuring people it’s not and convincing them one by one, actually making it not sound crazy in the first place seems valuable.
Actually, no. I wouldn’t suggest you should do any of that. What I’m saying is purely descriptive.
This may sound like a nit, but I promise this is central to my point.
> I suspect if you’d been on the line when I was actually talking on the phone to my mom about AI extinction risk, you’d have approved.
I’d be surprised.
Not that I’d expect to disapprove, I just don’t really think it’s my place to do either. I tend to approach such things from a perspective of “Are you getting the results you want? If so, great. If not, let’s examine why”.
The fact that you’re making this post suggests “not”. I could reassure you that I don’t think you did terribly, and I don’t, but at the end of the day what’s my hypothetical approval worth when it won’t change the results?
> I think if I’d skipped talking about bioweapons, I would have triggered less skepticism in the first place. In fact, I think there’s probably some way I could have presented the AI extinction argument so that it didn’t sound crazy to her at all. If so, the amount of exploring her perspective I’d need to do would be dramatically reduced.
>
> Rather than starting with something that sounds crazy, then assuring people it’s not and convincing them one by one, actually making it not sound crazy in the first place seems valuable.
I get that this might sound crazy from where you stand, but I don’t actually see skepticism as a problem. I wouldn’t try to route around it, nor would I try to assure anyone of anything.
I don’t have to explore my mom’s perspective or assure her of anything when I say crazy-sounding stuff, because “He gets how this sounds, and has good reasons for his beliefs” is baked in. The reason I said I’d be curious to explore your mom’s perspective is because of the “sounds crazy” objection, and the sense that “I know, right?” won’t cut it. If I already understand her perspective well enough to navigate it without hiccup, then I don’t need to explore it any further. I’m not going to plow forward if I anticipate being dismissed, so when that happens I know I’ve erred and need to reorient to the unexpected data. That’s where the curiosity comes from.
The question of “How am I not coming off as obviously sane?” is much more important to me than avoiding stretching people’s worldviews. Because when I come off as obviously sane, I can get away with a hell of a lot of stretching, and almost trivially. And when I don’t, trying to route around that and convince people by “strategically withholding the beliefs I have which I don’t see as believable” strikes me as fighting the current. Or, to switch metaphors, it’s like fretting over the excess weight of your toothbrush because lighter cargo is always easier, before fully updating on the fact that there are pickup trucks available, so nothing needs to be backpacked in.
Projection onto “shoulds” is always a lossy process, and I hesitate to do it at all. But if I were to do a little of it, to make things more concretely actionable at the risk of incurring projection errors, it’d come out something like...
- Notice how incredibly far and easily one can stretch the worldviews of others, once they are motivated to follow rather than object. Just notice, and let it sink in.
- Notice how this scales. No one believes the Earth is round because they understand the arguments. Few people doubt it, because the visibly sane people are all on one side.
- Notice the “spurious” connection between epistemic rationality and effectiveness. Even when you’re sure you’re right, “Make sure I come off as unquestionably sane, or else wonder what I’m missing” forces epistemic hygiene and proper humility. Just in case you’re missing something. Which is always more likely than we like to think.
- Notice whether you anticipate being able to have the effectiveness you yearn for by adopting this mode of operation. If not, turn first to understanding exactly where it goes wrong, focusing on “How can I fix this?”, and notice whether your attention shifts toward justifying failure and dismissal, because that type of “answering why it’s not working” serves a very different purpose.
Things like “Acknowledge that I sound crazy when I sound crazy” and “Explore my mom’s perspective when I realize I don’t understand it well enough” don’t need to be micromanaged. They come naturally when we attend to the legitimacy of objections and the insufficiency of our own understanding, and I have no doubt that you do them already in the situations you recognize as calling for them. That’s why I wouldn’t “should” at that level.
You say you’re focused on epistemic rationality and humility and so on, but you also say I should be focused on coming across as sane, independent of the actual argument I’m putting forward. In the sense that I could convince someone the Earth was round or flat by simply coming across as someone who knows what I’m talking about, rather than by actually putting forward a good argument.
I’m comfortable with delaying certain arguments until later. Every schoolteacher does the same thing. But what you’re suggesting feels more like Dark Arts. You try to equate it with being more rational and questioning your own understanding and so on, but I’m not sure I buy that you’re not just advocating for deception.
No, definitely not dark arts. The exact opposite, actually, though that probably won’t come across in this comment.
Again, I’m going to have to point at some distinctions that might feel like nits but actually change the story completely. In this case, it’s the difference between focusing on “coming off as sane” (which I would not advocate) and “coming off as obviously sane”. Or, perhaps more clearly worded, “being visibly sane”.
If you focus on coming across as sane, then you are Goodharting on the appearance of sanity, whether or not you actually are sane. “Reality doesn’t matter, just [other] people’s perceptions” does indeed lead to dark arts, and it has a ceiling. This is politician shit, and comes off as politician shit to anyone who is more perceptive than you take them for.
At the same time, the wise alternative is not “Other people’s perceptions don’t matter, just reality”. Our perception can never be reality, so what this means in practice is “Other people’s perceptions don’t matter, just [my own perception of] reality”, while losing track of the conflation hiding in that presupposition. This conflation leads not only to shutting out error signals of less-than-perfect sanity, but also to blinding ourselves to the extent to which we’ve become blind. We aspiring rationalists tend to be much more prone to this failure mode, partly for reasons that are flattering to us, and partly for reasons that are less so. People often pick up on signs that we’re doing this subtle flinching, and it’s perfectly rational for them to discount our arguments in such cases, even if the arguments appear to be solid, because how are they to know they’re competent to judge? It’s not like people can’t be tricked with sophistry.
What I’m talking about is critically different from either. When it’s just obvious that you’re sane, people aren’t seduced into a perception that merely could be believable. It’s that the alternative visibly doesn’t fit. Like, it’s not true, and clearly so.
“Being visibly sane” requires both that you’re actually sane and that it’s visible to others. The focus is still on actually being sane, while taking care to notice that if you can’t get others to see you as sane, this is evidence against your sanity. Not “proof”, not “the only thing that matters”, but evidence, and something that will therefore soften your perceived certainty, if you allow your beliefs to update with the evidence.
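To put toy numbers on “evidence, not proof” (the figures here are invented purely for illustration): suppose I start out 90% confident that my reasoning is sound, and suppose a sound case comes across as visibly sane to a given listener 80% of the time, while an unsound one manages it only 30% of the time. Then, by Bayes’ rule, after failing to come across as sane:

P(sound | not seen as sane) = (0.9 × 0.2) / (0.9 × 0.2 + 0.1 × 0.7) = 0.18 / 0.25 = 0.72

The real likelihoods are anyone’s guess, but the direction is the point: a failed conversation should shave something noticeable off my confidence, without coming anywhere close to refuting me.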
It’s true that if you don’t provide receipts, this opens a window to deceive. It’s also true that there’s no rule saying that you have to abuse the trust people place in you. Do you trust yourself not to abuse it?
It’s a hell of a question, actually. The moment people start trusting you too much and putting their wellbeing at risk because they didn’t demand the receipts you expected them to demand, you tend to get a reality check about how sure you are of your own words and arguments. It’s a very sobering experience, and one that is worth working towards with appropriate caution.
It’s also an uncomfortable one. And if we’re not extremely careful we’re likely to flinch and fail to notice.