“Narrative syncing” took a moment to click for me, but when it did it brought up connotations that I don’t see in the examples alone. Personally, the words that first came to mind were “Presupposing into existence”, and then, after getting a better idea of which facet of this you were intending to convey, “Coordination through presupposition”.
While it obviously can be problematic in the ways you describe, I wouldn’t view it as “a bad thing” or “a thing to be minimized”. It’s like.. well, telling someone what to do can be “bossy” and “controlling”, and maybe as a society we think we see too much of this failure mode, but sometimes commands really are called for and so too little willingness to command “Take cover!” when necessary can be just as bad.
Before getting into what I see as the proper role of this form of communication, I think it’s worth pointing out something relevant about the impression I got when meeting you forever ago, which I’d expect others get as well, and would be expected to lead to this kind of difficulty and this kind of explanation of the difficulty.
It’s a little hard to put into words, and not at all a bad thing, but it’s this sort of paradoxically “intimidating in reverse” thing. It’s this sort of “I care what you think. I will listen and update my models based on what you say” aura that provokes anxieties of “Wait a minute, my status isn’t that high here. This doesn’t make sense, and I’m afraid if I don’t denounce the status elevation I might fall less gracefully soon”—though without the verbal explanation, of course. But then, when you look at it, it’s *not* that you were holding other people above you, and there are no signals of “I will *believe* what you say” or “I see you as claiming relevant authority here”, just a lack of “threatened projection of rejection”. Like, there was going to be no “That’s dumb. You’re dumb for thinking that”, and no passive aggression in “Hm. Okay.”, just an honest attempt to take things for what they appear to be worth. It’s unusually respectful, and therefore jarring when people aren’t used to being given the opportunity to take that kind of responsibility.
I think this is a good thing, but if you lack an awareness of how it clashes with the expectations people are likely to have, it can be harder to notice and preempt the issues that come up when people get too intimidated by what you’re asking of them, which they are likely to flinch from. Your proposed fix addresses part of this because you’re at least saying the “We expect you to think for yourself” part explicitly rather than presupposing it onto them, but there are a couple pieces missing. One is that it doesn’t acknowledge the “scariness” of being expected to come up with one’s own perspectives and offer them to be criticized by very intelligent people who have thought about the subject matter more than you have. Your phrasing downplays it a bit (“no vetted-by-the-group answer to this” is almost like “no right answer here”) and that can help, but I suspect that it ends up sweeping some of the intimidation under the rug rather than integrating it.
The other bit is that it doesn’t really address the conceptual possibility that “You should go study ML” is actually the right answer here. This needs a little unpacking, I think.
Respect, including self-respect or lack thereof, is a big part of how we reason collectively. When someone makes an explicit argument (or otherwise makes a bid for pointing our attention in a certain direction), we cannot default to always engaging and trying to fully evaluate the argument on the object level. Before even beginning to do that, we have to decide whether, and to what extent, their claim is worth engaging with, and we do that based on a sense of how likely it is that this person’s thoughts will prove useful to engage with. “Respect” is a pretty good term for that valuation, and it is incredibly useful for communicating across inferential distances. It’s always necessary to *some* degree (or else discussions go the way political arguments go, even about trivial things), and larger amounts let you bridge much larger distances usefully, because things don’t have to be supported immediately relative to a vastly different perspective. When the homeless guy starts talking about the multiverse, you don’t think quite so hard about whether it could be true as you would if a respected physics professor were saying the same things. When someone who you can tell sees things you miss tells you that you’re in danger and to follow their instructions if you want to live, it can be viscerally unnerving, and you might find yourself motivated to follow precautions you don’t understand—and it might very well be the right thing to do.
Returning to Alec, he’s coming to *you*. Anna freakin’ Salamon. He’s asking you “What should I do? Tell me what I should do, because *I don’t know* what I should do”. In response one, you’re missing his presupposition that he belongs in a “follower” role, as relates to this question, and elevating to “peer” someone who doesn’t feel up to the job, without acknowledging his concerns or addressing them.
In response two, you’re accepting the role and feeling uneasy about it, presumably because you intuitively feel like that leadership role is appropriate there, regardless of whether you’ve put it to words.
In response three, you lead yourself out of a leadership role. This is nice because it actually addresses the issue somewhat, and is a potentially valid use of leadership, but it’s open to unintentional abuse of the same type that your unease with the second response warns of.
Returning to “narrative syncing”, I don’t see it so much as “syncing”, as that implies a sort of symmetry that doesn’t exist. It’s not “I’m over here, where are you? How do we best meet up?”. It’s “We’re meeting up *here*. This is where you will be, or you won’t be part of the group”. It’s a decision coming from someone who has the authority to decide.
So when’s that a good thing?
Well, put simply, when it’s coming from someone who actually has the authority to decide, and when the decision is a good one. Is the statement *true?*
“We don’t do that here” might be questionable. Do people there really not do it, or do you just frown at them when they do? Do you actually *know* that people will continue to meet your expectations of them, or is there a little discord that you’re “shoulding” at them? Is that a good rule in the first place?
It’s worth noticing that we do this all the time without noticing anything weird about it. What else is “My birthday party is this Saturday!”, if not syncing narratives around a decision that is stated as fact? But it’s *true*, so what’s the problem? Or internally, “You know, I *will* go to that party!”. They’re both decisions and predictions simultaneously, because that’s how decisions fundamentally work. As long as it’s an actual prediction and not a “shoulding”, it doesn’t suddenly become dishonest if the person predicting has some choice in the matter. Nor is there anything wrong with exercising choice in good directions.
So as applied to things like “What should I do for AI risk?”, where the person is to some degree asking to be coordinated, and telling you that they want your belief or your community’s belief because they don’t trust themselves to be able to do better on their own, do you have something worth coordinating them toward? Are you sure you don’t, given how strongly they believe they need the direction, and how much longer you’ve been thinking about this?
An answer which denies neither possibility might look like..
“ML. Computer science in general. AI safety orgs. Those are the legible options that most of us currently guess to be best for most, but there’s dissent and no one really knows. If you don’t know what else to do, start with computer science while working to develop your own inside views about what the right path is, and ditch my advice the moment you don’t believe it to be right for you. There’s plenty of room for new answers here, and finding them might be one of the more valuable things you could contribute, if you think you have some ideas”.