Some thoughts that taking this perspective triggers in me:
Ask culture is actually kind of a fantastical achievement in human history, given the degree to which humans are social animals and our minds are constantly processing social consequences. Getting people to just say what they’re thinking, without considering the impact of their words on other people’s feelings, how is that even possible?
If you consider it to be a rare and valuable achievement, a highly desirable but potentially fragile Schelling point or equilibrium (guys, if we leave level 0, we’ll sink into a quagmire of infinite levels of social metacognition and never be able to easily tell what someone really has in mind!), perhaps that makes some people’s behaviors more understandable, such as insisting that their words have no social consequences, or resisting any suggestion that they should consider other people’s feelings before they speak. (But they’re probably doing that subconsciously or by habit/indoctrination, not following this reasoning explicitly.)
I’m not sure what to do in light of all this. Even talking about it abstractly like I’m doing might destroy the shared illusion that is necessary to sustain a culture where people speak their minds honestly without consideration for social consequences. (But it’s probably ok here since the OP is already pushing away from it, and the door has already been opened by other posts/comments.)
I’m not super-wedded to ask culture—the considerations in the OP seem real, but it also seems to be neglecting the advantages of ask culture, and not asking why it came about in the first place. It feels like a potential Chesterton’s fence situation.
I’m not sure what to do in light of all this. Even talking about it abstractly like I’m doing might destroy the shared illusion that is necessary to sustain a culture where people speak their minds honestly without consideration for social consequences.
For what it’s worth, my policy around these sorts of things is roughly “That which can be destroyed by being accurately described, should be.”
More concretely, I think if you’re worried that something is so weak as to be destroyed by being named, then this is a sign that you should do something to make it stronger — for instance, celebrating it, or writing a blogpost that explains clearly and well why it’s important.
Alternatively, I’ve found that often my worry is unfounded, and that the thing was indeed something people care about and is stronger than I feared. And then talking about it just helps improve people’s maps and is good.
I think religion and the institutions built up around it (such as freedom of religion) are a fairly clear counterexample to this. They are in part a coordination technology built upon a shared illusion (e.g., that God exists) and safeguards against its “misuse” built up from centuries of experience. If you destroy the illusion at the wrong time (i.e. before better replacements are ready), you could cause a lot of damage at least in the short run, and possibly even in the long run given path dependence.
Oh okay. I don’t find this convincing; consistent with my position above, I’d bet that in the longer term we’d do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead. (I think it’s really embarrassing we don’t have better things in their place, especially after the industrial revolution.) I don’t think I can argue well for that position right now; I’ll need to think on it more (and maybe write a post on it when I’ve made some more progress on the reasoning).
(Obvious caveat that actually we only have like 0.5-3 more decades of being human, so the above ‘centuries’ isn’t realistic.)
“consistent with my position above I’d bet that in the longer term we’d do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead.”
Would you have pressed this button at every other point throughout history too? If not, when’s the earliest you would have pressed it?
For me the answer is “roughly the beginning of the 20th century?”
Like, seems to me that around that time humanity had enough of the pieces figured out to make a more naturalistic worldview work pretty well.
It’s kind of hard to specify what it would have meant to press that button some centuries earlier, since like, I think a non-trivial chunk of religion was people genuinely trying to figure out what reality is made out of, and what the cosmology of the world is, etc. Depending on the details of this specification I would have done it earlier.
If you get around to writing that post, please consider/address:
Theory of the second best—“The economists Richard Lipsey and Kelvin Lancaster showed in 1956 that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal.”—Generalizing from this, given that humans deviate from optimal rationality in all kinds of unavoidable ways, the “second-best” solution may well involve belief in some falsehoods.
Managing risks while trying to do good—We’re all very tempted to overlook risks while trying to do good, including (in this instance) destroying “that which can be destroyed by truth”.
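The theory-of-the-second-best point can be made concrete with a toy numeric sketch. Everything here is invented for illustration (the utility function, the interaction term, the constraint): the point is just that when two choices interact, pinning one variable away from its first-best value also moves the best achievable setting of the *other* variable.

```python
# Toy illustration of the theory of the second best (all numbers invented).
# The interaction term 0.5*x*y makes the best y depend on x.

def utility(x, y):
    return -(x - 2) ** 2 - (y - 3) ** 2 + 0.5 * x * y

def best_y(x):
    """Grid-search the best y in [0, 8] for a given (possibly constrained) x."""
    ys = [i / 100 for i in range(801)]
    return max(ys, key=lambda y: utility(x, y))

# Unconstrained, the optimum is near x ~ 2.93, y ~ 3.73.
# If some constraint pins x at 0, the best achievable y moves too:
print(best_y(2.93))  # close to the first-best y
print(best_y(0.0))   # the "second-best" y is different
```

By analogy: if human rationality is stuck away from its first-best value, the best achievable set of beliefs need not be the first-best (fully true) set.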
Before Christianity was discredited, it acted as a sort of shared lens through which the value of any proposed course of action could be evaluated. (I’m limiting my universe of discourse to Western society here.) I’m tempted to call such a lens an “ideological commitment” (where the “commitment” is a commitment to view everything that happens to you through the lens of the ideology—or at least a habit of doing so).
Committing to an ideology is one of the most powerful things a person can do to free himself from anxiety (because the commitment shifts his focus from his impotent, vulnerable self to something much less vulnerable and much longer-lived). Also, people who share a commitment to the same ideology tend to work together effectively: for example, a small fraction of an organization’s employees who share a commitment to the same ideology have many times taken the organization over by using loyalty to the ideology to decide whom to hire and whom to promote. They’ve also taken over whole countries in a few cases.
The trouble with reducing the prestige and the influence of Christianity even now in 2025 is that the ideologies that have rushed in to fill the void (in the availability of ways to reduce personal anxiety and of ways to coordinate large groups of people) have had IMHO much worse effects than Christianity.
You, Ben, tend to think that society should “eat the costs and spend the decades/centuries required to build better things” than Christianity. The huge problem with that is the extreme deadliness of one of the ideologies that have rushed in to fill the void caused by the discrediting of Christianity: namely, the one (usually referred to vaguely as “progress” or “innovation”) that views every personal, organizational and political decision through the lens of which decision best advances or accelerates science and technology.
In trying to get frontier AI research stopped or paused for a few decades, we are facing off against not only trillions of dollars in economic / profit incentives, but also an ideology, and ideologies (including older ideologies like Christianity) have proven to be fierce opponents in the past.
Reducing the prestige and influence of Christianity will tend to increase the prestige and influence of all the other ideologies, including the one, already much more popular than I would prefer, that we can expect to offer determined, sustained opposition to anyone trying to stop or long-pause AI.
The huge problem with that is the extreme deadliness of one of the ideologies that have rushed in to fill the void caused by the discrediting of Christianity: namely, the one (usually referred to vaguely as “progress” or “innovation”) that views every personal, organizational and political decision through the lens of which decision best advances or accelerates science and technology.
Is this really a widely held ideology? My impression is that the AI race is driven by greed much more than ideology.
I love this comment. I think persuading people towards atheism is good, because then there’s more demand for a new atheist religion. (I consider e/acc or even Yudkowsky-style longtermism a religion)
Ask culture is actually kind of a fantastical achievement in human history, given the degree to which humans are social animals and our minds are constantly processing social consequences. Getting people to just say what they’re thinking, without considering the impact of their words on other people’s feelings, how is that even possible?
This seems to be a function of predictability. I think ask culture developed (to some extent) in America due to the ‘melting pot’ nature of America. This meant that you couldn’t reliably predict how your ask would ‘echo’, and so you might as well just ask directly.
On the other hand, in somewhere like Japan, where you not only have a very homogeneous population but also a culture which specifically values conformity, it becomes possible to reliably predict something like 4+ echoes. And whatever is possible is what the culture tends toward, since you can improve your relative value to others by tracking more echoes in a Red Queen’s Race. (It seems like this can be stopped if the culture takes pride in being an Ask culture; maybe Israeli culture is a good example here, though it is still kind of a melting pot.)
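The homogeneity-enables-deeper-echoes claim can be sketched as a toy model. The accuracy numbers and threshold below are invented: the assumption is just that each extra level of “I know that you know…” multiplies in a per-level prediction accuracy, so only a culture where people predict each other reliably can sustain many levels before the guesses become worthless.

```python
# Toy model: how many "echo" levels stay worth tracking if each level
# compounds a per-level prediction accuracy? (All numbers invented.)

def sustainable_depth(per_level_accuracy, threshold=0.5):
    """Deepest level whose compounded prediction accuracy stays above threshold.
    Assumes per_level_accuracy < 1, so the loop always terminates."""
    depth, accuracy = 0, 1.0
    while accuracy * per_level_accuracy >= threshold:
        accuracy *= per_level_accuracy
        depth += 1
    return depth

print(sustainable_depth(0.95))  # homogeneous, conformity-valuing culture
print(sustainable_depth(0.60))  # melting pot: deep guesses fail fast
```

On these invented numbers the homogeneous culture can track an order of magnitude more levels, matching the intuition that whatever is possible is what the culture tends toward.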
You can see the same sort of dynamic play out in the urban vs rural divide, e.g. New Yorkers are ‘rude’ and ‘blunt’, while small towns are ‘friendly’ and ‘charming’… if you’re a predictable person to them, that is.
My guess is that the ideal is something like a default Ask culture with specific Guess culture contexts when it genuinely is worth the extra consideration. Maybe when commenting on effortposts, for example.
This reminds me of Social status part 1/2: negotiations over object-level preferences, particularly because of your comment that Japan might develop a standard of greater subtlety because they can predict each other better.

Among other points in the essay, they have a model of “pushiness” where people can be more direct/forceful in a negotiation (e.g. discussing where to eat) to try to take more control over the outcome, or more subtle/indirect to take less control.
They suggest that if two people are both trying to get more control they can end up escalating until they’re shouting at each other, but that it’s actually more common for two people to both be trying to get less control: the reputational penalty for being too domineering is often bigger than whatever’s at stake in the current negotiation, so people try to be a little more accommodating than necessary, to be “on the safe side”. This results in people spiraling into indirection until they can no longer understand each other.
They suggested that more homogenized cultures can spiral farther into indirection because people understand each other better, while more diverse cultures are forced to stop sooner because they have more misunderstandings, and so e.g. the melting-pot USA ends up being more blunt than Japan.
They also suggested that “ask culture” and “guess culture” can be thought of as different expectations about what point on the blunt/subtle scale is “normal”. The same words, spoken in ask culture, could be a bid for a small amount of control, but when spoken in guess culture, could be a bid for a large amount of control.
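The accommodation spiral they describe can be sketched as a toy simulation, with all parameters invented: each round, both people bid for slightly less control than the current “normal” amount, and the observed bids then become the new normal.

```python
# Toy simulation of the accommodation spiral (all parameters invented):
# bidding slightly below the perceived norm drags the norm down each round.

def politeness_spiral(rounds=10, norm=1.0, safety_margin=0.1):
    """Each round both people bid (norm - margin) to stay on the safe side;
    the observed bid then resets what counts as a 'normal' bid for control."""
    history = []
    for _ in range(rounds):
        bid = round(norm - safety_margin, 2)  # a bit more accommodating than needed
        norm = bid                            # the bid becomes the new norm
        history.append(bid)
    return history

print(politeness_spiral())  # directness erodes round after round
```

Run with fewer rounds, the same mechanism gives the “diverse cultures are forced to stop sooner because of misunderstandings” case, while a homogeneous culture can keep spiraling.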
I’m quite glad to be reminded of that essay in this context, since it provides a competing explanation of how ask/guess culture can be thought of as different amounts of a single thing, rather than two fundamentally different things. I’ll have to do some thinking about how these two models might complement or clash with each other, and how much I ought to believe each of them where they differ.
My guess is that the ideal is something like a default Ask culture with specific Guess culture contexts when it genuinely is worth the extra consideration.
IMO the ideal is a culture where everyone puts some reasonable effort into Guessing when feasible, but where Asking is also fully accepted.
One thing I didn’t have time for in the post proper is that ask culture (or something like it) is crucial for diplomacy—diplomatic cosmopolitan contexts require that everyone set aside their knee-jerk assumptions about what “everyone knows” or what X “obviously means,” etc. I think part of why it came about (/has almost certainly been reinvented thousands of times) is that people wanted to interact nondestructively with people whose cultural assumptions greatly differed from their own.