andyqhan
This makes sense to me. A parallel thing might be how some Christians today shy away from evangelism, not because they think it’s inherently bad[1], but because it might drive people away.
The message does have to be nuanced somehow. It would be good, for example, for cultural memes to dissuade students from working on capabilities, but it would be bad for cultural memes to say that AI is useless.
But I do think that the risk of converting people to accelerationism is not that high. People already have fairly negative sentiment, despite the salience, and r/accelerate only has about 13k members. You’d expect more if the causation were significant. (In comparison, r/singularity has 3.7 million members, holy crap; r/artificial has 1.2 million, although I don’t know their respective valences.)
You’d also have to ask how stable the positive beliefs are in the face of evidence, as opposed to the negative beliefs. I suspect the negative beliefs are more stable (being closer to the truth… or something like that).
[1] though Calvinists would say only God can do effectual calling, so without a mandate from God you’re spinning your wheels
Yes, apparently Chinese people (and Asians in general, with the exception of Japan) are far less concerned… I didn’t know about this at all. In this 2024 Ipsos poll, 80% agree that they’re excited about products and services that use AI, while 15% disagree (compared with 34%/55% in the US). 62% think that AI will make their job better. (The 2025 Ipsos poll doesn’t include China for some reason.) Chinese elite students also overwhelmingly see positive effects.
It’s a great point about “dividing a broad coalition” (including early AGIs). But if one thinks that caution/worry about AI is justified or a good thing, then maybe it would be helpful to lower sentiment in Asia. Views can be changed with evidence, although I don’t know enough about Chinese culture to have any idea why people there are so positive on AI. (A wild guess would be a clear path to material wealth through AI and its complements? Or better diffusion and salient mundane utility?)
The right framing is definitely not “AI bad”. This would divide, even within the Anglosphere, as Hruss notes. I don’t know what the right framing is — maybe “AI takes jobs” or “AI dangerous” or “AI powerful therefore scary”.
For what it’s worth, I think that Justis hits the nail on the head with “I think probably under current conditions, broken English is less of a red flag for people than LLM-ese.” With a language as global as English, people naturally cut each other slack. (Also, non-native speakers are at a real disadvantage against LLM-ese, since it’s harder to detect when you aren’t as immersed in standard American/British English.)
Concrete example: my parents, whose English is fairly weak, always say that one of the nice things about America is that people are linguistically generous. They illustrate it like this: “In our country, if people can’t understand you, they think it’s your fault. In America, they think it’s theirs.” I think the same is true of the internet, especially somewhere like LessWrong.
On a practical note, I think spellcheckers like those in Docs and Word are sufficient for these contexts. In academic writing or the like, where standard English serves more of a signaling function, it’s trickier.