This seems like a bad idea. As observed on Reddit, most members of r/accelerate, the main accelerationist sub, joined out of annoyance at extremely uninformed anti-AI sentiment online (mostly about art). Although anti-AI sentiment could yield a mild benefit to AI safety, the risk of converting people to accelerationism is much worse. In addition, the prevailing anti-AI view of ASI/AGI is that it is made up, a story current AI companies push to make the public believe their products are better than they actually are, which would obviously be unhelpful to serious AI safety.
This makes sense to me. A parallel might be how some Christians today shy away from evangelism, not because they think it’s inherently bad[1], but because it might drive people away.
The message does have to be nuanced somehow. It would be good, for example, for cultural memes to dissuade students from working on capabilities, but it would be bad for those memes to say that AI is useless.
But I do think the risk of converting people to accelerationism is not that high. People already have fairly negative sentiment, despite AI’s salience, and r/accelerate only has about 13k members. You’d expect more if the causation were significant. (In comparison, r/singularity has 3.7 million members, holy crap; r/artificial has 1.2 million, although I don’t know their respective valences.)
You’d also have to ask how stable the positive beliefs are in the face of evidence, as opposed to the negative beliefs. I suspect the negative beliefs are more stable (being closer to the truth… or something like that).
[1] Though Calvinists would say only God can do effectual calling, so without a mandate from God you’re spinning your wheels.