There is a phenomenon in which rationalists sometimes make predictions about the future, and they seem to completely forget their other belief that we’re heading toward a singularity (good or bad) relatively soon. It’s ubiquitous, and it kind of drives me insane. Consider these two tweets:
Timelines are really uncertain and you can always make predictions conditional on “no singularity”. Even if the singularity happens, you can always ask the superintelligence “hey, what would be the consequences of this particular intervention in a business-as-usual scenario?” and be vindicated.
Why would they spend ~30 characters in a tweet to be slightly more precise while making their point more alienating to normal people who, by and large, do not believe in a singularity and think people who do are faintly ridiculous? The incentives simply are not there.
And that’s assuming they think the singularity is imminent enough that their tweets won’t be borne out even beforehand. And assuming that they aren’t mostly just playing signaling games: both of these tweets read less as sober analysis to me, and more like in-group signaling.
Absolutely agreed. Wider public social norms are heavily against even mentioning any sort of major disruption due to AI in the near future (unless limited to specific jobs or copyright), and most people don’t even understand how to think about conditional predictions. Combining the two is just the sort of thing strange people like us do.
This is true, but then why not state “conditional on no singularity” if they intended that? I somehow don’t buy that that’s what they meant.
Because that’s a mouthful? And the default for an ordinary person (which is potentially most of their readers) is “no Singularity”, and the people expecting the Singularity can infer that it’s clearly about a no-Singularity branch.
I think the general population doesn’t know all that much about the singularity, so adding that to the post would just unnecessarily dilute it.
This is definitely baked in for many people (e.g. me, but also see the discussion here for example).
See also: population decline discourse
I think Richard has one-to-two-decade timelines?
Two decades don’t seem like enough to generate the effect he’s talking about. He might disagree though.