Timelines are really uncertain, and you can always make predictions conditional on “no singularity”. Even if the singularity happens, you can always ask the superintelligence “hey, what would the consequences of this particular intervention have been in a business-as-usual scenario?” and be vindicated.
This is true, but then why not state “conditional on no singularity” if they intended that? I somehow don’t buy that that’s what they meant.
Why would they spend ~30 characters in a tweet to be slightly more precise while making their point more alienating to normal people who, by and large, do not believe in a singularity and think people who do are faintly ridiculous? The incentives simply are not there.
And that’s assuming they think the singularity is imminent enough that their tweets won’t be borne out even beforehand. And assuming that they aren’t mostly just playing signaling games—both of these tweets read less as sober analysis to me, and more like in-group signaling.
Absolutely agreed. Wider public social norms are heavily against even mentioning any sort of major disruption due to AI in the near future (unless limited to specific jobs or copyright), and most people don’t even understand how to think about conditional predictions. Combining the two is just the sort of thing strange people like us do.
Because that’s a mouthful? And the default for an ordinary person (which is potentially most of their readers) is “no Singularity”, and the people expecting the Singularity can infer that it’s clearly about a no-Singularity branch.
I think the general population doesn’t know all that much about the singularity, so adding that caveat would just unnecessarily dilute the point.
This is definitely baked in for many people (me, for example, but also see the discussion here).