No singularity seems pretty unlikely to me (e.g. 10%), and I can also easily imagine AI taking a while (e.g. 20 years) while still having a singularity.
Separately, no singularity plausibly implies no hinge of history, and thus maybe implies that current work isn’t that important from a longtermist perspective.
Replying to the edited version: Well, humans are still at risk of being wiped out by a war with a successor species and/or a war with each other aided by a successor species, regardless of which of these models is true. Not dying as an individual or species is sort of a continuous, always-a-hinge-of-history sort of thing.
Separately, no singularity plausibly implies that we lose most of the value in the universe from the longtermist perspective.
I don’t really see why that would be the case. We can still go out and settle space, end disease, etc.; it just means that starkly superintelligent machines turn out not to be alien minds by nature of their superintelligence.
(Sorry, I edited my comment because it was originally very unclear/misleading/wrong; does the edited version make more sense?)