Look at his answer for The Singularity:

The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we’re very unlucky. If it happens and it’s interested in us, all our plans go out the window. If it doesn’t happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea. The best approach to the singularity is to apply Pascal’s Wager — in reverse — and plan on the assumption that it ain’t going to happen, much less save us from ourselves.
He doesn’t even consider the possibility of trying to nudge it in a good direction. It’s either “plan on the assumption that it ain’t going to happen”, or sit around waiting for AIs to save us.
ETA: The “He” in your second paragraph is Kurzweil, I presume?
That quote could also be interpreted as saying that UFAI is far more likely than FAI.

Thinking that FAI is extremely difficult or unlikely isn’t obviously crazy, but Stross isn’t just saying “don’t bother trying FAI” but rather “don’t bother trying anything with the aim of making a good Singularity more likely”. The first sentence of his answer, which I neglected to quote, is “Forget it.”
Pretty much how I read it. He should acknowledge the attempts to build an FAI, but it seems like a reasonable, if pessimistic, opinion that FAI is too difficult to ever be pulled off successfully before strong AI in general.
Seems like a sensible default stance to me. Since humans exist, we know that a general intelligence can be built out of atoms, and since humans have many obvious flaws as physical computation systems, we know that any successful AGI is likely to end up at least weakly superhuman. There isn’t a similarly strong reason to assume an FAI can be built; the argument for one seems to be more along the lines that things are likely to go pretty weird and bad for humans if an AGI can be built but an FAI can’t.