OP said:

I use “nanobots” to mean “self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior”.
(And I believe they’re using “grey goo” the same way.) So I think you’re using a different definition of “grey goo” from OP, and that under OP’s definition, biological life is not an existence proof.
I think the question of “whether grey-goo-as-defined-by-OP is possible” is an interesting question and I’d be curious to know the answer for various reasons, even if it’s not super-central in the context of AI risk.
He excludes the only examples we have, which is fine for his purposes, though I’m skeptical it’s useful as a definition, especially since “some fundamental mechanistic difference” is an unclear and easily moved bar. That exclusion, however, doesn’t change how we should reason about whether something different is possible. That is, even if biological life is excluded from the class by definition, it remains highly relevant evidence for the question of whether anything in the class is possible at all.