That’s not really accurate; any system operating today can usually be turned off as easily as executing a few commands in a terminal, or at worst, by cutting power to some servers. Self-replication is similarly limited and contained.
If someone today even made something as basic as a simple LLM + engine that copies itself to other machines and keeps spreading, I’d say that is in fact bad, albeit certainly not world-ending bad.
This has already been demoed: https://arxiv.org/abs/2412.12140 - Frontier AI systems have surpassed the self-replicating red line
This paper shows it can be done in principle, but in practice current systems are still not capable enough to do this at full scale on the internet. And I think that even if we don’t die directly from fully autonomous self-replication, self-improvement is only a few inches away from it, and that is a true catastrophic/existential risk.
Yeah, I’ve got no doubt it can be done, though as I said, I don’t think it’s terribly dangerous yet. But my point is that you can build lots of current systems perfectly well without running afoul of this particular red line; self-replicating entities within the larger context of an evolutionary algorithm are not the same as letting loose a smart virus that copies itself through the internet.