If we manage to build an AI that (1) possesses current LLM abilities, and (2) meets or exceeds the competence of a bright 12-year-old in other abilities, then I give us an 80+% chance of takeover by a superintelligence. After that, I expect the superintelligence to make all the important decisions about the future of humanity, in much the same way that humans make all the important decisions about the future of stray dogs. Euthanasia? Sterilization? A nice human rescue program and house-training? It won’t be our call. And since the AI will probably be pretty damn alien, we probably won’t like its call.
So I endorse: “If anyone builds it, then everyone dies. Or maybe, if you’re incredibly lucky, you might just get spayed and lose all control over your life.” I don’t see this as a big improvement.
As a general rule, I feel like building an incomprehensible alien superintelligence is a very dumb move, for reasons that should be painfully self-evident to anyone who gives it 10 seconds of genuine thought.