So let’s do it first, before the evil guys do it, but let’s do it well from the start!
The trouble is that no one knows how to do it well. No one knows how to keep an AI aligned once its capabilities start exceeding human capabilities, and if you believe experts like Eliezer Yudkowsky and Connor Leahy, it is very unlikely that anyone will figure it out before the lack of this knowledge causes human extinction or something equally dire.
It is only a slight exaggeration to say that the only thing keeping the current crop of AI systems from killing us all (or killing most of us and freezing some of us in case we end up having some use in the future) is simply that no AI or coalition of AIs so far is capable of doing it.
Actually, there is a good way to do it: shut down all AI research until humanity figures out alignment. That will probably require waiting for a generation of humans significantly smarter than the current one, which in turn will likely take at least a few centuries.
I just see it as a technological fact that it is entirely possible to build an AI that exerts economic dominance simply by assembling existing puzzle pieces. With just a little development effort, an AI will be able to run an entire business, make money, and then do things with that money. Such an AI can easily spiral into becoming autonomous, and then god knows what it will do with all the money (i.e., power) it will have.
Be realistic: shutting down all AI research will never happen. You can advocate for it as much as you want, but Pandora's box has been opened. We don't have time to wait until "humanity figures out alignment", because by then we'll all be enslaved by AGI. If we don't take the first step in building it, someone else will.
I’m not saying that I know how to do it well.