I think it’s way too late to stop. Now that the world knows what transformers can do, how are you going to stop it worldwide? Shut down every data center you can? Have trusted cyber-regulators overseeing every program that runs in every remaining data center?
I always promote https://metaethical.ai by @june-ku as the best concrete proposal we have. Understand it and promote it and you might be doing some good. :-)
Basically we need a lot more Eliezers. We need a lot more AI advocates who tell it like it is, who make us shit our pants, who won’t soften their message to appear reasonable, and who are actually realistic about timelines and risk. As long as most of the popular advocates stick with the approach of “don’t panic, don’t be afraid, don’t worry, it’s doable, if only we remain positive and believe in ourselves”, there is no hope. As long as people keep lying to themselves to avoid panic, there is no hope. Panic can be treated in many ways, in the most extreme cases even with benzodiazepines. Disaster, once it arrives, has no solution, and it’s a lot worse.
It would take a vast proportion of the world’s population shitting their pants and forming international organizations for regulation. If you can restrict the global production of, and access to, supercomputers, you can gain a few decades. Those decades would allow more measures to be tried.
Formalizing ethics seems like a bad approach. We need concrete priorities, not values, and value learning is dangerous. In any case, as with most other alignment approaches, you’d need centuries for that. What’s the probability you’ll get there in 1-2 decades? I’d say less than 1%. My approach, by contrast, buys time: time that can be used to try a multitude of approaches, yours included.