I think we’ll get some more scares from systems like AutoGPT. Watching an AI think to itself, in English, is going to be powerful. And when someone hooks one up to an untuned model and asks it to think about whether and how to take over the world, I think we’ll get another media event. For good reasons.
I think actually making such systems, while the core LLM is still too dumb to actually succeed at taking over the world, might be important.
I totally agree that it might be good to have such a fire alarm as soon as possible, and seeing how quickly people are making GPT-4 more and more powerful makes me think it’s only a matter of time.