ML enthusiasts sometimes refer to autonomy dismissively, as something that will be solved incidentally by scaling up current models.
I’m definitely in this camp.
One common “trick” you can use to defeat current chatbots is to ask “what day is it today?”. Since chatbots are pretty much all built on static models, they get this wrong every time.
But the point is, it isn’t hard to make a chatbot that knows what day it is. Nor is it hard to make a chatbot that reads the news every morning. The hard part is making an AI that is truly intelligent. Adding autonomy is then a trivial and obvious modification.
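Just to illustrate how trivial the date fix is, here is a minimal sketch: inject the current date into the system prompt at request time. (The function name and prompt wording here are my own invention, not any particular chatbot's implementation, but real chat APIs accept a system message in essentially this way.)

```python
from datetime import date

def build_system_prompt() -> str:
    """Prepend the current date so a static model can answer 'what day is it today?'."""
    today = date.today().strftime("%A, %B %d, %Y")
    return f"Today's date is {today}. Use this date when answering time-sensitive questions."

# The returned string would be passed as the system message before the user's question.
print(build_system_prompt())
```

The model itself stays frozen; all the "knowledge" of today's date lives in a string built at request time, which is exactly why this kind of fix says nothing about intelligence.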
This reminds me a bit of Scott Aaronson’s post about “Toaster-Enhanced Turing Machines”. It’s true that there are things, like toasting bread, that a Turing machine cannot do. But bolting those features on doesn’t change the underlying model of computation in any significant way.