Bostrom talks about this in his book “Superintelligence” when he discusses the dangers of Oracle AI. It’s a valid concern, but I think we’re still a long way from that with GPT-like models.
I used to think a system trained only on text could never learn vision. So if it escaped onto the internet, it would be pretty limited in how it could interface with the outside world, since it couldn’t interpret streams from cameras. But then I realized that its training data almost certainly includes text on how to program a CNN. So in theory, a system trained only on text could build a CNN algorithm inside itself and use that to learn to interpret vision streams. Theoretically. A lot of stuff is theoretically possible with future AI, but how easy it is to realize in practice is a different story.
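To make the point concrete: the core operation a CNN would need really is describable entirely in text, and the description is short. Here’s a minimal sketch of a 2D convolution in plain Python/NumPy (an illustrative toy, not a full CNN; the edge-detector kernel and toy image are my own example, not anything from a specific training corpus):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2D image, summing elementwise products ("valid" convolution)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy image: dark left half, bright right half
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A simple vertical-edge-detecting kernel
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

edges = conv2d(image, kernel)  # responds strongly where brightness changes left-to-right
```

Of course, a working convolution is a long way from a trained vision system — you’d still need weights learned from actual image data — which is exactly the gap between “theoretically possible” and “easy in practice.”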