My notes:
ChatGPT will be far more useful when integrated with other existing systems
“we can inspect how ChatGPT interacts with other systems”—the developer doth protest too much, methinks; yes, the API calls will be the only part of the entire system that is somehow transparent to an ordinary human (well, an ordinary human with programming skills); and I expect that as people get used to ChatGPT, the API calls will be hidden again as a design decision
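To make the “transparent API calls” point concrete, here is a toy sketch (all names are hypothetical, not any real ChatGPT plugin interface): if every call the model makes to an external system goes through one dispatcher, a human with programming skills can log and inspect exactly what was requested—and hiding that log later is purely a design decision.

```python
import json

def call_tool(name, arguments, registry, log):
    """Dispatch a model-requested API call and record it for inspection."""
    log.append({"tool": name, "arguments": arguments})  # the transparent part
    return registry[name](**arguments)

# Hypothetical external system the model is integrated with.
registry = {"get_weather": lambda city: f"Sunny in {city}"}
log = []

result = call_tool("get_weather", {"city": "Prague"}, registry, log)
print(result)            # Sunny in Prague
print(json.dumps(log))   # the full audit trail of API calls
```

The audit trail exists only because the dispatcher chooses to keep it; delete the `log.append` line and the integration still works, just opaquely.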
yes, it is amazing how an intelligent machine lets you get things that require some thinking done automatically and quickly (in hindsight, “agile programming” was invented for ChatGPT, not for humans: human developers get annoyed when you fundamentally change their requirements every week, but the AI does not mind if you do it once per minute, haha)
we are getting better at predicting how AI capabilities will change when a model is scaled 100x or 1000x, so we can run experiments on smaller models and scale them up when needed
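The scaling idea can be sketched in a few lines (the numbers below are synthetic, chosen to follow a power law exactly; they are not real measurements): fit loss = a · N^b on small models, then extrapolate to a model 100x larger.

```python
import numpy as np

params = np.array([1e6, 1e7, 1e8])    # small-model sizes
loss   = np.array([4.0, 3.2, 2.56])   # synthetic losses (20% drop per 10x scale)

# Power laws are straight lines in log-log space, so a linear fit recovers them.
b, log_a = np.polyfit(np.log(params), np.log(loss), 1)
predict = lambda n: np.exp(log_a) * n ** b  # b is negative: loss falls with scale

print(predict(1e10))   # extrapolated loss at 100x the largest small model
```

Real scaling-law work fits such curves to measured losses across model sizes; the point here is only that a cheap fit on small models yields a prediction for the scaled-up one.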
his idea of safety seems to be “learning new capabilities step by step and providing feedback” and “if we go full speed ahead, at least we do not create a capability overhang, which would be even more dangerous”
Overall, a very good video! I am not really convinced by the safety part, but I am not sure what we could do about it anyway; the cat is already out of the bag.