This fits my intuition. Just as you need sophistication to provoke the internet into making you smarter, you need skill in prompt engineering to avoid being led into the dirt by GPT-3. You can of course limit the training data and steer the model to be more accurate with various fixes, but I suspect there is a trade-off: more tamed models will have less reach. That might still be a good trade-off, though: you start out with training wheels, and gradually move to wilder models as you learn how to prompt and provoke the model so it doesn't fool you. It's similar to how limiting yourself to just reading the newspaper is better than a naive internet search, yet someone skilled at finding information online will get a much clearer picture of the world than the newspaper provides.
I don’t have GPT-3 access, but I am interested in the outcome of the experiment Villiam proposes.