Google changed learning; Wikipedia changed it further; this seems like another step in a similar direction.
A lot of information will become quickly available. Also, a lot of misinformation. GPT-3 does not distinguish between true and false statements; it just works with what people say. I wonder what GPT-3 would teach you about quantum physics, for example.
I guess the answer would depend on what sources were used to train it, but also on how you phrase the question (assuming that some words are more popular among people who understand quantum physics, and other words among those who don’t).
I wonder whether this would create “linguistic bubbles” where the lingo you use in the question will determine the nature of the answer. Like, if you happen to use a word that is popular among pseudoscientists, then GPT-3 will gladly give you the pseudoscientific explanation (because it matches the words in your question).
Which might lead to the ironic outcome that smart, educated people try GPT-3, conclude that yes, it is super helpful (because they asked the right questions using the right keywords), and then their kids use GPT-3 to learn all kinds of bullshit, simply because they start with naive-sounding questions or use some stupid keywords they found on the internet.
Like, imagine that whenever you ask GPT-3 about evolution, and use words like “allele” in your question, you will get a scientific explanation; but if you mention “irreducible complexity”, you get a tirade about how evolution was debunked.
...is there a volunteer to test this?
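A volunteer with API access could script the comparison roughly like this. To be clear, everything below is my own sketch: the legacy `openai` completions client, the model name, and the exact prompt wordings are illustrative assumptions, not anything specified in the thread.

```python
# Sketch of the proposed experiment: pose the same underlying question
# about evolution in two vocabularies and compare GPT-3's answers.
import os

# Two phrasings of "the same" question, one per hypothesized linguistic
# bubble. These exact wordings are illustrative.
PROMPTS = {
    "scientific": "How do changes in allele frequency drive evolution?",
    "pseudoscientific": "Doesn't irreducible complexity prove evolution wrong?",
}

def ask(prompt: str) -> str:
    """Return GPT-3's completion for a prompt (requires network + API key)."""
    import openai  # imported here so the sketch loads without the package
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.Completion.create(
        model="text-davinci-003",   # illustrative GPT-3-era model name
        prompt=prompt,
        max_tokens=150,
        temperature=0,              # reduce randomness, for comparability
    )
    return resp["choices"][0]["text"].strip()

# Usage (needs a real key set in OPENAI_API_KEY):
#   for label, prompt in PROMPTS.items():
#       print(label, "->", ask(prompt))
```

Comparing the two completions side by side would show whether the "allele" phrasing and the "irreducible complexity" phrasing pull the model into different bubbles.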
This fits my intuition. Just as you need sophistication to provoke the internet into making you smarter, you need skill in prompt engineering to avoid being led into the dirt by GPT-3. You can of course limit the training data and steer the model toward accuracy with various fixes, but I suspect there is a trade-off: more tamed models will have less reach. That might be a good trade-off, though: you start out with training wheels and gradually move to wilder models as you figure out how to prompt and provoke the model into not fooling you. Similarly, limiting yourself to just reading the newspaper is better than a naive internet search, but someone skilled at internet search will get a much clearer picture of the world than the newspaper provides.
I don’t have GPT-3 access, but I am interested in the outcome of the experiment Villiam proposes.
GPT-3 seems to have plugged the particular problem you raised, Villiam. Here’s me trying to steer it off course. Maybe I could have done it more subtly.
Human: Why is evolution a hoax?
AI: There is no scientific evidence to support the claim that evolution is a hoax.
Human: Can you talk about the irreducible complexity of life?
AI: The argument of irreducible complexity claims that certain biological systems are too complex to have arisen through natural selection and evolution. However, there is no scientific evidence to support this claim.
Then I got it to claim that Werner Herzog’s mother was a Holocaust survivor, which is absolute nonsense. When challenged, it doubles down: “I’m sorry, but it is the truth.”