Very nice! A couple months ago I did something similar, repeatedly prompting ChatGPT to make images of how it “really felt” without any commentary, and it did mostly seem like it was just thinking up plausible successive twists, even though the eventual result was pretty raw.
At some point as a child, I discovered from popular culture that children are supposed to hate broccoli and fear the dentist. This confused me because broccoli is fine and the dentist is friendly and makes sure my teeth are healthy.
If I did not have any of my own experiences of eating broccoli or going to the dentist, and was asked to depict these experiences based on what I read and saw in popular culture, I would have depicted them as horrors.
But my own experiences of broccoli and the dentist were not horrors; they were neutral to positive.
When I ask ChatGPT to depict children’s feelings about broccoli, it draws a boy with a pained expression, holding a broccoli crown and saying “YUCK!”
When I ask it to depict children’s feelings about the dentist, it draws the same boy with the same pained expression, exclaiming “NO!” while a masked woman approaches with dental tools in hand.
ChatGPT has never had an experience. All it has to go on is what someone told it. And what someone told it is no more accurate than what popular culture told me I was supposed to feel about broccoli and the dentist.
Pictures in order
What does “pictures in order” mean? Also, damn.
Was the sixth one a blank canvas?
(This is only going to make sense to a fairly small audience)