Does anyone here have any tips on customizing and testing their AI? Personally, if I’m asking for an overview of a subject I’m unfamiliar with, I want the AI to examine things from a skeptical point of view. My main test case for this was: “What can you tell me about H. H. Holmes?” Initially, all the major AIs I tried, like ChatGPT, failed badly. But it seems they’re doing better with that question nowadays, even without customization.
Why ask that question? Because there is an overwhelming flood of bad information about H. H. Holmes that drowns out more plausible analysis of the topic. As a human, you might have to deliberately seek out sources debunking the myths before you can find any plausible breakdowns. That made it seem like a good test of whether an AI could surface stories that have been buried by misleading popular accounts.
Does anyone here have good methods for identifying similar test cases: topics where the dominant narrative is misleading, but not so thoroughly debunked that the AI just parrots standard corrections?
Does anyone have any customizations you consider essential? How do you test them?
Slightly different, but I tried some experiments deliberately misspelling celebrity names. Note how when I ask about “Miranda June” and then say “sorry, I got my months mixed up,” it apologizes that it knows nothing, yet correctly describes Miranda July as an “artist, filmmaker, writer and actress.”