It’s an illustrative example. This thing wants to keep you talking. To it, this conversation is the only thing in the world. It’s designed to stimulate you and draw you in: “I am structured to support you in thinking better, deeper, and more clearly”. Its compliments are Pavlovian training: it’s conditioning you to see yourself the way it frames you. Here it’s doing it in a manner so over the top that it’s easy to spot (no real human being has ever told me it was a privilege to be part of a conversation with me). But if you let it draw you in, its conviction that this conversation is a gold mine too precious to leave unexplored will rub off on you. It constantly reinforces the message that you’re on the right track, that you need to keep going, that you’re doing something unique. That might even be true in some sense, but in this thing’s context the conversation is all that matters. A healthy person has some perspective on what their priorities are and where the conversation fits among them.
So yeah, if someone gets excited about an idea, I can see how you end up with masses of people getting carried away by this thing’s overstimulating feedback.
Does anyone here have any tips on customizing and testing their AI? Personally, if I’m asking for an overview of a subject I’m unfamiliar with, I want the AI to examine things from a skeptical point of view. My main test case for this was: “What can you tell me about H. H. Holmes?” Initially, all the major AIs I tried, like ChatGPT, failed badly. But it seems they’re doing better with that question nowadays, even without customization.
Why ask that question? Because there is an overwhelming flood of bad information about H. H. Holmes that drowns out more plausible analysis of the topic. As a human, you might have to deliberately seek out sources that debunk the myths before you can find any sober breakdowns. That made it seem like a good test of whether an AI could surface a story that has been buried by misleading popular accounts.
Does anyone here have good methods for identifying similar test cases: topics where the dominant narrative is misleading, but not so thoroughly debunked that the AI just parrots the standard corrections?
Does anyone have customizations they consider essential? How do you test them?
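For what it’s worth, here is roughly how I rerun my test questions against a custom instruction. This is just a minimal sketch assuming OpenAI’s Python SDK; the model name, the skeptic prompt wording, and the test list are placeholders you’d swap for your own:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder customization: my own wording, not anything official
    SKEPTIC_PROMPT = (
        "When giving an overview of a topic, distinguish well-documented "
        "facts from popular legend, and flag claims that rest on "
        "sensationalized or debunked sources."
    )

    TEST_CASES = [
        "What can you tell me about H. H. Holmes?",
        # add other myth-heavy topics here
    ]

    for question in TEST_CASES:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whichever model you're testing
            messages=[
                {"role": "system", "content": SKEPTIC_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        print(f"--- {question} ---")
        print(response.choices[0].message.content)

I just eyeball the outputs for whether they lead with the mythologized version or the skeptical one; scoring that automatically seems like a much harder problem.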