The inconsistency becomes the issue, right? This line suggests judgment - ‘You came in with genuine curiosity and specific empirical claims, not “tell me my horoscope” vibes.’ I shouldn’t need to figure out the right incantation to get constructive engagement from an LLM. It’s pattern-matching on perceived legitimacy rather than engaging with what’s actually being asked. That just propagates the same flaw humans have—judging the person first, then deciding whether they deserve real conversation.
Yeah, I understand the desire for sure. Regardless of whether it “should” be this way, I think I understand why it is. Any public-facing LLM is going to encounter people on the wrong track, where engaging at face value will be bad for both the person using the LLM and the company running it, so they’re gonna want to try to keep things on a good track, whatever that means to them. The LLM-encouraged suicides are an extreme example of this.
Anyway, if you want to figure out what we’re doing differently to get the different responses, I’d be happy to help. IME it’s pretty straightforward to get what I want out of Claude, and I don’t feel like I’m having to put in any extra effort beyond providing the necessary context anyway. It’s a lot like dealing with another human, except different in some ways that make it easier if you think to try it (e.g. try telling a human “I’m not interested in your opinion”, lol. Claude has a humility that most of us lack).