But my expectation and experience is that if you put in something like “make sure to double-check everything” or “reason like [smart person]” or “put probabilities on claims” or “express yourself organically” or “don’t be afraid to critique my ideas”, this doesn’t actually lead to smarter/more creative/less sycophantic behavior. Instead, it leads to a painfully apparent LARP of being smarter/more creative/more objective, where the LLM blatantly shoehorns in these character traits in a way that doesn’t actually help.
I came here to say something like this. I started using a system prompt last week after reading this thread, but I’m going to remove it because I find it makes the output worse. For ChatGPT my system prompt seemingly had no effect, while Claude cared way too much about my system prompt, and now it says things like
I searched [website] and found [straightforwardly true claim]. However, we must be critical of these findings, because [shoehorned-in obviously-wrong criticism].
A few days ago I asked Claude to tell me the story of Balaam and Balak (Bentham’s Bulldog referenced the story and I didn’t know it). After telling the story, Claude said
I should note some uncertainties here: The talking donkey element tests credulity from a rationalist perspective
(It did not question the presence of God, angels, prophecy, or curses. Only the talking donkey.)
So, to be clear, Claude already has a system prompt and is already paying a lot of attention to it… and it seems to me you can always recalibrate your own system prompt until it stops producing the errors you describe.
Alternatively, to truly rid yourself of a system prompt you can use the Anthropic console or API, neither of which applies Anthropic’s default system prompt.
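For example, with the Python SDK you control the system prompt entirely; a minimal sketch, where the model alias and the prompt text are just placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model alias; use whichever model you like
    max_tokens=1024,
    # Pass your own system prompt here, or omit the `system` argument entirely
    # to run with no system prompt at all.
    system="Answer plainly. Flag genuine uncertainty; don't manufacture caveats.",
    messages=[
        {"role": "user", "content": "Tell me the story of Balaam and Balak."}
    ],
)
print(response.content[0].text)
```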