If you use LLMs via API, put your system prompt into the context window.
At least for the Anthropic API, this is not correct. There is a specific field for a system prompt on console.anthropic.com.
And if you use the SDK, the function that queries the model takes "system" as an optional parameter.
Querying Claude via the API has the advantage that the system prompt is fully customizable, whereas claude.ai queries always include a lengthy Anthropic-provided prompt in addition to any personal instructions you might write for the model.
Putting all that aside, I agree with what I take to be the general point of your post, which is that people should put more effort into writing good prompts.
Yeah I’m putting the console under “playground”, not “API”.
Right, but if you query the API directly with a Python script the query also has a “system” field, so my objection stands regardless.
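To make that concrete, here is a minimal sketch of what a raw Messages API request body can look like when built in a Python script. The model id and prompt strings are illustrative assumptions; the point is just that "system" sits at the top level of the request body, separate from the "messages" list.

```python
import json

# Sketch of a raw Messages API request body. Model id and prompt
# text are placeholder assumptions, not recommendations.
payload = {
    "model": "claude-sonnet-4-5",  # assumed model id
    "max_tokens": 1024,
    # "system" is its own top-level field, not a message role
    "system": "You are a concise technical editor.",
    "messages": [
        {"role": "user", "content": "Summarize this paragraph in one sentence."}
    ],
}

body = json.dumps(payload)
print(body)
```

A script like this would then POST `body` to the Messages endpoint with the usual API-key headers; the system prompt travels in that dedicated field rather than being pasted into the conversation turns.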
Yep, edited, thank you.