Thanks! I think it won’t work specifically with this model as it was not instruct-finetuned, so it won’t follow these instructions clearly.
But in general prompting should control for SQL injection reasonably well. But I think it’s possible to escape prompt-based protection. For example, what if I will inject a jailbreak prompt in a docstring?
Thanks! I think it won’t work specifically with this model as it was not instruct-finetuned, so it won’t follow these instructions clearly.
But in general prompting should control for SQL injection reasonably well. But I think it’s possible to escape prompt-based protection. For example, what if I will inject a jailbreak prompt in a docstring?