Injections don’t deal with the model itself, it would be just like any other input prompt security protocol. Heck, I surely hope ChatGPT doesn’t execute code with root permission.
I didn’t know you could do that. Truly dangerous times we live in. I’m serious. More dangerous because of the hype. Hype means more unqualified participation.
Injections don’t deal with the model itself, it would be just like any other input prompt security protocol. Heck, I surely hope ChatGPT doesn’t execute code with root permission.
If someone is using a GPTv4 plugin to read and respond to their email, then a prompt injection would mean being able to read other emails
I didn’t know you could do that. Truly dangerous times we live in. I’m serious. More dangerous because of the hype. Hype means more unqualified participation.