One consideration re: the tone-warning LLMs: be aware that this means you're pseudo-publishing someone's comment before they meant to publish it. Not publishing in the discoverable sense, but logging it to a database somewhere (likely one controlled by the LLM provider) - and depending on the type of writing, this might affect people's willingness to write anything at all.
This is fixable by
a) hosting your own model, and double-checking that the code does not log incoming content in any way (a rough sketch of this follows below),
b) potentially, running that model on the client side (over time, it might shrink to a manageable size).
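As a minimal sketch of option (a), assuming the Hugging Face transformers library and a locally runnable classifier such as unitary/toxic-bert (the model choice, label name, and threshold here are illustrative, not a recommendation):

```python
# Sketch: self-hosted tone check. The model runs on your own hardware,
# so the draft never leaves the machine - and this code deliberately
# never writes the draft to disk, a log, or the network.
from transformers import pipeline

# unitary/toxic-bert is one example of a small toxicity classifier that
# can be downloaded once and then run fully offline; any comparable
# local model would do.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def tone_warning(draft: str, threshold: float = 0.5) -> bool:
    """Return True if the draft likely reads as hostile.

    The draft is passed to the local model only; nothing is logged.
    The "toxic" label is what this particular model reports - other
    models use different label names.
    """
    result = classifier(draft, truncation=True)[0]
    return result["label"] == "toxic" and result["score"] >= threshold

if __name__ == "__main__":
    print(tone_warning("Your argument makes no sense and neither do you."))
```

The "double-checking" part is the real work: auditing that the inference path has no telemetry (e.g., running with HF_HUB_OFFLINE=1 after the initial model download) and that nothing upstream of this function persists the text.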