Yes, mostly agree. Unless the providers themselves log all responses and expose some API to check for LLM generation, we’re probably out of luck here, and incentives are strong to defect.
One thing I was thinking about (similar to what speedrunners do) is just making a screen recording of yourself actually writing out the content/post. This could probably be verified by an AI or a neutral third party. Something like a “proof of work” for writing your own content.
If it became common to demand and check proofs of (human) work, there would be a strong incentive to use AI to generate such proofs, which doesn’t seem very hard to do.
Grammarly has https://www.grammarly.com/authorship if you want to prove that you wrote something.
I don’t expect the people on LW that I read to intentionally lie about stuff.