One solution is to integrate a proof-of-humanity type ID. These are in many ways better than centralised government IDs, and it’s the kind of thing LessWrong might be able to take the lead on.
They sound plausible at a glance, but usually don’t explain the specific mechanism for why their experiment should be interesting, or how it fits into the LW conversation.
Please consider false positives here: we don’t want to waste our time, but we also don’t want to exclude novel work from people outside our network. What normally happens is that we fall back on older, more robust heuristics like “who we know”.
As an example, would you consider this post to fit into this category?
I ask because it’s real work, with an AI-assisted write-up, and I’m in the category of “AI is so much better than me that it would feel silly not to use it”. Also, I see very little engagement, likely because people are flooded with work and don’t have the time to evaluate it (including me).
(For your reading pleasure, I’ve not used AI editing here, so you can enjoy my full range of spelling mistakes!)
This doesn’t prevent humans from copy-pasting AI-generated output, or (in the future) prevent AIs from hiring humans to post their writings. The post you linked to has a section “Can we prevent renting IDs (eg. to sell votes)?”, but the ideas there do not seem to apply to the current use case.
True, it doesn’t, but if it limits posting to unique persons, then we only need to ban each person once, rather than an unlimited number of times. So it solves part of the problem, but not all of it.
And I would hope we judge posts on novel content, not on who wrote it (which we can somewhat measure already; here’s a repo that doesn’t work well but spells out the idea: https://github.com/wassname/detect_bs_text). That way a human needs to be responsible for what they post.
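The core idea behind that kind of detector can be sketched very simply: score text by how often it uses stock LLM filler phrases. This is a minimal illustration, not the linked repo’s actual method; the phrase list below is hypothetical and a real detector would use a much richer signal set.

```python
import re

# Hypothetical list of stock LLM filler phrases, for illustration only;
# the real detect_bs_text repo uses its own (more extensive) signals.
FILLER_PHRASES = [
    "delve into",
    "it is important to note",
    "in the ever-evolving landscape",
    "in conclusion",
]

def filler_score(text: str) -> float:
    """Return filler-phrase hits per 100 words: a crude LLM-style signal."""
    words = len(text.split())
    if words == 0:
        return 0.0
    lowered = text.lower()
    hits = sum(len(re.findall(re.escape(p), lowered)) for p in FILLER_PHRASES)
    return 100.0 * hits / words

# A filler-heavy sentence scores high; plain prose scores zero.
print(filler_score("It is important to note that we delve into the details."))
print(filler_score("We trained a probe on layer 12 activations."))
```

A phrase-count heuristic like this is easy to game, which is partly why such detectors “don’t work well” — it’s the idea, not a solution.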
Right now we likely treat email addresses as unique people, but people often have many email addresses and can easily get more.
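The weakness of email-as-identity is easy to demonstrate: many distinct address strings route to one inbox. A rough canonicalization sketch (collapsing the common aliasing tricks; a heuristic, not a guarantee) looks like this:

```python
def canonical_email(addr: str) -> str:
    """Collapse common aliasing tricks so 'one person, many addresses'
    maps to fewer identities. Heuristic only: it cannot catch a person
    simply registering accounts at different providers."""
    local, _, domain = addr.strip().lower().partition("@")
    # Strip "+tag" suffixes, which most providers ignore for delivery.
    local = local.split("+", 1)[0]
    # Gmail ignores dots in the local part and aliases googlemail.com.
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
        domain = "gmail.com"
    return f"{local}@{domain}"

# All three of these collapse to the same identity:
for a in ("j.doe+lw@gmail.com", "jdoe@gmail.com", "J.Doe@googlemail.com"):
    print(canonical_email(a))  # jdoe@gmail.com
```

Even with this, nothing stops one person from holding accounts at ten different providers, which is exactly the gap a proof-of-humanity ID is meant to close.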
I think your post didn’t get engagement because it contains a lot of fluffy LLM filler text that makes it hard to figure out what you actually did. I spent about 10 minutes reading and still understood basically nothing, so I got frustrated and had Claude summarize in a couple paragraphs. From the summary, the research sounds kind of useful. So it’s not “using AI” that’s the problem, you’re just using it wrong. Get straight to the point.
I basically don’t read academic-paper-styled posts, so I am not your target audience, but this alone probably filtered out a lot of readers regardless of the actual writing. It is pretty common for posts on the Alignment Forum to have no comments too.
That is very useful thanks, I’ll give it a rewrite in that vein.
That makes sense, probably the majority are in this camp.