Thanks for the reply. It’s the exact kind of engagement I expected from this site. I’m having trouble identifying where an AI should have input over human speech, whether in claim specificity or truth-seeking. At the moment, there is an option to “sharpen” your claim with Vicara, but I’m worried that a claim rewrite might reshape the original author’s thought too much. At that point the user is forced to reword the claim again to better match their idea. For now, a claim-specificity prompt is shown during scoring to nudge the user toward a more nuanced claim. I’m not sure that’s enough of a push to change behavior.
As for “truth,” I initially had a source reliability principle to guide the site explicitly away from misinformation. I first implemented it as a hierarchy of sources, with peer-reviewed sources scored as closer to truth than personal testimony. Since then, I have retracted that implementation because I didn’t want the site to bias academic speech over a layperson presenting an issue in their specific community, an issue that might not have been studied academically yet. I decided to keep user argument replies as a way to question the truth and validity of a published argument, like the community notes on other sites. I’m curious how you think the site could enforce “truth” on a larger scale.
It’s still in active development and I plan to build out more of the features on the legislative side. The goal is to tie highly validated arguments to legislation as a living record of public comment. Of course, this doesn’t happen without a bigger user base. At the moment, you’re absolutely right that the hero section presents a false promise. I’ll be sure to change that.
When it comes to claim specificity, you could look at Metaculus for what a well-specified claim looks like. It’s not simply one sentence.
Even when you have one quite specific claim, a position could often be “Yes for X being Y, not for X being Z”.
Most legislative systems already have a mechanism for submitting public comments. If you want to claim to be “in the room”, then interfacing with those actual bureaucratic public-comment mechanisms would be vital.
Your model seems to assume that the important thing when discussing a bill is to approve or disapprove of it based on a short summary. This is probably bad, because the details of bills actually matter.
I was once talking with a lobbyist who said that one of his most impactful public comments was something like: “If you pass that bill, you would actually completely outlaw industry X, which you probably did not intend, because if the bill passed, A, B, and C would happen.”