But that’s passing the buck… where to find the trustworthy commenters?
My idea for this has been that rather than requiring all users to use and trust the extension’s single foxy aggregation/deference algorithm, the tool ought to give users the freedom to choose between different aggregation mechanisms, including selecting which users to epistemically trust or not. In other words, it could almost be an epistemic social network where users choose whose judgment they respect and have their aggregation algorithm give special weight to those users (as well as to the users those users say they respect).
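To make that concrete, here is a minimal sketch of what such personalized aggregation might look like, assuming a simple one-hop trust propagation. All names here are hypothetical, and a real implementation might use something closer to PageRank or EigenTrust for propagating trust through the graph:

```typescript
// Hypothetical sketch of personalized, trust-weighted aggregation.
// Verdict scores are assumed to range from -1 (false) to +1 (true).

type UserId = string;
type Verdict = { userId: UserId; score: number };

// Each user lists whom they directly trust.
type TrustGraph = Map<UserId, Set<UserId>>;

// Weight of each user from the reader's viewpoint: full weight for
// directly trusted users, a discounted weight for users that trusted
// users trust ("users those users say they respect").
function trustWeights(
  reader: UserId,
  graph: TrustGraph,
  transitiveDiscount = 0.5,
): Map<UserId, number> {
  const weights = new Map<UserId, number>();
  const direct = graph.get(reader) ?? new Set<UserId>();
  for (const u of direct) weights.set(u, 1.0);
  for (const u of direct) {
    for (const v of graph.get(u) ?? new Set<UserId>()) {
      // One hop of transitive trust; keep the highest weight seen so far.
      if ((weights.get(v) ?? 0) < transitiveDiscount) {
        weights.set(v, transitiveDiscount);
      }
    }
  }
  return weights;
}

// Aggregate verdicts on a claim as a weighted average from the reader's
// point of view; untrusted users still get a small default weight.
function aggregateVerdicts(
  reader: UserId,
  graph: TrustGraph,
  verdicts: Verdict[],
  defaultWeight = 0.1,
): number {
  const weights = trustWeights(reader, graph);
  let total = 0;
  let weightSum = 0;
  for (const { userId, score } of verdicts) {
    const w = weights.get(userId) ?? defaultWeight;
    total += w * score;
    weightSum += w;
  }
  return weightSum > 0 ? total / weightSum : 0;
}
```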
Perhaps this would lead some users to use the system to reinforce their own tribalism and have their personalized aggregation algorithm spit out poor judgments, but I think it’d allow users like those on LW to use the tool and become more informed as a result.
Another solution could be to let every user specify whom they trust, and show the opinions of your friends more prominently than the opinions of randos. So you would get mostly good results if you import the list of rationalists; and everyone else, uhm, will use the tool to reinforce the bubble they are already in.
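Reusing the hypothetical sketch above, importing such a list would just seed the reader’s trust graph:

```typescript
// Usage sketch (hypothetical data): alice imports a list of people whose
// judgment she trusts; dave is trusted by bob, so he gets a discounted weight.
const graph: TrustGraph = new Map([
  ["alice", new Set(["bob", "carol"])],
  ["bob", new Set(["dave"])],
]);

const verdicts: Verdict[] = [
  { userId: "bob", score: 1 },      // directly trusted: weight 1.0
  { userId: "dave", score: -1 },    // trusted-by-trusted: weight 0.5
  { userId: "mallory", score: -1 }, // rando: default weight 0.1
];

// (1.0 - 0.5 - 0.1) / (1.0 + 0.5 + 0.1) = 0.25: alice's view is
// dominated by the users she trusts, not by the rando.
console.log(aggregateVerdicts("alice", graph, verdicts));
```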
Yeah, exactly.
I think it’d be a valuable tool despite the challenges you mentioned.
I think the main challenge would be getting enough people to give the tool/extension enough input epistemic data; making the outputs based on that data valuable enough to inform users seems to me like the lesser challenge.
And to solve this problem, I imagine the developers would have to come up with creative ways to make giving the tool epistemic data fast and low-friction (though maybe not: is submitting Community Notes fast or low-friction? I don’t know, but perhaps not necessarily, and maybe some users do it anyway because they value the exposure and impact their note may have if approved).
And perhaps also making sure that users provide the input data in a form that some algorithm can aggregate. E.g., it’s easy to aggregate submissions claiming a sentence is true or false, but what if a user just wants to flag a claim as misleading? Do you need a more creative way to capture that data if you want to communicate to other users the manner in which it is misleading, rather than just a “misleading” tag?

I haven’t thought through these sorts of questions, but I strongly suspect there is some MVP version of the extension that I, at the very least, would value as an end user and would be happy to contribute to, even if only a few people I know would see my data/notes when reading the same content after the fact. Though of course, the more people who use the tool and see the data, the more willing I’d be to contribute, assuming some small time cost of contributing data. I already spend time leaving comments on things to point out mistakes, and I imagine such a tool would just reduce the friction of providing such feedback.
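For what it’s worth, here is one hypothetical way a submission schema could capture “misleading” with enough structure to aggregate. The field names and reason vocabulary are made up for illustration, not a settled design:

```typescript
// Hypothetical annotation schema: a closed vocabulary of verdict kinds and
// reasons keeps submissions machine-aggregable, while optional free text
// preserves the nuance a bare "misleading" tag would lose.

type ClaimVerdict =
  | { kind: "true" }
  | { kind: "false" }
  | {
      kind: "misleading";
      reason:
        | "missing-context"
        | "cherry-picked"
        | "outdated"
        | "misquoted"
        | "other";
      freeText?: string; // shown to readers, not used for aggregation
    };

interface Annotation {
  userId: string;
  url: string;       // page the annotated claim appears on
  quote: string;     // exact sentence or span being annotated
  verdict: ClaimVerdict;
  createdAt: string; // ISO 8601 timestamp
}
```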