I have also imagined things like the browser extension. With trustworthy commenters, it could become a powerful tool against disinformation. But that’s passing the buck… where to find the trustworthy commenters? Without them, the extension could just as well become a tool to spread hoaxes.
You need a trustworthy community first. Then, you can add some mechanisms, such as removing users who are reported by too many other users as liars. But you won’t get that if you start with a majority of liars.
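A minimal sketch of what that reporting mechanism might look like (the threshold and the choice to count distinct reporters rather than raw reports are my own assumptions, not part of any existing design):

```python
from collections import defaultdict

REPORT_THRESHOLD = 10  # hypothetical cutoff, tuned per community

# reported user -> set of distinct users who reported them
reports: dict[str, set[str]] = defaultdict(set)

def report_liar(reporter: str, reported: str) -> None:
    """Record a report; counting distinct reporters blunts spam from one account."""
    if reporter != reported:
        reports[reported].add(reporter)

def should_remove(user: str) -> bool:
    return len(reports[user]) >= REPORT_THRESHOLD
```

And of course this inherits the problem above: if liars are the majority, they can mass-report honest users past the same threshold.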
It’s like the voting system on Less Wrong. It helps to keep the website sane. But it works because we already started with a mostly sane community. If you took the same software but started the website with a random population, it would probably not evolve into a new rationalist community. More likely, it would enforce some random opinion that got a majority at some moment, and use it to eliminate the opposing opinions. At best, the system can maintain a truth-seeking community, but it cannot create it. (And even if it could, the dark side would simply create their own browser extension, and insist that it is your extension that is biased.)
I could be wrong here, considering the success of things like Community Notes on Twitter. (At least I think it was a success; haven’t heard about it recently. Maybe people already found a way to defeat it.) Seems like you can extract something in a kinda-mostly-true direction from chaos, the “things that people agree on even if they disagree on most other things”.
Another solution could be to let every user specify whom they trust, and show the opinions of your friends more visibly than the opinions of randos. So you would get mostly good results if you import the list of rationalists; and everyone else, uhm, will use the tool to reinforce the bubble they are already in.
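As a toy sketch of that “friends first” display logic (the two-tier ordering and the names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Review:
    author: str
    verdict: str  # e.g. "accurate", "misleading", "hoax"

def rank_reviews(reviews: list[Review], trusted: set[str]) -> list[Review]:
    # Trusted authors (e.g. an imported list of rationalists) sort first;
    # sorted() is stable, so each tier keeps its original order.
    return sorted(reviews, key=lambda r: r.author not in trusted)
```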
Another problem is that there may be systematic differences in which users read (and review) which sources. For example, a fanatical anti-vaxxer might review thousands of medical articles on vaccination, while an actual doctor probably wouldn’t bother to do that. Also, the less you think when you write a review, the more reviews you can write in the same amount of time.
...so, I like the vision, but there are many difficult problems to solve. (And most entrepreneurs would probably be more interested in making a profit than in actually solving those problems. Just like Facebook doesn’t care about the bots and spammers.)
> But that’s passing the buck… where to find the trustworthy commenters?
My idea for this has been that rather than require that all users use and trust the extension’s single foxy aggregation / deference algorithm, the tool instead ought to give users the freedom to choose between different aggregation mechanisms, including being able to select which users to epistemically trust or not. In other words, it could almost be like an epistemic social network where users can choose whose judgment they respect and have their aggregation algorithm give special weight to those users (as well as to users those users say they respect the judgment of).
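A minimal sketch of that kind of personalized, trust-propagating aggregation (the one-hop propagation and the 1.0/0.5 weights are arbitrary assumptions on my part; a real system might iterate trust to a fixpoint, PageRank-style):

```python
def trust_weight(me: str, reviewer: str, trusts: dict[str, set[str]]) -> float:
    """1.0 if I trust the reviewer directly, 0.5 if someone I trust does, else 0.0."""
    if reviewer in trusts.get(me, set()):
        return 1.0
    if any(reviewer in trusts.get(friend, set()) for friend in trusts.get(me, set())):
        return 0.5
    return 0.0

def aggregate(me: str, votes: dict[str, int], trusts: dict[str, set[str]]) -> float:
    # votes: reviewer -> +1 ("claim is true") or -1 ("claim is false")
    return sum(trust_weight(me, who, trusts) * v for who, v in votes.items())

# Example: I trust alice; alice trusts bob; carol is a rando.
trusts = {"me": {"alice"}, "alice": {"bob"}}
votes = {"alice": +1, "bob": +1, "carol": -1}
print(aggregate("me", votes, trusts))  # 1.5 -> leans "true"; carol's vote carries no weight
```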
Perhaps this would lead to some users using the system to support their own tribalism or whatever and having their personalized aggregation algorithm spit out poor judgments, but I think it’d allow users like those on LW to use the tool and become more informed as a result.
> Another solution could be to let every user specify whom they trust, and show the opinions of your friends more visibly than the opinions of randos. So you would get mostly good results if you import the list of rationalists; and everyone else, uhm, will use the tool to reinforce the bubble they are already in.
Yeah, exactly.
I think it’d be a valuable tool despite the challenges you mentioned.
I think the main challenge would be getting enough people to give the tool/extension enough input epistemic data; making the outputs based on that data valuable enough to be informative to users seems, in my view, like the lesser challenge.
And to solve this problem, I imagine the developers would have to come up with creative ways to make giving the tool epistemic data fast and low-friction. (Though maybe not: is submitting Community Notes fast or low-friction? I don’t know, but perhaps not necessarily, and maybe some users do it anyway because they value the exposure and impact their note may have if approved.)
And perhaps also making sure that users provide the input data in a way that allows it to be aggregated by some algorithm. For example, it’s easy to aggregate submissions claiming a sentence is true or false, but what if a user just wants to flag a claim as misleading? Do you need a more creative way to capture that data if you want to communicate to other users the manner in which it is misleading, rather than just a “misleading” tag? (See the sketch below.)

I haven’t thought through these sorts of questions, but I strongly suspect there is some MVP version of the extension that I, at the very least, would value as an end user and would be happy to contribute to, even if only a few people I know would be seeing my data/notes when reading the same content as me after the fact. Though of course, the more people who use the tool and see the data, the more willing I’d be to contribute, assuming some small time cost of contributing data. I already spend time leaving comments on things to point out mistakes, and I imagine such a tool would just reduce the friction of providing such feedback.
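To make that concrete, here is one possible annotation schema (the field names and verdict vocabulary are my assumptions, not a settled design): keep the machine-aggregatable tag separate from the free-text explanation, so the tag can feed an aggregation algorithm while the explanation is simply shown to other readers.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    url: str        # page where the annotated claim appears
    quote: str      # the exact sentence being annotated
    verdict: str    # "true" | "false" | "misleading": easy to aggregate
    explanation: Optional[str] = None  # *how* it misleads: shown to readers, not aggregated

note = Annotation(
    url="https://example.com/article",
    quote="The study proves X.",
    verdict="misleading",
    explanation="The study was observational; it can suggest X but not prove it.",
)
```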