What’s more, I think no private company should be in a position to impose this kind of risk on every living human, and I support efforts to make sure that no company ever is.
I don’t see your name on the Statement on Superintelligence when I search for it. Assuming you didn’t sign it, why not? Do you disagree with it?
It seems like an effort to make sure that no company is in a position to impose this kind of risk on every living human:
We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.
(Several Anthropic, OpenAI, and Google DeepMind employees signed.)
I’ve also long thought that an entrepreneur could build a tool that lets people easily check whether virtually anything they read or see on the internet is true.
On LessWrong, if a reader thinks something in a post is false, they can highlight the sentence and Disagree-react to it. Then everyone else reading the post can see the highlighted sentence and who disagreed with it. This is great for epistemics.
I envision a system (it could be as simple as a browser extension) that lets users frictionlessly report their beliefs while reading any content online, noting when claims seem true, seem false, are definitely false, and so on. The system crowdsources all of this epistemic feedback, uses the data to estimate whether claims actually are true or false, and shares that insight with other users.
Then someone reading a news article or post that 100 or more other people have already read would no longer be left to their own devices to determine which parts are true.
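To make this concrete, here is a minimal sketch of what the core of such a system might look like, in Python. Everything here is hypothetical and illustrative: the `Feedback` record, the verdict encoding, and the Laplace-smoothed aggregation are just one simple way to turn raw crowd reactions into a truth estimate.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Feedback:
    user_id: str
    claim_id: str   # e.g. a hash of the highlighted sentence plus the page URL
    verdict: float  # 1.0 = "seems true", 0.0 = "seems false"

def estimate_truth(feedback: list[Feedback]) -> dict[str, float]:
    """Crowd estimate per claim: a Laplace-smoothed mean of verdicts,
    so a claim with little feedback stays near 0.5 (uncertain)."""
    by_claim: dict[str, list[float]] = defaultdict(list)
    for fb in feedback:
        by_claim[fb.claim_id].append(fb.verdict)
    return {
        claim: (sum(vs) + 1.0) / (len(vs) + 2.0)  # +1/+2 is the Laplace prior
        for claim, vs in by_claim.items()
    }
```

The smoothing is one design choice among many: a claim with only one or two reactions stays near 0.5 rather than jumping straight to a confident 0 or 1.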
Perhaps some users might not trust the main algorithm’s judgment and would prefer to choose a set of other users whom they trust to have good judgment, and have their personalized algorithm give these people’s epistemic feedback extra weight. Great, the system should have this feature.
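A sketch of that feature, reusing the hypothetical `Feedback` record above: verdicts from a user’s hand-picked trusted set simply carry extra weight in the same smoothed average. The `trust_boost` factor is an arbitrary illustrative choice, not a considered design decision.

```python
def personalized_estimate(
    feedback: list[Feedback],
    claim_id: str,
    trusted: set[str],
    trust_boost: float = 3.0,  # illustrative: trusted users count 3x
) -> float:
    """Same smoothed mean as before, but verdicts from the reader's
    hand-picked trusted set carry extra weight."""
    num, den = 1.0, 2.0  # start from the same Laplace prior
    for fb in feedback:
        if fb.claim_id != claim_id:
            continue
        weight = trust_boost if fb.user_id in trusted else 1.0
        num += weight * fb.verdict
        den += weight
    return num / den
```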
Perhaps some users mark something as false, and later other users come along and show that it is true. Then perhaps the first users should have an epistemic score that goes down as a consequence of their mistaken feedback.
Perhaps the system should track the quality of users’ judgment over time, to ascertain which users give reliable feedback and which have bad epistemics and largely just contribute noise.
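One simple way to implement this kind of track record, again purely as an illustrative sketch: when a claim is eventually resolved as true or false, score every verdict submitted on it by its accuracy, and fold that into an exponential moving average per user. The `decay` parameter and the neutral 0.5 starting score are assumptions, not settled design choices.

```python
def update_epistemic_scores(
    scores: dict[str, float],
    feedback: list[Feedback],
    resolutions: dict[str, float],  # claim_id -> resolved truth (1.0 or 0.0)
    decay: float = 0.9,             # older track record still counts, but less
) -> dict[str, float]:
    """Exponential moving average of each user's accuracy on claims
    that were later resolved. Higher means more reliable; mistaken
    verdicts pull the score down."""
    new_scores = dict(scores)
    for fb in feedback:
        truth = resolutions.get(fb.claim_id)
        if truth is None:
            continue  # this claim hasn't been resolved yet
        accuracy = 1.0 - abs(fb.verdict - truth)
        prev = new_scores.get(fb.user_id, 0.5)  # new users start neutral
        new_scores[fb.user_id] = decay * prev + (1.0 - decay) * accuracy
    return new_scores
```

These scores could then feed back into the aggregation as per-user weights, so reliable users count for more and noisy users for less.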
There are a lot of features that could be added to such a system. But the point is that far too many times I’ve read a news article or post on the broader internet, noticed a mistake, inaccuracy, or outright falsehood, and then moved on without sharing that insight with anyone, because there was no efficient way to do so.
Surely there are also many inaccuracies that I miss, and I’d benefit from others who did catch them flagging them in a way that I, as a non-expert on the claim, could simply trust.
This is good advice, but I really wish (and think it possible) that some competent entrepreneurs would make it much less needed by building epistemic tools that enhance anyone’s ability to discern what’s true out in the wild, where people commonly sneeze false information in your face.