Open sourcing a browser extension that shows when people are wrong on the internet

Link post

Example of OpenErrata nitting the Sequences

I just published OpenErrata, a browser extension that investigates the posts you read using your OpenAI API key, and underlines any factual claims that are sourceably incorrect. It then saves the results of the investigation so that whenever anybody else using the extension visits the post (with or without an API key), they get the corrections on their first visit.
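The share-first flow described above (reuse anyone's prior investigation of a post; only run a fresh one when nothing is cached and the reader has supplied a key) might be sketched like this. This is a minimal illustrative sketch, not OpenErrata's actual code — every name here is hypothetical:

```typescript
// Hypothetical sketch of OpenErrata's cache-first lookup, as described
// in the post. All types and names are illustrative, not the real source.

type Correction = { claim: string; correction: string; source: string };

interface ResultStore {
  get(url: string): Correction[] | undefined;
  put(url: string, results: Correction[]): void;
}

// Decide what to show for a visited post:
// 1. If anyone has already investigated this URL, reuse their results
//    (no API key needed).
// 2. Otherwise, investigate only if this reader supplied an API key,
//    then save the results for everyone who visits later.
function correctionsFor(
  url: string,
  store: ResultStore,
  apiKey: string | undefined,
  investigate: (url: string, key: string) => Correction[],
): Correction[] {
  const cached = store.get(url);
  if (cached !== undefined) return cached; // shared result: free for this reader
  if (apiKey === undefined) return [];     // no key and no prior investigation
  const fresh = investigate(url, apiKey);  // the slow, ~5-minute step
  store.put(url, fresh);                   // amortize the work across readers
  return fresh;
}
```

The point of the design is visible in the sketch: the expensive investigation runs at most once per post, so the per-reader cost drops toward zero as more people use the extension.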

I’ve noticed that while people can theoretically paste everything they’re reading into ChatGPT for verification:

  • No one has the time to do that

  • It duplicates work between readers

  • It takes around 5 minutes to get a really good sourced response for most mid-length posts

I figure most of LessWrong is reading the same stuff, and that if a good portion of the community begins using this or something like it, we can avoid these problems.

Here is OpenErrata at work on some LessWrong & Substack articles published within the last week. I was a little surprised by how high a percentage of the articles I read seem to have at least one or two errors, even given how conservative my prompt is. When I delete rows from the database and rerun, it often finds different (and valid) errors it didn’t find the first time:

OpenErrata highlighting an incorrect claim on Astral Codex Ten with a hover tooltip showing the correction and source
“Record Low Crime Rates Are Real, Not Just Reporting Bias Or Improved Medical Care”
Life at the Frontlines of Demographic Collapse
Be skeptical of milestone announcements by young AI startups
Did Claude 3 Opus align itself via gradient hacking? (Note: as pointed out by commenters, this correction is incorrect. Leaving it up as it still seems interesting as a statement about the models.)

The project is published under my company, but the entire thing is self-hostable and AGPLv3 licensed. I also made an API available so that providers can fetch the results for articles independently, run statistics on them, or embed them. Some future additions I & others could work on:

  • A website for ‘leaderboards’/‘loserboards’, viewing in-progress investigations, helpful-to-the-reader reputation mechanics, etc.

  • Reasoning for no-nit results.

  • A completely AI-driven appeal process, so that you can talk to the AI to point out either additional ways an article is wrong or reasons previous nits are incorrect, with the outcome reflected in the results. I think it should be possible to figure out how to make that adversarially robust as the tool gets better.

  • Support for other sites (NYT, Wikipedia, Reddit, Nitter, etc.). Right now it only works on LessWrong/Substack and X (sort of).

    • Better support for X/​Twitter; I’ve got some ideas for ways the investigator could actually access related tweets and sources, for example.

  • Support for comments.

I really enjoyed working on & using this and want to keep doing so — let me know if you like it or find it useful!