Pardon my noob mistake. I’ve edited the post with a correction.
Bruce Lewis
The power of a simple 3-way truth scale
Isn’t one’s opinion of whether the statement is false, debatable, or true the epistemic status of that statement according to that person?
Hi, sorry for the late reply. That iteration of HowTruthful only got 2 paying customers, both of whom were people I knew. I realized I needed to improve the aesthetics, UX, and SEO, plus move to a more modern tech stack, so I rewrote it.
Thanks for the detailed suggestions. I want to implement a few of them, but probably not most of them; my reasoning hinges on the fact that added complexity creates friction that discourages people from using it.
LessWrong’s goals overlap HowTruthful’s
The project I’ve been quietly working on now has its second iteration, better than ever: https://www.howtruthful.com/
I’m eager to talk about it, but won’t be able to during the workday today. Try it out. Comment and I’ll answer tonight.
The video linked from the link above can be watched at 2x speed if you only want to get the gist. If you do that and stop at the how-to-use part, it’s only 4 minutes. Watching the entire video at regular speed is 12 minutes, but since this is different from things you’ve already seen, that might be 12 minutes well spent.
The best path forward might be for @DPiepgrass to make a prototype or mockup, borrowing ideas from HowTruthful and then discussing from there.
What’s happening behind the scenes with my HowTruthful project
This comment is a level 1 lie! (I’m replying to it now with a level 5 lie.)
I don’t think 11 needs to be a community value. If someone comes in believing in the supernatural, in cryptozoology, UFOs, P = NP, or other ideas that haven’t been scientifically verified, who cares as long as they’re interested in changing their way of modeling the world to be more evidence-based?
Sympathize with them, but also with those affected by them.
I strongly upvoted this post because I believe epistemic empathy is important.
The word “irrational” has too many meanings, and I try to avoid it. And I try to direct criticism at arguments rather than people. But I do want to answer your final question as best I can. I’ll just phrase it as problems with arguments rather than people’s irrationality.
In my experience, the problem with arguments against COVID-19 vaccines is that they mainly consist of evidence that there’s risk involved in getting vaccinated. To usefully argue against getting vaccinated, one needs evidence not only that vaccine risks exist, but that they’re worse risks than those of remaining unvaccinated.
Similarly, arguments against masks are usually arguing against the wrong statement. They argue against “Masks always prevent transmission”, when to be useful, they should be arguing against “Masks reduce transmission”.
Good points are made in other comments about the significance of weakest bonds, but mostly I want to say that I like this post because it’s making a clear point with clear reasoning, and was very readable.
Are you really insulating yourself from reality, or from recency bias?
At the beginning of 2023 I thought Google was a good place to work. I changed my mind after receiving new evidence.
Agreed there’s a lot of work ahead in making it engaging.
I define “pro” as anything one might say in defense of a statement, and that includes decomposing it. It can also include disambiguating it. Or citing a source.
Thanks for the well-wishes. Only two paid users so far, but I’m getting very useful feedback and will have a second iteration with key improvements.
I liked the xkcd on empiricism: https://xkcd.com/2855/
My humble opinion is that teachers should make such decisions. From my own education I’ve come to think that the best education comes from enthusiastic teachers.
I had not heard of Community Notes. Interesting anti-bias technique: "notes require agreement between contributors who have sometimes disagreed in their past ratings". https://communitynotes.twitter.com/guide/en/about/introduction
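The quoted requirement can be illustrated with a minimal sketch. This is not Community Notes' actual ranking algorithm (which uses matrix factorization over rating data); it is just a toy check, with hypothetical names, of the idea that a note counts as cross-camp agreement only when raters who previously disagreed with each other both find it helpful.

```python
# Illustrative sketch only -- NOT the real Community Notes algorithm.
# A note shows "cross-camp agreement" if at least one pair of raters
# who disagreed on past notes both rated this note helpful.
from itertools import combinations

def has_cross_camp_agreement(helpful_raters, past_disagreements):
    """helpful_raters: set of rater IDs who rated the note helpful.
    past_disagreements: set of frozenset({a, b}) pairs of raters
    who have disagreed on earlier notes."""
    for a, b in combinations(helpful_raters, 2):
        if frozenset((a, b)) in past_disagreements:
            return True
    return False

# Hypothetical example: alice and bob disagreed before, so their joint
# approval of a note is stronger evidence than approval from one camp.
disagreements = {frozenset(("alice", "bob"))}
print(has_cross_camp_agreement({"alice", "bob", "carol"}, disagreements))  # True
print(has_cross_camp_agreement({"alice", "carol"}, disagreements))         # False
```

The design point is that agreement across a past disagreement is treated as a signal of broad helpfulness rather than of one faction's approval.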
Thanks! Yes, it definitely resembles the structure of an argument map.
Percent probabilities would be more Bayesian and fit certain questions better, but I wanted to show what’s possible without any scary math at all. Also, for a lot of questions, any percent probabilities people put down would be made up anyway.