PhD student studying the epigenomics of ageing with bioinformatic methods. Former president of Humanist Students, the national umbrella group for humanist groups at universities in the UK.
RichardJActon
Why rationalists should care (more) about free software
I thought the sections on identity and self-deception stood out as being handled better in this book than in other rationalist literature.
Yes, I’ve been looking for this post on idea inoculation and inferential distance and can’t find it; I just get an error. What happened to this content?
https://www.lesswrong.com/posts/aYX6s8SYuTNaM2jh3/idea-inoculation-inferential-distance
For anyone else finding this less than intuitive, sone3d is, I think, likely referring to, respectively:
Idea inoculation is a very useful concept, and definitely something to bear in mind when playing the ‘weak’ form of the double crux game.
Correct me if I’m wrong, but I have not noticed anyone else post something linking inferential distance with double cruxing, so maybe that is what I should have emphasised in the title.
You are correct of course. I was mostly envisioning scenarios where you have a very solid conclusion which you are attempting to convey to another party that you have good reason to believe is wrong about, or ignorant of, this conclusion. (I was also hoping for some mild comedic effect from an obvious answer.)
For the most part, if you are going into a conversation where you are attempting to impart knowledge, you are assuming that your knowledge is probably largely correct. One of the advantages of finding the crux, or ‘graft point’, at which you want to attach your belief network is that it usually forces you to lay out your belief structure fairly completely, which can reveal previously unnoticed or unarticulated flaws to both parties. An attentive ‘imparter’ should have a better chance of spotting mistakes in their reasoning if they have to lead others through it—hence the observation that if you want to really grok something you should teach it to someone else.
An Intuitive Explanation of Inferential Distance
I made a deck of cards featuring 104 biases from the Wikipedia page on cognitive biases, for playing this and related games. You can get the image files here:
https://github.com/RichardJActon/CognitiveBiasCards
(There is also a link to a printing service where the cards are preconfigured, so you can easily buy a deck if you want.)
The visuals on these cards were originally created by Eric Fernandez (of http://royalsocietyofaccountplanning.blogspot.co.uk/2010/04/new-study-guide-to-help-you-memorize.html).
This is a link to a resource I came across for people wishing to teach or learn Fermi calculation. It contains a problem set, which could be a useful asset, especially for meetup planners.
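To illustrate the kind of problem such a set might contain, here is a sketch of a classic Fermi estimate (piano tuners in Chicago). Every input figure below is a rough order-of-magnitude assumption I have chosen for illustration, not sourced data:

```python
# Classic Fermi estimate: roughly how many piano tuners work in Chicago?
# All inputs are rough order-of-magnitude assumptions, not real statistics.

population = 3_000_000           # people in Chicago (rough)
people_per_household = 2         # average household size
piano_ownership_rate = 1 / 20    # fraction of households with a piano
tunings_per_piano_per_year = 1   # a piano is tuned about once a year
tunings_per_day = 4              # tunings one tuner can do in a day
working_days_per_year = 250      # working days in a year

pianos = population / people_per_household * piano_ownership_rate
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_day * working_days_per_year

tuners = tunings_needed_per_year / tunings_per_tuner_per_year
print(round(tuners))  # order-of-magnitude answer: ~75
```

The point of the exercise is not the final number but the habit of decomposing an intractable question into quantities you can bound, which is exactly what a problem set like this trains.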
I’m not fundamentally opposed to exceptions in specific areas if there is sufficient reason, and if I found the case that AI is such an exception convincing I might carve one out for it. In most cases, however, and specifically for the mission of raising the sanity waterline so that we collectively make better decisions on things like prioritising x-risks, I would argue that a lack of free software, and related issues of technology governance, are currently a bottleneck in raising that waterline.