If you like analytic philosophy and mechanism design, consider checking out my blog.
B Jacobs
A Taijitu symbol for Moloch and Slack
[Aspiration-based designs] 1. Informal introduction
[Question] Why aren’t we testing general intelligence distribution?
Making a Crowdaction platform
2020 LessWrong Demographics Survey
2020 LessWrong Demographics Survey Results
A Toy Model of Hingeyness
Updated Hierarchy of Disagreement
[Question] Should we stop using the term ‘Rationalist’?
Meta-Preference Utilitarianism
[Meta] Three small suggestions for the LW-website
Sortition Model of Moral Uncertainty
Resolving moral uncertainty with randomization
[Question] By what metric do you judge a reference class?
Hierarchy of Evidence
I have a mnemonic device for checking whether a model is Gears-like or not. G-E-A-R-S:
Does a variable Generate Empirical Anticipations?
Can a variable be Rederived?
Is a variable hard to Substitute?
I was writing a post about how you can get more fuzzies (i.e., personal happiness) out of your altruism, but decided it would be better as a shortform. I know the general advice is to purchase your fuzzies and utilons separately, but if you're going to do altruism anyway, and there are ways to increase your happiness from doing so without sacrificing altruistic output, then I would argue you should try to increase that happiness. After all, if altruism makes you miserable you're less likely to do it in the future, and if it makes you happy you'll be more likely to do it in the future (and personal happiness is obviously good in general).
The most obvious way to do this is with conditioning, e.g. giving yourself a cookie or doing a hand-pump motion every time you donate. Since there's already a boatload of stuff written about conditioning, I won't expand on it further. I then wanted to adapt the tips from Lukeprog's The Science of Winning at Life to this particular topic, but I don't really have anything to add, so you can probably just read it and apply it to doing altruism.
The only purely original piece of advice I wanted to give is to diversify your altruistic output. I found out there have already been defenses of this concept, but I would like to give additional arguments. The primary one is that it will keep you personally emotionally engaged with different parts of the world. When you invest something (e.g. time or money) in a cause, you become more emotionally attached to that cause. So someone who only donates to malaria bednets will (on average) be less emotionally invested in deworming, even though these are both equally important projects. While I know on an intellectual level that donating 50 dollars to malaria bednets is better than donating 25 dollars, both will emotionally feel like a small drop in the ocean. When advancements in the cause get made, I get to feel fuzzies for having contributed, but crucially these won't be twice as warm if I donated twice as much. But if I donate to separate causes (e.g. bednets and deworming), then for every advancement or milestone I get to feel fuzzies from two different causes (so twice as much).
This lessens the chance of falling victim to the bandwagon effect (for a particular cause) or to the sunk-cost fallacy (if a cause you thought was effective turns out not to be very effective after all). It also keeps your worldview broad, instead of either becoming depressed when your singular cause doesn't advance or becoming ignorant of the world at large. So if you do diversify, every victory in the other causes creates more happiness for you, allowing you to align yourself much better with the world's needs.
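The diminishing-returns intuition above can be sketched with a toy model. Assuming (purely for illustration; the square-root curve and the dollar amounts are my own hypothetical choices, not anything measured) that fuzzies from a cause scale with the square root of the amount given to it:

```python
import math

def fuzzies(amounts):
    """Total fuzzies from a list of per-cause donation amounts.

    Hypothetical model: fuzzies per cause = sqrt(dollars given to it),
    so each extra dollar to the same cause feels less warm than the last.
    """
    return sum(math.sqrt(a) for a in amounts)

concentrated = fuzzies([50])       # $50 to a single cause
diversified = fuzzies([25, 25])    # $25 each to two causes

print(f"concentrated: {concentrated:.2f}")  # ~7.07
print(f"diversified:  {diversified:.2f}")   # ~10.00
```

Under any concave (diminishing-returns) fuzzies curve, splitting the same total across causes yields more total fuzzies, which is the emotional-engagement argument in miniature; none of this changes which allocation does the most good for the recipients.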
I tried a bit of a natural experiment to see if rationalists would be more negative towards an idea if it's called socialism than if it's called something else. I made two posts that are identical, except one calls it socialism right at the start, and one only reveals that I was talking about socialism at the very end (perhaps it would have been better if I hadn't revealed it at all). The former I posted to LW, the latter to the EA Forum.
I expected that the comments on LW would be more negative, that I would get more downvotes, and I gave it a 50% chance the mods wouldn't even promote it to the frontpage on LW (but would on the EA Forum).
The comments were more negative on LW. I did get more downvotes, but I also got more upvotes and more karma overall (12 karma from 19 votes on the EA Forum vs. 27 karma from 39 votes on LW). Posts tend to get more karma on LW, but the difference is big enough that I consider my prediction wrong. Lastly, the LW mods did end up promoting it to the frontpage, but it took a very long time (maybe they had a debate about it).
Overall: while rationalists are more negative towards socialist ideas that are labeled socialist, they aren't as negative as I expected, and I will update accordingly.
Not really. It’s so strange that the US journalistic code of ethics has very strict rules about revealing information from anonymous sources, but doesn’t seem to have any rules about revealing information from pseudonymous sources.
I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment and I couldn’t stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem. Please solve it, it keeps me up at night.