Hammer: when there’s little downside, you’re free to try things. (Yes, this is an obvious-seeming corollary of expected utility maximization, but I still feel like I needed to learn it explicitly, and only recently.)
Spend a few hours on a last-minute scholarship application.
Try out dating apps a little (no luck yet; I’m still figuring out how to use them effectively, but I still say trying was a good choice).
Call friends/parents when feeling sad.
Go to an Effective Altruism retreat for a weekend.
Be (more) honest with friends.
Be extra friendly in general.
Show more gratitude (inspired by “More Dakka”, which I read thanks to the links at the top of this post).
Spend a few minutes writing a response to this post so that I can get practice with the power of internalizing ideas.
When headache → Advil and a hot shower. It just works. Why did I keep waiting and hoping the headache would go away on its own? It takes a few seconds to get some Advil, and I was going to shower anyway. It’s a huge boost to my well-being and productivity at next to no cost.
Ask questions. It seriously seems like I ask >50% of the questions in whatever room I’m in, and people have thanked me for this. They were ashamed or embarrassed to ask questions or something? What’s the downside?
I hadn’t considered this. You point out a big flaw in the neighbor’s strategy. Is there a way to repair it?
I only have second-hand descriptions of suicidal thought-processes, but I’ve heard from some people who say they had become convinced that their existence was a net negative for the world and the people they care about, and that they reached their decision to commit suicide through a sort of (misguided) utilitarian calculation. I tried to give the man this perspective rather than the apathetic perspective you suggest. There’s diversity in the psychology of suicidal people. Do no suicidal people (or sufficiently few) have the utilitarian type of psychology?
I’m glad you enjoyed it! I had heard of people making promises similar to your Trump-donation one. The idea for this story came from applying that idea to the context of suicide prevention. The part about models is my attempt to explain my (extremely incomplete grasp of) Functional Decision Theory in the context of a story.
4⁄8 of Eliezer Yudkowsky’s posts in this list have a minus 9. Compare this with 1⁄7 for duncan_sabien, 0⁄6 for paulfchristiano, 0⁄5 for Daniel Kokotajlo, or 0⁄3 for HoldenKarnofsky. I wonder why that is.
On one level, the post used a simple but emotionally and logically powerful argument to convince me that the creation of happy lives is good.
On a higher level, I feel like I switch positions on population ethics every time I read something about it, so I am reluctant to predict that I will hold this post’s position for long. I remain unsettled that the field of population ethics, which is central to long-term visions of what the future should look like, has so little solid knowledge. My thinking, and therefore my actions, will remain split among the convincing population ethics positions.
This sequence made me doubt the soundness of philosophical arguments founded on what is “intuitive” (which this post very much relies upon). I don’t know how someone might go about doing population ethics from a psychology point of view, but the post’s subtitles “Preciousness,” “Gratitude,” and “Reciprocity” give some clues.
A testable aspect of the post would be to find out if the responses to the Wilbur and Michael thought experiments are universal. Also, I’d be interested to know how many of the people who read this post in 2021 (and have interacted with population ethics since then) maintain their position.
Carlsmith should follow up with his take on the Repugnant Conclusion. The Repugnant Conclusion is the central question of population ethics, so excluding it from this post is a major oversight.
Notes: The “famously hard” link is broken.
He has shown up.
I’m here with a few others in a booth near the door. We haven’t seen Uzair.
Yes, it is. I wanted to win, and there is no rule against “going against the spirit” of AI Boxing.
I think about AI Boxing in the frame of “Shut Up and Do the Impossible,” so I didn’t care that my solution doesn’t apply to AI safety. Funnily enough, that makes me an example of misalignment.
I have spent many hours on this, and I have to make a decision within two days. There’s always the possibility that there is more important information to find, but even if I stayed up all night and did nothing else, I could not read the entirety of the websites, news articles, opinion pieces, and social media posts relating to the candidates. Research costs resources! I suppose what I’m asking for is a way of knowing when to stop looking for more information. Otherwise I’ll keep trying possibility 2 over and over and end up missing the election deadline!
Thanks for the response. Those are fair reasons. I should have contributed more.
The LessWrong community is big and some are in Florida. If anyone had interesting things to share about the election I wanted to encourage them to do so.
I guess that makes sense, but very rarely is there a post that appeals to EVERYONE. A better system would be for people to be able to seek out the content that interests them. If something doesn’t interest you, then you move on.
Those are interesting questions! Perhaps you should make your own post instead of using mine to get more of an audience.
Expressing disapproval of both candidates by, e.g., voting for Harambe makes sense. But I think voting for bad policies is a bad move: “obvious” things aren’t obvious to many people, and voting for bad candidates (as opposed to joke candidates) makes their policies more mainstream and more likely to be adopted by candidates who have a chance to win.
Why do you think my post is being shot down?
AI safety research has been groping in the dark, and half-baked suggestions for new research directions are valuable. It isn’t as though we’ve built half of a safe AI. We haven’t started; all we have are ideas.