While I think this is a broadly reasonable response, I’m curious what you think is able to provide better public justification than longtermism. These results seem to apply fairly broadly to any realistic EV-based justification for action given that partial observability is very much the rule.
I genuinely don’t know. I’m out of my depth trying to answer that sensibly. I think it’s sometimes easier to see the error in something than the solution.
All the same, I have a niggling fear that longtermist reasoning as practiced by MacAskill and others rests on a foundation with very serious problems. That’s not a minor concern when the future of the universe is being decided.
In contrast, I totally believe that EA efforts like distributing malaria nets are wonderful and sensible.
So in summary, not sure.
@David Johnston I wrote a piece of my own philosophical thoughts (it doesn’t answer the longtermist question, though). I suspect you might really dislike and disagree with the essay, but your comments are so good and sharp. If you are ever in the mood (and there is also a small chance it will resonate) and want to help me think more clearly: https://www.lesswrong.com/posts/zANA2aJzTQDJutguA/join-my-new-movement-for-the-post-ai-world