[inspired by this comment, but not entirely a response; still relevant]
Assume utilitarianism and altruism. You’re trying to help the world. There’s a large pit of suffering that you could throw your entire life into and still not fill. So you do as much as you can. You maximize your positive impact on the world.
But argmax requires a set of possible actions. What are these actions? “Be a superhuman who needs no overhead to turn work into donations” is not a valid action. Given what you can do, taking into account physical and psychological limitations, you maximize positive impact. And this requires cutting corners. If you try your hardest to squeeze every last cent out of your life and into altruism, this has significant negative effects on you, and thus on your altruism. You might burn out. You might lose effectiveness. So to optimize to the fullest, don’t optimize too hard.
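A toy model makes the shape of this concrete (the functional form is invented purely for illustration, not a claim about real burnout curves): let $e \in [0, 1]$ be the fraction of yourself you pour into altruism, and suppose burnout makes each unit of effort less effective, say with effectiveness $c(e) = 1 - e$. Then total impact is

$$I(e) = e \cdot c(e) = e(1 - e), \qquad I'(e) = 1 - 2e = 0 \implies e^* = \tfrac{1}{2}.$$

Under this curve, full-throttle effort ($e = 1$) yields zero impact, and the honest argmax lands strictly inside the feasible set. The point is the interior optimum, not the particular number.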
So the rational version of “optimize just for altruism” apparently destroys itself. To optimize for altruism, you have to do things that look selfish.
Coming back a few months later, what did I even mean by “cutting corners”?
Somebody doesn’t understand the difference between the thing and the appearance of the thing, and I can’t tell whether it’s my past self or the hypothetical EAs being discussed.