Hot take: to the extent that EAs and rationalists turn crazy, part of the problem is that their focus areas combine existential risk with very low discount rates for the future.
To explain more: utilitarianism is maybe part of the problem, but the issue is broader than that. Once you fundamentally believe that we will all die of something, and that your group can control the chances of extinction, that’s a fast road to craziness. Most of these existential risks probably wouldn’t materialize anyway, yet the belief gives you license to justify actions you’d normally never consider. This is especially dangerous once you add very low discount rates to the mix, because then preventing extinction becomes the foremost priority.
The tweet below makes a harsher version of my point. While I would probably soften it, I suspect that something like it is true, and I like it both because it states something that may be true and because it makes rather weak assumptions about what EAs/LWers value and what their ethical system looks like.
https://twitter.com/AGI_HeavenHell/status/1673359793078038528