EA and rationality, at their core (at least from a predictive perspective), were about getting money and living forever. Other values were always secondary.
Materialism without any sort of deontological limits seems to converge on this. The ends justify the means. The grander the scale at play, the more convincing this argument is.
Which is exactly what we're worried about from AI, and why I don't think this is an AI-specific problem; it's just that we need to solve it asymptotically durably for the first time in history. I'm having trouble finding it right now, but there was a shortform somewhere (I thought it was by Vanessa Kosoy, but I don't see it on her page; it's also not Wentworth's) about how the thing that forces an agent to be a utility maximizer is having preferences that are (only?) defined far out in the future. IIRC it came out a few days to weeks after https://www.lesswrong.com/posts/KSguJeuyuKCMq7haq/is-vnm-agent-one-of-several-options-for-what-minds-can-grow.
To be clear, I am in fact saying this means I'm quite concerned about humans whose preferences can be modeled by simple utility functions, and I agree that money and living forever are two simple preferences such that, if either is your primary preference, you'll probably end up looking a lot like a simple utility maximizer.
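Since I can't find the shortform, here's a toy sketch of the intuition I have in mind, not its actual argument. The setup, the numbers, and the terminal utility function `u` are all made up for illustration: an agent whose preferences are defined only over the state at the end of the horizon just picks whichever plan maximizes one number over terminal outcomes, and nothing that happens along the way can count against a plan.

```python
import itertools

# Toy sketch (my own, not the shortform's argument): an agent whose
# preferences are defined only over the terminal state (wealth after
# 3 steps) behaves like an expected-utility maximizer over that state.

def u(final_wealth):
    # terminal utility; risk-neutral here purely for illustration
    return final_wealth

def expected_terminal_utility(policy):
    # exact expectation over every branch of the outcome tree
    branches = [(100.0, 1.0)]  # (wealth, probability)
    for choice in policy:
        nxt = []
        for wealth, p in branches:
            if choice == "safe":       # +1 for sure
                nxt.append((wealth + 1.0, p))
            else:                      # gamble: +60 or -50, 50/50
                nxt.append((wealth + 60.0, p * 0.5))
                nxt.append((wealth - 50.0, p * 0.5))
        branches = nxt
    return sum(p * u(w) for w, p in branches)

policies = itertools.product(["safe", "gamble"], repeat=3)
best = max(policies, key=expected_terminal_utility)
print(best)  # ('gamble', 'gamble', 'gamble')
```

An agent whose preferences also touch the path (say, it refuses any plan that risks dipping below some floor mid-trajectory) will deviate from this, and that preference needn't compress into any single u over terminal wealth. That's the sense, as I understand the claim, in which having preferences defined only far out in the future is what pushes you toward utility-maximizer shape.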