This may carry massive s-risk for comparatively little potential benefit. On many people's values, the future you describe may not be particularly good anyway, and there's an increased risk of something going wrong, because you'd be attempting a desperate effort with something you don't fully understand.
Ah, I forgot to add that this is a potential s-risk. Yeah.
Although I disagree that that future's value would be close to zero. My values tell me it would be at least a millionth as good as the optimal future, and at least a million times more valuable than a completely consciousness-less universe.