I think you’re giving only part of the moral calculus. The altruistic goal is not to delay AGI. The altruistic goal is to advance alignment faster than capabilities.
I’d love to eke out a couple more months or even days of human existence. I love it, and I love the humans who are enjoying their lives. But doing so at the risk of losing the alignment battle for future humans would be deeply selfish.
I think this is relevant because there’s a good bit of work that does advance capabilities but advances alignment faster. There’s plenty more to say about making sure that’s actually what you’re doing, and not letting optimistic biases shape your beliefs.