I feel that is a very good point. But most older people care more about their grandchildren's survival than about their own. AI risk is not just a longtermist concern; it threatens the vast majority of people alive today (on 3- to 20-year timelines).
I think the loss incurred by misaligned AI depends a lot on facts about the AI's goals. If it had goals resembling human goals, it might have a wonderful and complex life of its own, keep humans alive in zoos, and be kind to us. But people who want to slow down AI are more pessimistic: they think the misaligned AI will do something as unsatisfying as filling the universe with paperclips.