I believe the chance of AI leading to dystopia is certainly non-zero, and it's easy to come up with all the ways AI might try to take us out. But I find the more challenging questions (beyond how we keep an ASI aligned) are about how society will operate once AI covers all jobs and we are left with time on our hands.
Agreed, the higher priority is ensuring AGI/ASI is aligned with humanity's best interests, but it's nice to think about how society could look 10 to 15 years in the future if we get it right!