I agree; many of those concerns seem fairly dominated by the question of how to get a well-aligned ASI, either in the sense that they’d be quite difficult to solve in reasonable timeframes, or in the sense that they’d be rendered moot. (Perhaps not all of them, though even in those cases I think the correct approach(es) to tackling them start out looking remarkably similar to the sorts of work you might do about AI risk if you had a lot more time than we seem to have right now.)