I was actually trying for a stronger claim—that AI (as a permanent solution that takes some time to develop) is better than institutional work or humanitarian aid
Have you considered diminishing returns? We have more resources available to us than are currently useful for pursuing AGI. Would you argue that we should let those resources lie fallow, rather than use them to mitigate ongoing problems during the period before our AGI efforts succeed, merely because that's not as worthy a goal as AGI?