For example, the leaders of AGI capabilities research would be smarter—which is bad in that they would make progress faster, but good in that they could consider arguments about x-risk better.
This mechanism seems weak to me. For example, I think the leaders of all AI companies are considerably smarter than me, but I am still doing a better job than they are of reasoning about x-risk. It seems unlikely that making them even smarter would help.
(All else equal, you’re more likely to arrive at correct positions if you’re smarter, but I think the effect is weak.)
Another example: it’s harder to give even a plausible justification for plunging into AGI if you already have a new wave of super-smart people making much faster scientific progress in general, e.g. curing diseases.
If enhanced humans could make scientific progress at the same rate as ASI, then ASI would also pose much less of an x-risk, because it couldn’t reliably outsmart humans. (Although it would still have the advantage of being able to replicate and self-modify.) Realistically, I do not think there is any level of genetic modification at which humans can match the pace of ASI.
None of that is necessarily to say that human intelligence enhancement is a bad idea; I just didn’t find the given reasons convincing.
There were just four reasons, right? Your three numbered items, plus “effectful wise action is more difficult than effectful unwise action, and requires more ideas / thought / reflection, relatively speaking; and because generally humans want to do good things”. I think that quotation was the strongest argument. As for numbered item #1, I don’t know why you believe it, but it doesn’t seem clearly false to me either.