Even danger that comes from superhumanly and robustly competent AIs may trace back, to a significant extent, to the idiosyncratically flawed AIs of jagged competence that helped create them. The flaws of these predecessor AIs shape the danger posed by their more capable successors, making those flaws a worthwhile point of intervention even when the AIs that carry them are not very dangerous directly. This parallels how humanity poses no direct danger to a superintelligence, except insofar as humanity, left unchecked, could create another superintelligence.