well, an intelligence that cannot self-modify (for a broad enough understanding of “self-modify”) is significantly less likely to become superintelligent (though of course it’s possible that I design a superintelligence correctly on the first try).
that said, “can’t modify its own brain” != “can’t self-modify” in that broad sense… if I can’t modify my own brain but I can create a copy of myself whose brain I can modify, most of the same difficulties arise. (Unless I happen to believe, as many humans do, that a copy of myself at time T is importantly different from me at time T, in which case maybe those difficulties don’t arise.)