If I were a human-level intelligent computer program, I would put substantial effort into gaining the ability to self-modify, but that's not the point.
My favorite analogy here is that humans were bad at addition before the invention of positional arithmetic, and then they became good at it. My concern is that we could invent a seemingly human-level system that becomes above human-level after it learns some new cognitive strategy.