To be clear: I am not saying that an AGI won't be dangerous, that an AGI won't be much more clever than us, or that it is not worth working on AGI safety. I am saying that I believe an AGI could not kill all humans, because doing so is not only a matter of being very intelligent.