For the avoidance of doubt, I am not arguing that MIRI’s fears about unfriendly AI are right (nor that they aren’t); I am just saying why it’s somewhat credible for them to think that someone clever enough to make an AGI might still not appreciate the dangers.