The interesting flip side to this, it seems to me, is that any model so easily kept from dangerous knowledge just isn't a very smart model, almost surprisingly so. After all, if you know all sorts of chemistry or biology, how could you be unable to generalize to the notion of explosives or bioweapons? A truly general intelligence would make that leap easily.