Indeed. I would think (as someone who knows nothing of AI beyond following LW for a few years) that the likely AI risk is something that doesn’t think like a human at all, rather than something that is so close to a human in its powers of understanding that it could understand a sentence well enough to misconstrue it in a manner that would be considered malicious in a human.
There’s also the point that you’d only get this dangerous AI that you can’t use for anything by bolting together a bunch of magical technologies to handicap something genuinely useful — much like you obtain an unusable outcome pump by attaching a fictional through-wall 3D scanner to it, in a situation where two dangling wires that you touch together after your mother is saved would have worked just fine.
The genie post is somewhat useful as a musing on philosophy and the inexactitude of words, but it is still ridiculous as a threat model.