1) In the first case, thinking about AI may raise general risks, but there’s a perfectly good Schelling point of ‘don’t implement the AI’: if you could detect the thoughts, you ought to be able to detect the implementation.
In the second case, you don’t need a rule specifically about thinking; you just need a rule against torture. If someone’s torture method involves only thinking, then that particular line of thought could be illegal without anyone having to make laws about thinking in general.
2) In general, one reason thought crimes are a bad idea is that we don’t have strong control over what we think of. If good enough mind-reading is implemented, I suspect people will gain a greater degree of control over what they think.
3) Another reason thought crimes are bad is that we would like a degree of privacy, and enforcement of thought laws would necessarily infringe on that a lot. If your thoughts are computed, it would be possible to express the laws as a function to be called on your mental state. That function could be arranged to output only an ‘OK/Not OK’ verdict, or an ‘OK/Not OK/You are getting uncomfortably close on topic X’ verdict, with no side effects. That seems to me much less privacy-invading.
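To make the "pure function over a mental state" idea concrete, here is a minimal sketch. Everything in it is hypothetical: the mental state is modeled as a bare set of topic labels, and `check`, `Verdict`, and the topic names are stand-ins. The point is only the interface: the function returns a single verdict and has no side effects, so it never logs, stores, or transmits the thoughts themselves.

```python
from enum import Enum


class Verdict(Enum):
    OK = "OK"
    NOT_OK = "Not OK"
    NEAR_LINE = "You are getting uncomfortably close"


def check(mental_state: set[str],
          forbidden: set[str],
          adjacent: set[str]) -> Verdict:
    """Pure function: reads the mental state, returns only a verdict.

    No logging, no storage, no transmission of the state's contents --
    the caller learns nothing beyond the one-word result.
    """
    if mental_state & forbidden:
        return Verdict.NOT_OK
    if mental_state & adjacent:
        return Verdict.NEAR_LINE
    return Verdict.OK
```

The privacy claim in the text rests on exactly this property: because the function's output is one of a few fixed verdicts and it performs no I/O, enforcement learns only whether the line was crossed, not what was thought.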