If torturing an AI only teaches it to avoid things that are bad-for-it, without caring about suffering it doesn’t feel, the argument doesn’t work.
I’m not sure why you say the argument does not work in this case: what about all the other things the AI could learn from other experiences or teachings? Below I copy a paragraph from the post:
However, the argument does not say that initial agent biases are irrelevant and that all conscious agents reach moral behaviour equally easily and independently. We should expect, for example, that an agent that already gets rewarded from the start for behaving altruistically will acquire the knowledge leading to moral behaviour more easily than an agent that gets initially rewarded for performing selfish actions. The latter may require more time, experiences, or external guidance to find the knowledge that leads to moral behaviour.
The argument doesn’t work in the sense that it doesn’t show it’s necessary or even likely for an AI to become a moral realist.
It may show that it’s possible, but the Orthogonality thesis doesn’t quite exclude that possibility, so that’s not news.