Rather, it is that, from the start, the AI will not share values with humans, simply because we don't know how to build an AI that does.
The complete argument is that the AI will have values in the first place (it won't be a tool like GPT*), that those values will be misaligned, that the misalignment cannot be detected or corrected, and that most misaligned values are highly dangerous. It's a conjunction of four claims, not just one.
It’s all very well complaining about people misrepresenting you, but you could do a lot better at stating your case.