Less Wrong memes hold that person AIs won’t be sufficiently person-like, but they tend to assume that conclusion rather than argue for it. This leaves people unfamiliar with Less Wrong memes wondering why Less Wrongers are so confident that all AIs will necessarily act like autistic people with OCD, with no possibility at all of acting like normal, reasonable people.
This exposes a circularity in lesswrongian reasoning: if you think of an AI as fundamentally non-person-like, then there is a need to bolt on human values. If you think of it as human-like, then human-like values are more likely to be inherent, or acquired naturally through interaction.
I don’t see the circularity. “human” is a subset of “person”; there’s no reason an AI that is a “person” will have “human” values. Also, just thinking of the AI as being human-like doesn’t actually make it human-like.
I don’t see the relevance. Goertzel isn’t talking about building non-human persons.
If you design an AI on X-like principles, it will probably be X-like, unless something goes wrong.
Ah, I may not have gotten all the context.
If “something goes wrong” with high probability, it will probably not be X-like.