Let me make sure that I get this right: you look at the survey, measure how many people answered yes to both moral internalism and moral realism, and conclude that everyone who did not answer yes to both accepts the orthogonality thesis?
If yes, then I don’t think that’s a good approach, for three distinct reasons:
1. You’re assuming philosophers all have internally consistent positions
2. I think you merely have a one-way implication: int∧real⟹het, but not necessarily backwards. It seems possible to reject the orthogonality thesis (and thus accept heterogonality) without believing in both moral realism and moral internalism. But most importantly,
3. Many philosophers probably evaluated moral internalism with respect to humans. Like, I would claim that it’s almost universally true for humans, and I probably agree with moral realism, too, kind of. But I also believe the orthogonality thesis when it comes to AI.
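The one-way implication in point 2 can be checked mechanically. A minimal Python sketch (the variable names are my own, purely illustrative): it enumerates every truth assignment consistent with int∧real⟹het and shows that assignments with heterogonality true but int∧real false remain, so the converse does not follow.

```python
from itertools import product

def implies(p, q):
    """Material implication: p ⟹ q."""
    return (not p) or q

# All assignments (internalism, realism, heterogonality) consistent with
# the forward implication: internalism ∧ realism ⟹ heterogonality.
consistent = [
    (i, r, h)
    for i, r, h in product([False, True], repeat=3)
    if implies(i and r, h)
]

# Assignments where heterogonality holds WITHOUT internalism ∧ realism:
witnesses = [(i, r, h) for i, r, h in consistent if h and not (i and r)]
print(witnesses)  # non-empty, so accepting het does not require int ∧ real
```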
All your objections are correct and important, and I think the correct results may be anything from 50% to 80%. That said, I think there’s a reasonable argument that most heterogonalists would consider morality to be the set of motivations from “with enough intelligence, any possible agent would pursue only one set of motivations” (more mathematically, the utility function from “with enough intelligence, any possible agent would pursue only one utility function”).