“I hold that moral intuitions are nothing but learned prejudices. Historic examples from slavery to the divine right of kings to tortured confessions of witchcraft or Judaism to the subjugation of women to genocide all point to the fallibility of these ‘moral intuitions’. There is absolutely no sense to the claim that its conclusions are to be adopted before those of a reasoned argument.”—Alonzo Fyfe
Specifically, I am a moral realist. Furthermore, I reject the claim that there is some hard distinction between ‘is’ and ‘ought’. Loyal readers should be familiar with my claim that we should focus instead on the distinction between ‘is’ and ‘is not’. Morality either belongs in the realm of ‘is’ (somehow), or it belongs in the realm of ‘is not’.
However, this does not tell us where to find morality in the realm of ‘is’. At past conferences, I have found that the neural ethicists were looking in the wrong spot.
Let me illustrate with an example. A researcher takes a horde of subjects and performs brain scans on them while they think about planets and stars and take astronomy tests. He may learn a lot of interesting things. However, it would be a mistake to call this researcher an astronomer. Studying thoughts about stars and studying stars are not the same thing.
Neural ethicists seem to be unaware of this distinction. They study the brain while the subject thinks about moral concepts, works through some moral problem, or puts down an answer on some moral test, and they think they are studying morality. They are not. They are studying beliefs and other attitudes about morality.
I have skimmed the first two links, and based only on these, I think this theory is far too simplistic to be useful for us here at LW.
How do you compare the strength of two desires? How do you aggregate desires? Maybe Fyfe has answers, but I haven’t seen them. In the two links, I couldn’t even find any attempt to deal with popular corner cases such as animal rights and patient rights. And in a transhuman world, corner cases are the typical cases: constantly reprogrammed desires, splitting and merging minds, the ability to spawn millions of minds with specific desires and so on.
I don’t know; maybe this is a common problem with all current theories of ethics, and I only singled out this one because I’m totally unversed in the literature of ethics. Either way, the result is the same: this seems useless as a foundation for anything formalized and long-lasting (FAI).
Another (much longer) quote:
This Alonzo Fyfe must know of some other way to gather evidence in normative ethics. Please share it!
Non-technical version (warning: PDF)
Technical version
Alonzo’s blog
Indeed, I keep bugging him about this. :(
As for animal rights, this is what he says whenever anyone brings up the topic.