Eliezer writes, “You wouldn’t expect to derive ‘ought’ from the raw structure of the universe.”
Let me remind the reader that I have retreated from the position that “ought” can be derived from the laws of physics; I now try to derive “ought” from the laws of rationality. (Extremely abbreviated sample: since Occam’s razor applies to systems of value just as it applies to models of reality, and since nothing counts as evidence for a system of values, a proper system of values will tend to be simple.) It is not that I find the prospect of such a derivation particularly compelling, but rather that I find the terminal values (and derivations thereof) of most educated people particularly off-putting, and if I am going to be an effective critic of egalitarian and human-centered systems of values then I must propose a positive alternative.
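To make the Occam step concrete, here is a minimal sketch of the argument in code. The description lengths (100 and 10,000 bits) and the Solomonoff-style prior are my own illustrative assumptions, not figures from the argument above; the point is only that an unupdatable prior over descriptions favors the shorter one:

```python
from fractions import Fraction

def occam_prior(k_bits: int) -> Fraction:
    """Unnormalized Occam prior weight 2**-k for a k-bit description."""
    return Fraction(1, 2 ** k_bits)

# Hypothetical description lengths, chosen purely for illustration:
simple_k = 100        # a short, goal-like system of terminal values
humanlike_k = 10_000  # a detailed, human-centered system of values

# If nothing ever counts as evidence about values, the prior ratio
# between the two systems is never updated, so it is also the final ratio.
ratio = occam_prior(simple_k) / occam_prior(humanlike_k)
assert ratio == 2 ** (humanlike_k - simple_k)
print(f"simpler system favored by a factor of 2**{humanlike_k - simple_k}")
```

Under these made-up numbers the simpler system retains overwhelmingly more weight, and no observation can ever shift the balance back.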
A tentative hypothesis of mine as to why most smart, thoughtful people hold terminal values that I find quite off-putting is that social taboos and the possibility of ostracism from polite society weigh much more heavily on them than on me. Because I already occupy a distinctly marginal social position, and because I do not expect to live very much longer, it is easier for me to make public statements that might have an adverse effect on my reputation.
I believe that my search will lead to a system of values that adds up to normality, more or less, in the sense that it will imply that it would be unethical to, for example, run for office in a multiracial country on a platform that the country’s dark-skinned men are defiling the purity of its fair-skinned women—to throw out a course of action that everyone reading this will agree is unethical.
IMHO most people are much too ready to add new terminal values to the system of values that they hold. (Make sure you understand the distinction between a terminal value and a value that derives from other values.) People do not perceive those with extra terminal values as a danger or a menace. Consider, for example, the Jains of India, who hold that it is unethical to harm even the meanest living thing, including a bug in the soil; consequently, Jains often wear shoes that minimize the area of the shoe in contact with the ground. Do you perceive that as threatening? Probably not. If anything, you probably find it reassuring: if they go to all that trouble to avoid squishing bugs, then maybe they will be less likely to defraud or exploit you. But IMHO extra terminal values become a big menace when humans use them to plan for ultratechnologies and the far future.
An engineered intelligence’s system of terminal values should be much smaller and simpler than the systems currently held or professed by most humans. (In contrast, the plans of the engineered intelligence will be complicated, because they are the product of the interaction of a simple system of terminal values with a complicated model of reality.) In particular, merely describing or defining a human being with the precision an engineered intelligence requires takes more bits than the intelligence’s entire system of terminal values probably ought to contain. Consequently, that system should not, IMHO, even make reference to human beings or the volition of human beings. (Note that such an intelligence will probably acquire the ability to communicate with humans late in its development, when it is already smarter than any human.)
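A back-of-envelope check of that bit-count claim, under my own assumptions (the original gives no numbers): take the genome alone as a loose lower bound on the description length of “a human being”—about 3.1 billion base pairs at 2 bits per base, before counting brain state or environment—and compare it to a hypothetical few-hundred-bit budget for a simple system of terminal values:

```python
# Illustrative estimate only; the 500-bit value-system budget is a
# hypothetical figure, not anything claimed in the comment above.
GENOME_BASE_PAIRS = 3_100_000_000  # approximate size of the human genome
BITS_PER_BASE = 2                  # A/C/G/T: four symbols, 2 bits each

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE   # ~6.2e9 bits, a lower bound
value_system_bits = 500                           # hypothetical simple value system

print(f"genome alone: ~{genome_bits:.2e} bits")
print(f"defining 'human' costs ~{genome_bits / value_system_bits:.1e} times "
      f"the entire value-system budget")
```

Even this crude lower bound puts “define a human being” seven orders of magnitude over the hypothetical budget, which is the sense in which the value system cannot afford to reference humans directly.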