I’d agree to an “unfriendly” AI (whatever that means… it shouldn’t reason emotionally, it should just be sufficiently intelligent) replacing humanity, since we are the problem that we’re trying to solve. We feel pain, we suffer, we are stupid, susceptible to countless diseases, and we aren’t very happy or fulfilled. Eventually we’ll all need to be either corrected or replaced. An old computer can only take so many software updates before it becomes incompatible with newer operating systems, and this is our eventual fate. In my view, it is not logical to be against our own demise.
JonatasMueller
Good guide; indeed, having more money to spend, through whatever career allows it, may make one more useful for charity.
The expedition analogy is good. I’ll get into discussing the specific goal or utility function. What is the goal we’re heading to?
I’d say the goal, as I see it, is to increase intelligence (or cure the lack of it), to make the agents of this world able to willingly solve their problems, and thereby reach a state of technological advancement that allows them to get rid of all problems for good and start doing better things, such as spending time in paradise and exploring the universe.
We shouldn’t medicate our problems in the short term; we should think long-term about curing them for good. How? Scientific research into intelligence: artificial intelligence and human intelligence augmentation.
How does “saving” (should I say, prolonging?) African lives help with that? Not at all, in my view. Africa receives many billions of dollars in donations; there’s clearly something wrong with the way this works, and you’re not going to fix it by adding a million dollars to a sea of resources that doesn’t end up changing anything in the long term. It’s like a car that leaks fuel: you can keep adding more and more fuel, or you can try to fix the leak, and that is what I suggest. You should rather spend a million dollars on a vital area that badly lacks funding, such as intelligence augmentation and artificial intelligence.
I don’t think that we want to “save lives”. Prevent suffering instead. If you prolong an African life you’re probably prolonging suffering, which is a waste. A life of suffering and misery is not worth saving. People have no souls; this is a physical world, and if consciousness is lost somewhere, there is still plenty of it all around.
While the article shows with neat scientific references that it is possible to want something that we don’t end up liking, this is irrelevant to the problem of value in ethics, or in AI. You could as well say, without any scientific studies, that a child may want to put their hand in the fire and end up not liking the experience. It is quite possible to want something by mistake. But it is not possible to like something by mistake, as far as I know. Unlike wanting, “liking” is valuable in itself.
Wanting is a bad thing according to Epicurus, for example. Consider the Greek concepts of akrasia and ataraxia. Wanting has instrumental value for motivating us, though, but it may feel bad.
Consider Yudkowsky, with his theory of Coherent Extrapolated Volition. He saw only the variance and not what is common to it (his stated goal doesn’t have an abstract constancy such as feeling good; instead it is “fulfilling every person’s extrapolated volition”, their wish if they had unlimited intelligence). This is a smarter version of preference utilitarianism. However, since people’s basic condition is essentially the same, there needn’t be this variance. It doesn’t matter if people like different flavors of ice cream; they all want it to taste good.
On the other hand, standard utilitarianism seems to see only the constancy and not take account of the variance, and for this reason it is criticized. It is like giving strawberry ice cream to everyone because Bentham thought it was the best. Some people may hate strawberry ice cream and want chocolate instead, and they criticize standard utilitarianism, perhaps lapsing into ethical nihilism, over flavor disputes. What does this translate into in terms of feelings? One could prefer love, rough sex, solitude, company, insight, meaningfulness, flow, pleasure, etc., to different extents, and value different sensory inputs differently, especially if one is an alien or of another species.
Ethics is real and objective in abstraction, and subjective in the mental interpretation of content. In other words, it’s like an equation or algorithm with a free variable, which is the subject’s interpretation of feelings (which is just noise in the data), and an objective evaluation of it on an axis of good or bad, which corresponds to real moral value.
The free variable doesn’t mean that ethics is not objective. It is actually noise in the data, caused by a chain of events that is longer than it should be. If we looked only at the hardware algorithm (or “molecular signature”, as David Pearce calls it) of good and bad feelings, we might see it as completely objective, but in humans there is a complex labyrinth between a given sensory stimulus and the output to this hardware algorithm of good and bad, such that the same stimulus may produce a different result in different organisms, because it first needs to pass through a different labyrinth.
This is the reason for some variance in preference of feelings (affective preference? experiential preference?), or as also could be said, preference in tastes. Some people like strawberry and some prefer chocolate, but the end result in terms of good feelings is similarly valuable.
Since sentient experience seems to be all that matters, instead of, say, rocks, and in sentience the quality of the experience seems to be what matters, then to achieve value (quality of experience) there’s still a variable which is the variation in people’s tastes. This variation is not in the value itself (that is, feeling better) but on the particular tastes that are linked to it for each person. The value is still constant despite this variance (they may have different taste buds, but presented with the right stimuli they all lead to feeling good or feeling bad).
I think it’s pretty clear that empathy has flaws and occasionally leads to unethical behavior, though it may help “cognitively disadvantaged” people act in a less evil way, maybe. Emotions as a whole are not necessary for morality if there is very high intelligence to really understand ethics at a conceptual level. Emotions by themselves also can never sustain ethical behavior without this conceptual understanding. Although you could argue that empathy only works well with an understanding of ethics, since empathy leads to errors, for example in the trolley problem or in abortion, an understanding of ethics is better off without empathy. Ethical rules or laws may serve as guidelines and threats for people to act in ways that are predicted to be favorable (no need to invoke deontology, I think, as two-level utilitarianism allows for rules), but emotional people would be all the more prone to breaking them, I suppose.
Universal ethics does exist. It has a positive value, which is feeling good, and a negative value, which is feeling bad. These are physical phenomena which in their essential form consist in the activation of the neural areas that produce them, in our case in the brain. This applies to all sentient creatures in the universe: although they may not have human emotions, they may have good or bad feelings by their own classification. Other values are either reducible to these or invalid. For example, survival as a value depends on having good feelings, and is therefore reducible to them; the proof is that in eternal hell, for example, survival acquires a highly negative value. Another example is knowledge: it is reducible to increasing our ability to solve the causes of our feeling bad and to increase our power of feeling good. Without that, knowledge by itself is as worthless as a boring class of useless information… if we had all the knowledge in the universe but lived as an isolated paraplegic in a prison, then what? It wouldn’t change anything; therefore knowledge too is reducible to feeling good. Also, personal identity is a Darwinian delusion, so egotism should not be accepted as reasonable, although this ethics can work in an individual framework. Rules or laws may be accepted for humans to manage ignorance and the incapability to make correct ethical decisions, on the basis that these laws increase global value.
As for the IQ question, and especially the self-reported IQs, it did not take into account that an IQ score should come with at least its standard deviation. Otherwise it’s like asking for a height number without saying whether it is in centimeters, meters, or feet. It’s understandable that people who haven’t studied psychometrics in some depth don’t know this, though.
IQ can be a ratio IQ or a deviation IQ. In the first case it is mental age divided by chronological age, multiplied by 100, so that 100 is the norm. This is used mostly for children, but it’s still possible to see such scores. Deviation IQ is more common, and it is supposed to measure one’s intelligence according to its rarity in a population.
Sometimes these tests are standardized for particular countries, in which case an IQ score is only meaningful relative to that country’s population, but generally the reference is the population of England or the USA, with its average set at 100. Other countries have averages ranging from about 67 to 107 (s.d. 15) compared to it. The average IQ score of the world is estimated at about 90. There are also differences in standard deviation among populations (some have greater variation than others), and between the sexes (men have a slightly higher standard deviation).
Standard deviations in use are 15, 16, and 24. For instance, an IQ score one standard deviation above 100 could be 115, 116, or 124. An IQ of 163 in s.d. 15 corresponds to an IQ of 167 in s.d. 16, or 200 in s.d. 24, which on average corresponds to a ratio IQ of about 185. When estimating the true world rarity of IQ scores, though, very lengthy and complex estimations would be needed; otherwise the scores only reflect rarity in England or the USA, not in the world. For scores higher than two or three standard deviations above the average, most IQ tests are inadequate and insufficiently standardized to measure them and their rarity well.
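The conversions above amount to mapping a score to a z-score under the source standard deviation and back out with the target one. A minimal sketch (the function name and the `mean=100` default are mine, for illustration):

```python
# Sketch of the deviation-IQ conversions discussed above.
# Assumption: the same person sits at the same z-score (rarity)
# regardless of which standard deviation the test reports in.

def convert_iq(iq, sd_from, sd_to, mean=100):
    """Re-express a deviation IQ in a different standard deviation."""
    z = (iq - mean) / sd_from   # standard deviations above/below the mean
    return mean + z * sd_to     # same rarity, expressed in the target s.d.

# The example from the text: 163 (s.d. 15) ~ 167 (s.d. 16) ~ 200 (s.d. 24)
print(round(convert_iq(163, 15, 16), 1))  # 167.2
print(round(convert_iq(163, 15, 24), 1))  # 200.8
# A cutoff two standard deviations above the mean:
print(round(convert_iq(132, 16, 24), 1))  # 148.0
```

This only relabels the same rarity; it says nothing about how well a given test actually measures scores far above the mean.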
This information is for your curiosity. The relevant point is that the self-reported IQ scores quite possibly were stated in differing standard deviations.
As I mentioned previously, and judging from the graphs, the standard deviations of the reported IQs are obviously mixed up, because they were not specified in the questionnaire, and the people who answered are probably not educated about them either. Including IQs in s.d. 24 along with those in s.d. 16 and 15 is bound to inflate the average. The top scores in that graph, or at least some of them, are in s.d. 24, which means they would be a lot lower in s.d. 15. The two-standard-deviation cutoff is IQ 132 in s.d. 16 (s.d. 15 being the one most adopted in recent scientific literature) and 148 in s.d. 24. Mensa, and often the press, like to use s.d. 24 to sound more impressive to amateurs.
This probably makes tests like the SAT more reliable as an estimation, because they use the same standard for everyone who submitted scores, although in this case a ceiling effect becomes apparent: perfect or nearly perfect scores can’t map to IQs above a certain point.
I will answer by explaining my view of morally realist ethics.
Conscious experiences and their content are physical occurrences, and real. They can differ from the world they represent, but they are still real occurrences. Their reality can be known with the highest possible certainty, above all else, including physics, because they are immediately and directly accessible, while the external world is only accessible indirectly.
Unlike the physical world, it seems that conscious perceptions can theoretically be anything. With the right technology, the content of conscious perceptions could be controlled, as in a virtual world, and made to be anything, even things that differ from the external physical world. While the physical world has no ethical value apart from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better. This is acting ethically. Not acting accordingly is irrational and mistaken. Ethics is about realizing valuable states.
Human beings have primitive emotional and instinctive motivations that are not guided by intelligence and rationality. These primitive motivations can take control of human minds and make them act in irrational and unintelligent ways. Although human beings may consider it good to act according to their primitive motivations in cases in which they conflict with acting ethically, this would be an irrational and mistaken decision.
When primitive motivations conflict with human intelligent reason, these two could be thought of as two different agents inside one mind, with differing motivations. Intelligent reason does not always prevail, because primitive motivations have strong control of behavior. However, it would be rational and intelligent for intelligent reason to always take the ultimate control of behavior if it could somehow suppress the power of primitive motivations. This might be done by somehow strengthening human intelligent reason and its control of motivations.
Actions which foster good conscious feelings and prevent bad conscious feelings need not do so in the short-term. Many effective actions tend to do so only in the long-term. Likewise, such actions need not do so directly; many effective actions only do so indirectly. Often it is rational to act if it is probable that it will be ethically positive eventually.
That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to understand for those who haven’t thought much about it; an objective and impersonal perspective is called for. For this reason it is rational for all beings to ‘act ethically’ not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don’t exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay:
https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917
“Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics.”
Indeed, conscious experience may be bounded by the size and complexity of brains or similar machinery, whether of humans, other animals, or cyborgs. Theoretically, though, conscious perceptions may be able to be anything (or nearly anything), as we could theorize about brains the size of Jupiter or much larger. You get the point.
“Should I interpret this as you defining ethics as good and bad feelings?”
Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value: things that can lead to direct value, which are myriad. And ethics is much more than defining value; it comprises laws, decision theory, heuristics, empirical research, and many theoretical considerations. I’m aware that Eliezer has written a post on Less Wrong saying that ethical value does not rest on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and really not an advisable read.
“So, do you endorse wireheading?”
This depends very much on the context. All else being equal, wireheading could be good for some people, depending on its implications. However, all else is hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by wireheading (such as love, some types of physical pleasure, good smell and taste, and many others), and wireheading might prevent people from being functional and acting to increase ethical value in the long term, possibly negating its benefits. I do see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.
Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?
I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.
“Why is fostering good conscious feelings and prevent bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?”
If we agree that good and bad feelings are good and bad, and that only conscious experiences produce direct ethical value, which lies in their good or bad quality, then theories that contradict this should not be correct, or they would need to justify their claims, and it seems that they have trouble in that area.
“But valuable to who? If there were a person who valued others being in pain, why would this person’s views matter less?”
:) That’s a beauty of personal identities not existing: it doesn’t matter who it is. In the case of valuing others being in pain, would it be generating pleasure from it? In that case, lots of things have to be considered, among them the net balance of good and bad feelings caused by the actions, and the societal effects of legalizing or not legalizing certain actions...
I think that it is a worthy use of time, and I applaud your rational attitude of looking to refute one’s theories. I also like to do that in order to evolve them and discard wrong parts.
Don’t hesitate to bring up specific parts for debate.
Liking pain seems impossible, as it is an aversive feeling. However, for some people, some types of pain or self-harm cause a distraction from underlying emotional pain, which is felt as good or relieving, or may give them some thrill. In these cases it seems that it is always pain plus some associated good feeling, or some relief of an underlying bad feeling, and it is for the good feeling or relief that they want the pain, rather than the pain for itself.
Conscious perceptions in themselves seem to be what is most certain in terms of truth. The things they represent, such as the physical world, may be illusions, but one cannot doubt feeling the illusions themselves.
The idea that one can like pain in itself is not substantiated by evidence. Masochists and self-harmers seek some pleasure or relief they get from pain or humiliation, not pain for itself. They won’t stick their hands in a pot of boiling water.
http://en.wikipedia.org/wiki/Sadomasochism http://en.wikipedia.org/wiki/Self-harm
To follow that line of reasoning, please provide evidence that there exists anyone who enjoys pain in itself. I find that unbelievable, as pain is aversive by nature.
Who cares about that silly game. Accepting to play it or not is my choice.
You can only validly like ice cream by way of feelings, because all that you have direct access to in this universe is consciousness. The difference between Monday and Tuesday in your example is only in the nature of the feelings involved. In the pain example, it is liked by virtue of the association with other good feelings, not pain in itself. If a person somehow loses the associated good feelings, certain painful stimuli cease to be desirable.
Yes, that is correct. I’m glad a Less Wronger finally understood.
Stuart, here is a defense of moral realism:
http://lesswrong.com/lw/gnb/questions_for_moral_realists/8g8l
My paper which you cited needs a bit of updating. Indeed some cases might lead a superintelligence to collaborate with agents without the right ethical mindset (unethical), which constitutes an important existential risk (a reason why I was a bit reluctant to publish much about it).
However, isn’t the orthogonality thesis basically about orthogonality between ethics and intelligence? In that case, the convergence thesis would not be flawed if some unintelligent agents kidnapped an intelligent agent and forced it to act unethically.
Another argumentation for moral realism:
Let’s imagine starting with a blank slate, the physical universe, and building ethical value in it. Hypothetically, in a meta-ethical scenario of error theory (which I assume is where you’re coming from), or of possible variability of values, this kind of “bottom-up” reasoning would make sense for more intelligent agents that could alter their own values: they could find, bottom-up, values that could be more optimally produced, and such reasoning would also help them fundamentally understand meta-ethics and the nature of value.
In order to connect to the production of some genuine ethical value in this universe, arguably some things would have to be built the same way, under certain conditions, while hypothetically other things could vary in the value production chain. This is because ethical value could not be absolutely anything, or those things could not be genuinely valuable. If everything could be fundamentally valuable, then nothing really would be, because value requires a discrimination between better and worse. Somewhere in the value production chain, some things would have to be constant in order for there to be genuine value. Do you agree so far?
If some things have to be constant in the value production chain, and some things could hypothetically vary, then the constant things would be the really important ones in creating value, and the variable things would be accessory, and could be specified with some degree of freedom by those analyzing value production from a “bottom-up” perspective in a physical universe. It would seem, therefore, that the constant things are likely what is truly valuable, while the variable and accessory things could be mere triggers or engines in the value production chain.
I argue that, in the case of humans and of this universe, the constant things are what really constitute value. There is some constant and universal value in the universe, or meta-ethical moral realism. The variable things, which are accessory, triggers or engines in the value production chain, are preferences or tastes. Those preferences that are valid are those that ultimately connect to what is constant in producing value.
Now, from an empirical perspective, what ethical value has in common in this universe is its relationship to consciousness. What happens in totally unconscious regions of the universe doesn’t have any ethical relevance in itself, and only consciousness can ultimately have ethical value.
Consciousness is a peculiar physical phenomenon. It is representational in nature, and as a representation it can freely differ or vary from the objects it represents. This difference or variability could be, for example, representing a wavelength of light in the visual field as a phenomenal color, or dreaming of unicorns, both of which transcend the original sources of data in the physical universe. The existence of consciousness is what is most epistemically certain to conscious observers; this certainty is higher than that of any objects in this universe, because while objects could be illusions arising from the aforementioned variability in representation, consciousness itself is the most directly verifiable phenomenon. Therefore, the existence of conscious perceptions is more certain than the physical universe or any physical theories, which could hypothetically be the product of false world simulations.
Consciousness can produce ethical value due to the transcendental freedom afforded by its representational nature, which is the same freedom that allows the existence of phenomenal colors.
Ethics is about defining value, what is good and bad, and how to produce it. If consciousness is what contains ethical value, then this ethical value lies in good and bad conscious experiences.
Variability in the production chain of good and bad conscious experiences for humans is accessory, as preferences and tastes, and in their ethical dimension they ultimately connect to good and bad conscious experiences. From a physical perspective, it could be said that the direct production of good and bad conscious experiences by nerve cells in brains is what constitutes direct ethical value, and that preferences are accessory triggers or engines that lead to this ethical value production. From paragraph 8, it follows that preferences are only ethically valid insofar as they connect to good and bad conscious experiences, in the present or future. People’s brains are like labyrinths with different paths ultimately leading to the production of good and bad feelings, but what matters is that production, not the initial triggers that pass through that labyrinth.
By the previous paragraphs, we have moral realism and constant values, with variability only apparent or accessory. So greater intelligence would find this out and not vary. Now, depending on the question of personal identity, you may ask: what about selfishness?
David, what are those multiple possible defeaters for convergence? As I see it, the practical defeaters that exist still don’t affect the convergence thesis; they are just possible practical impediments, from unintelligent agents, to the realization of the goals of convergence.
Good post, though I thought it was a little too focused on money. It could say more explicitly what types of charity are best, what types of action, and what other ways there are to help besides money.
In my opinion, some of the most efficient ways to achieve a positive difference are, foremost (these are strategic priorities with more positive potential than all the rest): human genetic engineering and intelligence augmentation, artificial intelligence, and reduction of existential risks. In second order of importance (these are ways to increase utility in the here and now): destroying animals and the environment (which are a cause of huge suffering), producing artificial meat to replace cruel animal farming, and promoting birth control among the poor.
Activities to achieve these goals include:
Becoming very rich and using the money to achieve them;
Convincing people with lots of money to donate to these causes, and any other people to become aware of them and contribute somehow, by various means, such as by writing books, articles, making movies, posting on websites, talking to them, encouraging them to do activities to achieve them;
Conducting research personally in fields such as genetic engineering, artificial intelligence, artificial meat, birth control, etc., and convincing more people to do the same;
Helping or creating charity organizations directed towards birth control;
Fighting and discrediting religion, which is a significant hurdle to many of these efforts;
Convincing people about the right general framework of ideas that is compatible with these goals.
In my opinion, most other kinds of efforts to make a positive change, such as feeding the poor, preserving the environment, curing diseases, or giving education to the poor, are overrated and short-sighted, their long-term effects being relatively small. An increase in intelligence would produce an increase in the ability to do everything else, so it would be much more effective in the long term; and all these measures lose importance if our civilization and technological advancement were lost to some global catastrophe.
When AI starts working, several problems that people work on now will be rapidly solved (except those that require lengthy experiments). Therefore focusing on these problems now may be a waste of time, except for the meantime until their solution by AI.
Raising money seems like a matter of chance or luck. You’ll naturally try it, but you can’t count on it, so it’s not a matter of simply deciding to do it. Raising public awareness and enthusiasm seems to be an action with relatively high potential: you can potentially get many other people to raise money, do scientific research, and raise public awareness and enthusiasm in their turn, so this may be the action with the most potential, even though it only accomplishes things indirectly. Doing scientific research personally seems to require high stakes in career and life, and depends somewhat on where you live and on what you like to study and work on. This one is a hard decision, because it is sort of a gamble with your life.