An exercise I adopted as a child, and which I did not read anywhere, is to recall or reconstruct the thought that led to my current thought, then the thought before that, and so on. By examining many such transitions, I discovered some generalizations about the unconscious function that takes thought N to thought N+1.
Richard_Hollerith
I’m going to focus on one word in your comment: “democracy”.
So, you would permit “democracy … to answer various questions formerly answered by scripture”?
It makes me sad to learn that. I am strongly opposed to the idea that counting votes is a good way of arriving at ordinary or moral truth (unless perhaps one is very picky about whose vote counts).
Of course, that pernicious idea—Majority Rule—is so prevalent in our world that I would not bother to voice my objection except that you are the leader of a project that, if successful, will impose on the entire future light cone decisions that will have the same unbendable and irreversible character that physical law now has. This property of irreversibility is unique to your project. (There are other projects that would impose irreversible conditions, namely sterilization of the biosphere, if they fail or go wrong, but yours is the only one I know of that would do so if you succeed.)
What makes my agony and my sadness particularly acute is the knowledge that up to the age of 19 or so, you wrote about ultimate ends in ways I found completely benign and lovable. I refer of course to documents like TMOLFAQ, which apparently you are now so ashamed of that you have removed it from the web.
Oh how sad it made me to read your Collective Extrapolated Volition document, with its horror of disenfranchisement, plus your speculations about extending the franchise to non-human primates, as if the very contingent, accidental, particular political religion of our times were a universal law of the universe that no rational agent with sufficient time to grow wise could object to!
Oh yeah: nice series of blog entries. Thanks for writing. And know that I know that it is only because your commitment to unambiguous publication of your beliefs that it is possible for me to snipe at you in this way.
I agree wholeheartedly with this post or blog entry. As one of my favorite authors once said, we are all pawns and players in the Game of Life.
If you (the reader) meet me, I will try to determine whether you are good or evil, that is, whether your expected impact on the future is positive or negative—and if you care for nothing but pleasure, I am probably going to decide that you are at least a little evil—though the worst thing I will do to you is ignore you and refuse to cooperate with you. Moreover, I know how complex human morality and rationality are, so I know that my judgement about people can always be in error.
Anders, glucose drinks are known to promote the release of neurotransmitters like serotonin, which one would expect to help self-control, but the problem is that the effect lasts less than an hour and is followed by a larger effect in the opposite direction.
RI asks,
how moral or otherwise desirable would the story have been if half a billion years’ of sentient minds had been made to think, act and otherwise be in perfect accordance to what three days of awkward-tentacled, primitive rock fans would wish if they knew more, thought faster, were more the people they wished they were...
Eliezer answers,
A Friendly AI should not be a person. I would like to know at least enough about this “consciousness” business to ensure a Friendly AI doesn’t have (think it has) it. An even worse critical failure is if the AI’s models of people are people.
Suppose consciousness and personhood are mistaken concepts. Since personhood is an important concept in our legal systems, there is something in reality (namely, in the legal environment) that corresponds to the term “person”; but suppose there is not any “objective” way to determine whether an intelligent agent is a person, where “objective” means without someone creating a legal definition or taking a vote or something like that. And suppose consciousness is a mistaken concept in the way that phlogiston, the aether and the immortal soul are mistaken concepts. Then would not CEV be morally unjustifiable, because there would be no way to justify the enslavement—or “entrainment” if you want a less loaded term—of the FAI to the (extrapolated) desires of the humans?
playing this game is rational if the thrill and the dream of being rich are valued at more than $2
Rukasu, I believe that having feelings about winning the lottery is an even bigger waste than the $2 because feelings are a scarce resource of the mind which can always be turned to a fruitful plan. In other words, since there are always many ways to get a thrill, always choose a way that can positively impact reality.
I too would prefer for contemporary politics to show up here only very rarely.
Doug, I do not agree, because my utility function depends on the identity of the people involved, not simply on N. Specifically, it might be possible for an agent to become confident that Bob is much more useful to whatever is the real meaning of life than Charlie is, in which case a harm to Bob has greater disutility in my system than a harm to Charlie. In other words, I do not consider egalitarianism to be a moral principle that applies to every situation without exception. So, for me, U is not a function of (N,I,T).
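The distinction I am drawing can be made concrete with a short sketch. The code below is purely illustrative (the names Bob and Charlie, the weights, and both function names are my own inventions, not anything from Doug's formulation): an egalitarian disutility that depends only on the count N of people harmed versus one that weights each harm by the harmed person's estimated usefulness.

```python
# Illustrative sketch: two ways of aggregating the disutility of harms.
# All names and numbers are hypothetical.

def egalitarian_disutility(harms):
    """Depends only on N = len(harms): every harm counts equally."""
    return len(harms)

def identity_weighted_disutility(harms, usefulness):
    """Weights each harm by the harmed person's estimated usefulness."""
    return sum(usefulness[person] for person in harms)

# Suppose an agent is confident Bob matters more than Charlie.
usefulness = {"Bob": 3.0, "Charlie": 1.0}

# Harming Bob and harming Charlie are equal under the egalitarian rule...
assert egalitarian_disutility(["Bob"]) == egalitarian_disutility(["Charlie"])

# ...but not under the identity-weighted rule.
assert identity_weighted_disutility(["Bob"], usefulness) > \
       identity_weighted_disutility(["Charlie"], usefulness)
```

The first function is a function of N alone; the second is not, which is the sense in which my U is not a function of (N,I,T).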
When I write for a very bright “puzzle-solving-type” audience, I do the mental equivalent of deleting every fourth sentence, or at least the tail of every fourth sentence, to prevent the reader from getting bored. I believe that practice helps my writing compete with the writing around it for the critical resource of attention. There are of course many ways of competing for attention, and this is one of the least prejudicial to rational thought. I recommend this practice only in forums in which the reader can easily ask followup questions. Nothing about this practice is incompatible with the practices Eliezer is advocating. This week I am experimenting with adding three dots to the end of a sentence to signal to the reader the need to mentally complete the sentence.
So, what sentence did I delete from the above? A sentence to the effect that I only do this for writing that resembles mathematical proof fairly closely: “Suppose A. Because B, C. Therefore D, from which follows E, which is a contradiction, so our original assumption A must be false.”
After writing a first draft, I go back and add a lot more words than I had saved with the “do not bore the reader” practice. E.g. I add sentences explicitly to contradict interpretations that would lead to my being dismissed as hopelessly socially inept, eccentric or evil. Of course because I advocate outlandish positions here, I still get dismissed a lot.
(I agree, Peter.)
no matter what else she does wrong, and what else you do right, all of it together can’t outweigh the life consequences of that one little decision.
I think a person’s life should be evaluated by what effect they have on civilization (or more precisely, on the universe) not by how long they live. I think that living a long time is a merely personal end, and that a properly lived life is devoted to ends that transcend the personal. Isn’t that what you think, Eliezer?
And a randomness-adder :)
Nice story.
s/werewolf/Easter bunny/ IMHO.
Thesis: regarding some phenomenon as possible is nothing other than . . .
I consider that an accurate summary of Eliezer’s original post (OP) to which these are comments.
Will you please navigate to this page and start reading where it says,
Imagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep.
You need read only to where it says, “Markos Sophisticus Maximus”.
Those six paragraphs attempt to be a reductive exposition of the concept of whole number, a.k.a. non-negative integer. Please indicate whether you have the same objection to that exposition, namely, that the exposition treats of the number of pebbles in a bucket and therefore circularly depends on the concept of number (or whole number).
Richard, if you’re seriously proposing that consciousness is a mistaken idea, but morality isn’t, I can only say that that has got to be one unique theory of morality.
Yes, Z.M.D., I am seriously proposing that. And I know my theory of morality is not unique to me, because a man caused thousands of people to declare for a theory of morality that makes no reference to consciousness (or subjective experience, for that matter). Although most of those thousands might have switched by now to some other moral theory, and although most of the declarations might have been insincere in the first place, a significant fraction have not and were not, if my correspondence with a couple dozen of those thousands is any indication.
Maybe [Eliezer is] right, and superintelligence implies consciousness. I don’t see why it would, but maybe it does. How would we know? I worry about how productive discussions about AI can be, if most of the participants are relying so heavily upon their intuitions, as we don’t have any crushing experimental evidence.
It is not only that we don’t have any experimental evidence, crushing or otherwise, but also that I have never seen anything resembling an embryo of a definition of consciousness (or personhood unless personhood is defined “arbitrarily”, e.g., by equating it to being a human being) that would commit a user of the concept to any outcome in any experiment. I have never seen anything resembling an embryo of a definition even after reading Chalmers, Churchland, literally most of SL4 before 2004 (which goes on and on about consciousness) and almost everything Eliezer published (e.g., on SL4).
-- and since most writings in psychology are worthless, it is easy to give up on the whole field before one discovers worthwhile writings like “Judgement Under Uncertainty” and “The Moral Animal”.
RI, a large part of my motivation was simply to practice a mental skill: it is a delightful feeling to improve drastically one’s ability to observe one’s own deliberations. Three decades and a severe bump on the head separate my teenage years from today, and today I am almost completely unable to do this exercise.
BTW, it is my guess that the exercises Eliezer and I describe will confer most of their benefits on exercisers who are still teenagers.
RI, to answer your question: the function that takes thought N to thought N+1 is complex enough that I did not learn anything that could be put into neat sentences, nor do I retain any declarative memories of what I learned, except that the deliberation proceeded in a much more “predictable-in-retrospect” manner when I thought about some themes than when I thought about others. E.g., I remember that thinking about my mom produced a very opaque chain of thoughts.
The practice Eliezer describes strikes me as of greater potential benefit than the one I describe, but perhaps the one I describe can be accomplished by a greater fraction of teenagers reading these words. Very few individuals are blessed with the delightful hardware that the teenage Eliezer had available for such exercises.
I have yet to read most of your post-2004 writings (making a living always seems to interfere), but I am guessing that your personal Mysterious Phenomenon was consciousness.
Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.
And pulling numbers (80%, 95%) out of the air on this question is absurd.
I disagree with the last 2 comments.
Eliezer’s priority has gradually shifted over the last 5 years or so from increasing his own knowledge to transmitting what he knows to others, which is exactly the behavior I would expect from someone with his stated goals who knows what he is doing.
Yes, he has suggested or implied many times that he expects to implement the intelligence explosion more or less by himself (and I do not like that). But ever since the Summer of AI, his actions—particularly all the effort he has put into blogging and his references to 15-to-18-year-olds, which suggest that he has thought about the most effective audience to target with his blogging—strongly indicate that he understands that the best way for him to assist the singularitarian project at this time is to transmit what he knows to others.
The blog is exactly the means of transmitting scientific knowledge I would expect from someone who knows what he is doing. Surely we can look past the fact that some crusty academics look down on the blog.
I know of no one who has been more effective than Eliezer over the last 8 years or so at transmitting knowledge to people with a high aptitude for math and science.
And the suggestion that Eliezer lacks discipline strikes me as extremely unlikely. Just because a person is extremely intelligent does not mean that it is easy for the person to acquire knowledge at the rate Eliezer has acquired knowledge or to become so effective at transmitting knowledge.
Summary: if they can make you believe absurdities, they can make you commit atrocities.