Objectivist ethics claims to be grounded in rational thought alone. Are you familiar enough with the main tenets of that philosophy, and would you like to comment on how you see it being of possible use to FAI theory?
StefanPernar
Hmm—interesting. I thought this could be of interest, given the large overlap between this site's dedication to rationality and its concern with combating the existential risk a rogue AI poses. Reason and existence are central to Objectivism too, after all:
“it is only the concept of ‘Life’ that makes the concept of ‘Value’ possible,” and, “the fact that a living entity is, determines what it ought to do.” She writes: “there is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to living organisms.” And: “Man knows that he has to be right. To be wrong in action means danger to his life. To be wrong in person – to be evil – means to be unfit for existence.”
I did not find an analysis in Guardians of Ayn Rand that concerned itself with those basic virtues of ‘existence’ and ‘reason’. I personally find Objectivism flawed for focusing on the individual rather than the group, but that is a different matter.
I realize that I am being voted down here, but I am not sure why, actually. This site is dedicated to rationality and to the core concern of avoiding a human extinction scenario. So far Rand and Less Wrong seem a pretty close match. Don’t you think it would be nice to know exactly where Rand took a wrong turn, so that it can be explicitly avoided in this project? Rand making some random remarks on musical taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.
So where did she take a wrong turn exactly and how is this wrong turn avoided here? Nobody interested in finding out?
Me—whether I qualify as an academic expert is another matter entirely, of course.
Every human being in history so far has died, and yet humans are not extinct. Not sure what you mean.
“Given this, I conclude that Objectivism isn’t the stuff that makes you win, so it’s not rationality.”
Do you think it is worthwhile to find out where exactly their rationality broke down to avoid a similar outcome here? How would you characterize ‘winning’ exactly?
Yes—I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search my blog for a detailed analysis and a summary of the core similarities and differences in our premises and conclusions.
“I think we’ve been over that already. For example, Joe Bloggs might choose to program Joe’s preferences into an intelligent machine—to help him reach his goals.”
Sure—but it would be moral simply by virtue of circular logic and not objectively. That is my critique.
I realize that one will have to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are defended.
If you have a particular problem with any of the core assumptions and conclusions, I would prefer you voice it not as a blanket rejection of an out-of-context comment here or there, but on the basis of the fundamentals. Reading my blog posts in sequence will certainly help, although I understand that some may consider that an unreasonable time investment for what seems like superficial nonsense on the surface.
Where is your argument against my points, Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying “Look at this nonsense” is not an argument. So far I have gotten only an ad hominem and an argument from personal incredulity.
Since when are ‘heh’ and ‘but, yeah’ considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond your not understanding the points being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)
Perfectly reasonable. But the argument—the evidence, if you will—is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.
“Compassion isn’t even universal in the human mind-space. It’s not even universal in the much smaller space of human minds that normal humans consider comprehensible. It’s definitely not universal across mind-space in general.”
Your argument is beside my original point, Adelene. My claim is that compassion is a universal rational moral value, meaning that any sufficiently rational mind will recognize it as such. The fact that not every human is in fact compassionate says more about their rationality (and, of course, their unwillingness to consider the arguments :-) ) than about that claim. That is why it is called ASPD—the D stands for ‘disorder’: it is an aberration, not helpful, not ‘fit’. Surely the fact that some humans are born blind does not invalidate the fact that seeing people have an enormous advantage over the blind. Compassion is certainly less obvious, though—that is for sure.
Re “The argument is valid in a “soft takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer.”—that is from Kaj Sotala over at her live journal—not me.
From Robin: Incidentally, when I said, “it may be perfectly obvious”, I meant that “some people, observing the statement, may evaluate it as true without performing any complex analysis”.
I feel the other way around at the moment. Namely “some people, observing the statement, may evaluate it as false without performing any complex analysis”
“This isn’t a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief.”
But the reasons to change one’s view are provided on the site, yet they are rejected without consideration. How about this: read the paper linked under B, and should it convince you, maybe you will have gained enough provisional trust that reading my writings will not waste your time—enough to suspend your disbelief and follow some of the links on the about page of my blog. Deal?
“My set of values are utterly whimsical [...] The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason.”
If that is your stated position, then in what way can you claim to create FAI with this whimsical set of goals? This is the crux, you see: unless you find some unobjectionable set of values (such as, in rational morality, ‘existence is preferable to non-existence’ ⇒ utility = continued existence ⇒ modified to ensure continued co-existence with the ‘other’ to make it unobjectionable ⇒ apply rationality in line with microeconomic theory to maximize this utility, et cetera) you will end up being a deluded, self-serving optimizer.
The longer I stay around here, the more I get the feeling that people vote comments down purely because they do not understand them, not because they have found a logical or factual error. I expect more from a site dedicated to rationality. This site is called ‘Less Wrong’, not ‘less understood’, ‘less believed’ or ‘less conformist’.
Tell me: in what way do you feel that Adelene’s comment invalidated my claim?
Tim: “If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct.”
Actually, compassion evolved many different times as a central doctrine of all major spiritual traditions—see the Charter for Compassion. This is in line with a prediction I made independently, unaware of this fact, back in late 2007; I eventually found the link in late 2008 through Karen Armstrong’s book The Great Transformation.
Tim: “Why is it a universal moral attractor?” Eliezer: “What do you mean by “morality”?”
The central point in my thinking: that is good which increases fitness. What is not good—not fit—is unfit for existence. Assuming this to be true, we are very much limited in our freedom by what we can do without going extinct (my most recent blog post, Freedom in the evolving universe, is about exactly that).
from the Principia Cybernetica web: http://pespmc1.vub.ac.be/POS/Turchap14.html#Heading14
“Let us think about the results of following different ethical teachings in the evolving universe. It is evident that these results depend mainly on how the goals advanced by the teaching correlate with the basic law of evolution. The basic law or plan of evolution, like all laws of nature, is probabilistic. It does not prescribe anything unequivocally, but it does prohibit some things. No one can act against the laws of nature. Thus, ethical teachings which contradict the plan of evolution, that is to say which pose goals that are incompatible or even simply alien to it, cannot lead their followers to a positive contribution to evolution, which means that they obstruct it and will be erased from the memory of the world. Such is the immanent characteristic of development: what corresponds to its plan is eternalized in the structures which follow in time while what contradicts the plan is overcome and perishes.”
Eliezer: “It obviously has nothing to do with the function I try to compute to figure out what I should be doing.”
Once you realize the implications of Turchin’s statement above it has everything to do with it :-)
Now some may say that evolution is absolutely random and directionless, or that multilevel selection is flawed, or make similar claims. But after reevaluating the evidence against both of these claims—the work of Valentin Turchin, Teilhard de Chardin, John Stewart, Stuart Kauffman, John Smart and many others regarding evolution’s direction, and the ideas of David Sloan Wilson regarding multilevel selection—one will have a hard time maintaining either position.
:-)
Full discussion with Kaj at her live journal, with further clarifications by me: http://xuenay.livejournal.com/325292.html?view=1229740
Robin, your suggestion—that compassion is not a universal rational moral value because, although more rational beings (humans) display such traits, less rational beings (dogs) do not—is so far off the mark that it borders on the random.
By unobjectionable values I mean those that would not automatically and eventually lead to one’s extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one’s inability to generate further utility. Hence my suggested utility function of ‘ensure continued co-existence’.
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
Fun investment fact: the two trades that, over 40 years, turned 1′000 USD into more than 1′000′000 USD:
Jan 1970: buy gold at 34.94 USD/oz (USD 1′000.00)
1st trade: sell gold in Jan 1980 at 675.30 USD/oz (USD 19′327.41); buy the Dow on 18 Apr 1980 at 763.40 (USD 19′327.41)
2nd trade: sell the Dow on 14 Jan 2000 at 11′722.98 (USD 296′797.14); buy gold on 11 Nov 2000 at 264.10 USD/oz (USD 296′797.14)
Portfolio value today: ~1′187′188.57 USD
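As a sanity check, the chain of trades can be replayed in a few lines. The late-2009 spot gold price of roughly 1′056.40 USD/oz used for the final valuation is an assumption here, back-computed from the quoted portfolio value; the other prices are the ones listed above:

```python
# Replay the two-trade portfolio: gold -> Dow -> gold, 1970 to ~2009.
start_usd = 1_000.00

# Jan 1970: buy gold at 34.94 USD/oz
oz = start_usd / 34.94

# 1st trade: sell gold in Jan 1980 at 675.30, buy the Dow in Apr 1980 at 763.40
usd_1980 = oz * 675.30              # ~19,327 USD
dow_units = usd_1980 / 763.40

# 2nd trade: sell the Dow in Jan 2000 at 11,722.98, buy gold in Nov 2000 at 264.10
usd_2000 = dow_units * 11_722.98    # ~296,797 USD
oz = usd_2000 / 264.10

# Assumed late-2009 spot gold price, inferred from the quoted final value.
value_today = oz * 1_056.40         # ~1.19 million USD
print(f"Portfolio value: {value_today:,.2f} USD")
```

Each sale amount carries over unchanged into the next purchase, so the whole result is just the product of the three price ratios applied to the initial 1′000 USD.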
:-)