Phil Goetz was not saying that all languages have the word “the.” He said that the word “the” is something every ENGLISH document has in common. His criticism is that this does not mean that Hamlet is more similar to an English restaurant menu than an English novel is to a Russian novel. Likewise, Eliezer’s argument does not show that we are more like petunias than like an AI.
Caledonian, I didn’t say that the Razor leads to the conclusion that “it is more probable that two things which share a property are identical than not.” The Razor leads to the conclusion that “the two things are identical” is more likely than some other specific hypothesis that they are not identical in some specific way.
There are of course an infinite number of ways in which two things can fail to be identical, so in order to compare the probability that the two are identical with the probability that they are not, we have to sum the probabilities for all the ways they could fail to be identical; and thus the conclusion will be that they are more likely not identical than identical, as you correctly stated.
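A minimal numeric sketch of that summation point, in Python (all numbers are my own, chosen purely for illustration):

    # "Identical" can beat any single specific way of differing, yet still
    # lose to the sum over all the ways of differing.
    p_identical = 0.2
    p_each_difference = 0.001
    n_ways_to_differ = 800          # stand-in for "very many / unbounded"

    p_not_identical = p_each_difference * n_ways_to_differ  # 0.8
    print(p_identical > p_each_difference)   # True: beats any single alternative
    print(p_not_identical > p_identical)     # True: loses to the sum of them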
If you look back, though, you will see that I never said anything opposed to this anyway.
Eliezer, if AGI (or something else) ends up being designed without Friendliness, and Robin turns out to be right both that this has no major harmful consequences and about the doubling time, would you admit that his argument might be better, or would you say that he had been lucky?
Correction: in my last comment it should have been “if more complex claims, on average, are more probable than simpler claims,” not “if more probable claims, on average, are more probable than simpler claims”.
Cyan: “Minimum description length” works for English and probably most other languages as well, including abstract logical languages. Increase the number of properties enough, and it will definitely work for any language.
Caledonian: the Razor isn’t intended to prove anything; it is intended to give an ordering of the probability of various accounts. Suppose we have 100 properties, numbered from 1 to 100. X has properties #1 through #100. Y has property #1. Which is more likely: that Y has properties #1 through #100 as well, or that Y has property #1, all the prime-numbered properties except #17, and property #85? I think it is easy enough to see which of these is simpler and more likely to be true.
Peter Turney: the argument for the Razor is that, on average, more complicated claims must be assigned a lower prior probability than simpler claims. If you assign prior probabilities at all, this is necessary on average, no matter how you define simplicity. The reason is that according to any definition of simplicity that corresponds even vaguely with the way we use the word, you can’t get indefinitely simpler, but you can get indefinitely more complicated. So if all your probabilities are equal, or if more complex claims, on average, are more probable than simpler claims, your prior probabilities will not add to 1, but to infinity.
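To make the normalization point concrete, here is a minimal sketch in Python (the grouping of claims into complexity levels and the particular weights are my own illustration, not part of the original argument):

    # Claims grouped by complexity level n = 1, 2, 3, ... with no upper bound:
    # you can always get more complicated, but not indefinitely simpler.

    # A prior that falls off with complexity (weight 2**-n) can be normalized.
    razor_total = sum(2.0 ** -n for n in range(1, 1001))
    print(razor_total)  # ~1.0

    # "Equal weight" per level: the partial sums grow without bound,
    # so no choice of constant can make them add to 1.
    c = 0.01
    for levels in (10, 100, 1000, 10000):
        print(levels, c * levels)  # 0.1, 1.0, 10.0, 100.0 ... diverges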
I doubt it particularly matters which precise measure of simplicity I use, probably any reasonable measure will do. Consider the same with one hundred properties: X has properties 1 through 100. If Y has properties 12, 14, 15, 27, 28, 29, 43, 49, 62, 68, 96, and 100, but no others, then it will take more bits to say which properties X and Y have, than the number of bits it will take to specify that X and Y share all the same properties.
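As a rough illustration of that bit-counting claim, here is a sketch under one made-up encoding scheme (7 bits per property index plus a one-bit “same profile as X?” flag; the scheme is mine, chosen only to make the comparison concrete):

    from math import ceil, log2

    properties_of_Y = [12, 14, 15, 27, 28, 29, 43, 49, 62, 68, 96, 100]

    bits_per_index = ceil(log2(100))  # 7 bits: enough to name any one of 100 properties
    bits_same_as_X = 1                # a single flag bit: "Y has exactly X's properties"
    # Otherwise: the flag, a 7-bit count, and 7 bits per listed property.
    bits_arbitrary_subset = 1 + bits_per_index * (1 + len(properties_of_Y))

    print(bits_same_as_X)         # 1
    print(bits_arbitrary_subset)  # 92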
Of course, this seems to support Guest’s argument; and yes, once we see that X and Y share a property, the simplest hypothesis is that they are the same. Of course this can be excluded by additional evidence.
“But there is just no law which says that if X has property A and Y has property A then X and Y must share any other property.”
“X & Y both have properties A & B” is logically simpler than “X & Y have property A, X has B, and Y does not have B.”
So if X and Y share property A, and X has B, this is evidence, by Ockham’s razor, that Y has property B.
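A toy Python sketch of how that prior asymmetry plays out (the description lengths are invented by me, purely to illustrate the direction of the effect):

    # Given: X and Y share A, and X has B.  Remaining hypotheses about Y:
    #   "same": X and Y both have A and B          (shorter description)
    #   "diff": X and Y have A; X has B; Y lacks B  (longer description)
    desc_len = {"same": 4, "diff": 6}  # invented lengths, in "symbols"

    # Razor-style prior proportional to 2**(-description length); both hypotheses
    # fit the observations equally well, so these are also the posteriors.
    weights = {h: 2.0 ** -l for h, l in desc_len.items()}
    total = sum(weights.values())
    print({h: w / total for h, w in weights.items()})  # {'same': 0.8, 'diff': 0.2}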
Peter de Blanc: see http://www.overcomingbias.com/2007/07/beware-the-insi.html, posted by Robin Hanson. In particular: “Most, perhaps all, ways to overcome bias seem like this. In the language of Kahneman and Lovallo’s classic ’93 paper, we allow an outside view to overrule an inside view… If overcoming bias comes down to having an outside view overrule an inside view, then our questions become: what are valid outside views, and what will motivate us to apply them?”
What do you think this means, if not that overcoming bias means taking outside views?
The implied disagreement here between the “inside view” of “outside views” (i.e. a limited domain) and the “outside view” of “outside views” (i.e. something that applies in general) is the same as Eliezer’s disagreement with Robin about the meaning of Aumann.
If Robin is right, then Eliezer is against overcoming bias in principle, since overcoming bias would mean taking an outside view (on Robin’s understanding). Of course, if Eliezer is right, it just means that Robin is biased against inside views. Each of these consequences is very strange: if Robin is right, Eliezer is in favor of bias despite posting on a blog about overcoming bias, while if Eliezer is right, Robin is biased against his own positions, among other things.
Some of these comments to HA are unfair: he is not saying that no one else is an altruist, but only that he isn’t. So he also doesn’t care about the pain inflicted on the toddler’s parents, for example.
Still, I’m afraid he hasn’t considered all the consequences: when the toddlers burn up in the orphanage, the economic damage (in this case, the loss of the toddlers’ future contributions to society) may end up lowering HA’s persistence odds. Certainly we have no reason to believe that it will increase them. So HA should really care about rescuing the toddlers.
What Alexandre said. It may be that physics is deterministic, but arguing that this is logically necessary, on the grounds that the merely possible, by definition, does not happen, doesn’t seem reasonable to me.
Cyan: it does not feel “neither deterministic nor random”. It just feels random.
HA, both here and in your comments on the previous posts, you have continuously given the impression that you don’t know what Eliezer is talking about.
It’s Michael Ruse, not Rose.
A much stronger argument for the chocolate cake would be that there must be some incredibly small probability that atoms would come together by chance to form a chocolate cake in the asteroid belt. However, all physical possibilities are real, according to the argument for many-worlds. Therefore there is actually a chocolate cake in the asteroid belt. It just happens to be very distant from our blob of amplitude.
A similar case: there must be a world where your arm transforms into a blue tentacle, even if this world has an incredibly small amount of amplitude. Granted that you don’t expect to see this happen, there is still a different version of Eliezer who does see it happen. Of course, as you have argued, he cannot explain it. But what do you think he says about it when people ask why it happened? Does he begin to believe in magic?
Eliezer, it’s possible for there to be a good argument for something without that implying that you should accept it. There might be one good argument for X, but 10 good arguments for not-X, so you shouldn’t accept X despite the good argument.
This is an important point because if you think that a single good argument for something implies that you should accept it, and that therefore there can’t be any good arguments for the opposite, this would suggest a highly overconfident attitude.
Eliezer’s point (a quite justified one) is that the word “choice” is a name for something that human beings do, just as the name “apple” is a name for something human beings find in the world. Whatever you think an apple is, if you say it is only an illusion, then you’re not talking about apples, but something else. Likewise, whatever you might think a choice is, if you say it is only an illusion, you’re not talking about choices, but something else. For choice just means one of the things that people actually do in the real world, so it is quite real, not an illusion.
Again, if free will requires that the future not be fixed, then many-worlds implies that free will can exist. According to many-worlds it is impossible to predict the result of a quantum mechanical experiment, precisely because both results must happen to different versions of you. So before you do the experiment, it is completely indeterminate what “you” are going to see.
Once free will is defined (I don’t see that anyone has done so here yet), it is easy to see that it is consistent with many-worlds. Ordinarily free will has a simple definition: if a person is thinking about what to do, there is more than one thing that he can conclude and do.
According to many-worlds, there are many things that he does conclude, and does do. If there are many that he does do, then there are many that he can do. So by this definition of free will, he has free will.
Roko is basically right. In a human being, the code that is executing when we try to decide what is right or what is wrong is the same type of code that executes when we try to decide what 6 times 7 is. The brain has a general pattern signifying “correctness,” whatever that may be, and it uses this identical pattern to evaluate “6 times 7 is 42” and “murder is wrong.”
Of course you can ask why the human brain matches “murder is wrong” to the “correctness” pattern, and you might say that it is arbitrary (or you might not). Either way, if we can program an AGI at all, it will be able to reason about ethical issues using the same code that it uses when it reasons about matters of fact. It is true that it is not necessary for a mind to do this. But our mind does it, and doubtless the first mind-programmers will imitate our minds, and so their AI will do it as well.
So it is simply untrue that we have to give the AGI some special ethical programming. If we can give it understanding at all, an understanding of ethics comes packaged with it.
Naturally, as Roko says, this does not imply the existence of any ghost, any more than the fact that Deep Blue makes moves unintelligible to its programmers implies a ghost in Deep Blue.
This also gives some reason for thinking that Robin’s outside view of the singularity may be correct.