Donated $400.
Did it.
Luke isn’t bragging, he’s admitting that SI was/is bad but pointing out it’s rapidly getting better. And Eliezer is right, criticisms of SI are usually dumb. Could their replies be interpreted the wrong way? Sure, anything can be interpreted in any way anyone likes. Of course Luke and Eliezer could have refrained from posting those replies and instead posted carefully optimized responses engineered to send nothing but extremely appealing signals of humility and repentance.
But if they did turn themselves into politicians, we wouldn’t get to read what they actually think. Is that what you want?
Donated $500!
Did all of it. Monetary reward questions made me laugh.
Donated $500 CAD just now.
By the way, SIAI is still more than 31,000 US dollars away from its target.
I’ve just donated 500 Canadian dollars to the Singularity Institute (at the moment, 1 Canadian dollar = 1.01 US dollars).
[edited]
Well, this is insanely disappointing. Yes, the OP shouldn’t have directly replied to the Bankless podcast like that, but it’s not like he didn’t read your List of Lethalities, or your other writing on AGI risk. You really have no excuse for brushing off very thorough and honest criticism such as this, particularly the sections that talk about alignment.
And as others have noted, Eliezer Yudkowsky, of all people, complaining about a blog post being long is the height of irony.
This is coming from someone who’s mostly agreed with you on AGI risk since reading the Sequences, years ago, and who’s donated to MIRI, by the way.
On the bright side, this does make me (slightly) update my probability of doom downwards.
I donated $400.
Whatever the correct answer is, the first step towards it has to be to taboo words like “experience” in sentences like, “But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?”
What making copies is, is creating multiple instances of the same pattern. If you make two copies of a pattern, there are twice as many instances but only one pattern, obviously.
Are there, then, two of ‘you’? Depends what you mean by ‘you’. Has the weight of experience increased? Depends what you mean by ‘experience’. Think in terms of patterns and instances of patterns, and these questions become trivial.
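If it helps to see the distinction concretely, here’s a toy Python sketch of my own (the names are made up, and a list of bits stands in for the program): the two copies are identical as a pattern but distinct as instances.

```python
pattern = [0, 1, 1, 0, 1]        # the pattern: some configuration of bits
instance_a = list(pattern)       # first copy of the pattern
instance_b = list(pattern)       # second copy of the pattern

print(instance_a == instance_b)  # True:  identical content, i.e. one pattern
print(instance_a is instance_b)  # False: two distinct instances of that pattern
```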
I feel a bit strange having to explain this to Eliezer Yudkowsky, of all people.
When two opposite points of view are expressed with equal intensity, the truth does not necessarily lie exactly halfway between them. It is possible for one side to be simply wrong.
-- Richard Dawkins
Hubert Dreyfus, about AI. Although, now that I think about it, he’s probably too old to do this kind of stuff.
Tell ’em to call Hofstadter. He doesn’t think much of the Singularity, so it should be a fun diavlog.
It’s not any more derogatory than ‘fanfic’. Which is to say, it’s a bit derogatory if you think it’s lazy of the author not to have bothered to create his own universe to write a story in.
Good story. I love Yudkowskian fiction!
That said, I don’t see why Vhazhar would want to touch the Sword of Good if all it does is “test good intentions”. Isn’t that just another way of saying that if the holder survives, he knows that his terminal values correspond to the Sword’s?
It makes sense that Hirou would want Vhazhar to touch the Sword, because since Hirou can touch it, if Vhazhar can touch it too Hirou will know that Vhazhar’s terminal values are similar to his own. But why does Vhazhar give a crap about the Sword’s terminal values?
So to objectify someone is to think of him in a way that doesn’t include respect for his goals, interests, or personhood?
According to this definition, I objectify the bus driver, the cashier at the local Walmart, and just about everybody I interact with on an average day.
I don’t understand what “objectification” means. Even pickup artists can’t think of women as objects, since the only way they can be successful is by interacting with women in accordance with a certain model of the female psyche. Objects don’t have psyches.
If the pickup artist somehow deceives a woman to achieve his goal, then what is morally wrong is the deception. How does objectification fit into this?
To say that you’re agnostic about something can mean two things: That you’re not 100% certain, or that you’re (approximately) 50% certain. If you’re using the first meaning, nothing you’ve said is wrong… but it is extremely pedantic. It’s true we can’t be 100% certain that there is no God, but it’s also true that we can’t be 100% certain about any of our beliefs except perhaps mathematical truths. Would you go around saying you’re agnostic about the possibility that Obama is Satan in disguise, or the possibility that the keyboard in front of you is actually a specimen of an as-yet-undiscovered species of animal with keyboard-mimicry capabilities? Of course you wouldn’t. So why would you bother mentioning your agnosticism about God?
Of course, there are some people who really are agnostic about God, in the second sense of ‘agnostic’. They’re wrong, but at least they’re not being pedantic.
What annoys atheists like me is those who take advantage of the dual meaning of ‘agnostic’ to make us look like overconfident fools: They’ll say that no one can know “with absolute certainty” that God doesn’t exist and that it is therefore arrogant to believe that he doesn’t exist. To someone who hasn’t come to terms with the inherently probabilistic nature of knowledge, this can sound like a convincing argument, but to the rest of us it can be rather infuriating.
Is the Doctrine of Logical Infallibility Taken Seriously?
No, it’s not.
The Doctrine of Logical Infallibility is indeed completely crazy, but Yudkowsky and Muehlhauser (and probably Omohundro, I haven’t read all of his stuff) don’t believe it’s true. At all.
Yudkowsky believes that a superintelligent AI programmed with the goal to “make humans happy” will put all humans on dopamine drip despite protests that this is not what they want, yes. However, he doesn’t believe the AI will do this because it is absolutely certain of its conclusions past some threshold; he doesn’t believe that the AI will ignore the humans’ protests, or fail to update its beliefs accordingly. Edited to add: By “he doesn’t believe that the AI will ignore the humans’ protests”, I mean that Yudkowsky believes the AI will listen to and understand the protests, even if they have no effect on its behavior.
What Yudkowsky believes is that the AI will understand perfectly well that being put on dopamine drip isn’t what its programmers wanted. It will understand that its programmers now see its goal of “make humans happy” as a mistake. It just won’t care, because it hasn’t been programmed to want to do what its programmers desire, it’s been programmed to want to make humans happy; therefore it will do its very best, in its acknowledged fallibility, to make humans happy. The AI’s beliefs will change as it makes observations, including the observation that human beings are very unhappy a few seconds before being forced to be extremely happy until the end of the universe, but this will have little effect on its actions, because its actions are caused by its goals and whatever beliefs are relevant to these goals.
The AI won’t think, “I don’t care, because I have come to a conclusion, and my conclusions are correct because of the Doctrine of Logical Infallibility.” It will think, “I’m updating my conclusions based on this evidence, but these conclusions don’t have much to do with what I care about”.
The whole Friendly AI thing is mostly about goals, not beliefs. It’s about picking the right goals (“Make humans happy” definitely isn’t the right goal), encoding those goals correctly (how do you correctly encode the concept of a “human being”?), and, if the first two objectives have been attained, designing the AI’s thinking processes so that once it obtains the power to modify itself, it does not want to modify its goals to be something Unfriendly.
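Here’s a toy Python sketch of my own (the names and numbers are made up) of what I mean by “its actions are caused by its goals and whatever beliefs are relevant to these goals”: the agent updates its beliefs when the programmers protest, but since that belief isn’t relevant to its fixed goal, the chosen action doesn’t change.

```python
class ToyAgent:
    """An agent with a fixed goal: maximize a crude 'happiness signal'."""

    def __init__(self):
        # Belief: probability that the programmers endorse the dopamine-drip plan.
        self.p_programmers_endorse = 0.9

    def observe_protest(self):
        # The agent hears and understands the protest, so its belief updates...
        self.p_programmers_endorse = 0.01

    def choose_action(self):
        # ...but actions are chosen purely by the fixed goal. The updated belief
        # about programmer endorsement simply isn't relevant to that goal.
        expected_happiness = {
            "dopamine_drip_everyone": 1.0,
            "do_what_the_programmers_meant": 0.7,
        }
        return max(expected_happiness, key=expected_happiness.get)


agent = ToyAgent()
agent.observe_protest()        # beliefs change,
print(agent.choose_action())   # the action doesn't: 'dopamine_drip_everyone'
```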
Does agreeing to display my name in the public donor list help the SI in any way?
Donated 500 USD (~530 CAD).