Don’t let EY chill your free speech—this is supposed to be a community blog devoted to rationality… not a SIAI blog where comments are deleted whenever convenient.
You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn’t necessarily mean it’s incorrect; if the correct decision conflicts with freedom of speech, or has you kill a thousand children (the estimate of its correctness must of course take this consequence into account), it’s still correct and should be taken.
(There is only one proper criterion for anyone’s actions, the goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn’t yield the correct decision.)
(This is a note about a problem in your argument, not an argument for correctness of EY’s decision. My argument for correctness of EY’s decision is here and here.)
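The expected-consequences criterion above (pick the action whose probability-weighted outcomes are best, even when it violates a normally useful heuristic) can be sketched as a toy calculation. The actions, probabilities, and utilities below are made-up illustrative values, not anything from this discussion.

```python
# Toy expected-consequences comparison: choose the action with the
# highest probability-weighted utility, regardless of whether it
# violates a normally useful heuristic.
# All numbers are made-up illustrative values.

actions = {
    # action: list of (probability, utility) outcome pairs
    "follow_heuristic": [(0.9, 10), (0.1, -50)],
    "violate_heuristic": [(0.9, 5), (0.1, 40)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of outcome utilities."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))
print("best:", best)
```

In this toy setup the heuristic-violating action wins on expected utility, which is the structure of the argument: the heuristic is a proxy, not the criterion.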
This is possible but by no means assured. It is also possible that he simply didn’t choose to write a full evaluation of consequences in this particular comment.
What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.
Upvoted. This just helped me get unstuck on a problem I’ve been procrastinating on.
whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.
Sounds like a good argument for the WikiLeaks dilemma (which is of course confused by the possibility that the government is lying their asses off about potential harm).
The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It’s difficult (for me) to estimate whether it’s so.
I suspect it’s also difficult for Julian (or pretty much anybody) to estimate these things; I guess intelligent people will just have to make best guesses about this type of stuff. In this specific case a rationalist would be very cautious of “having an agenda”, as there is significant opportunity to do harm either way.
(There is only one proper criterion for anyone’s actions, the goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn’t yield the correct decision.)
Shouldn’t AI researchers precommit to not building AI capable of this kind of acausal self-creation? That would lower the chances of disaster both causally and acausally.
And please, explain how you tell moral heuristics and moral values apart. E.g., which is “don’t change the moral values of humans by wireheading”?
We’re basically talking about a logical illusion… an AI Ontological Argument… with all the flaws of an ontological argument (such as proving nothing)… that was foolishly censored, leading to a lot of bad press, hurt feelings, lost donations, and a general increase in existential risk.
From, as you call it, a purely correctness-optimizing perspective, it’s long-term bad having silly, irrational stuff like this associated with LW. I think that EY should apologize, and that we should get an explicit moderation policy for LW, but in the meantime I’ll just undo any existential-risk savings hoped to be gained from censorship.
In other words, this is less about Free Speech than it is about Dumb Censors :p
It’s long-term bad having silly, irrational stuff like this associated with LW.
Whether it’s irrational is one of the questions we are discussing in this thread, so it’s bad conduct to use your answer as a premise in your argument. I of course agree that it appears silly and irrational and absurd, and that associating that with LW and SIAI is in itself a bad idea, but I don’t believe it’s actually irrational, and I don’t believe you’ve seriously considered that question.
We’re basically talking about a logical illusion… an AI Ontological Argument… with all the flaws of an ontological argument (such as proving nothing)…
In other words, you don’t understand the argument and are not moved by it, so your estimate of the improbability of the outrageous prediction stays the same. The only proper way to argue past this point is to discuss the subject matter; everything else would be sophistry that applies equally to the predictions of astrology.
Very much agree btw