I’m pretty split on this. I found the quotes from Ben Todd and Robert Wiblin to be quite harmless, but the quotes from Jacy Reese to be quite bad. I don’t think it’s possible to judge the scope of the problem discussed here based on the post alone. In either case, I think the effort to hold EA to high standards is good.
siIver
I would like downvotes and upvotes to be shown separately rather than netted against each other, and for votes not to be anonymous. I also endorse restricting downvotes.
First question: I know you admire Trump’s persuasion skills, but what I want to know is why you think he’s a good person/president etc.
Answer: [talks about Trump’s persuasion skills]
Yeah, okay.
Link is missing!
The true degree of our emotional disconnect
Changes in AI Safety Funding
Should probably have been posted in the open thread (not meant as a reproach)
I feel like I am repeating myself. Here is the chain of arguments:
1) A normal person seeing this article and its upvote count will walk away having a very negative view of LessWrong (reasons in my original reply)
2) Making the valid points of this article is in no way dependent on the negative consequences of 1). You could do the same (in fact, a better job at the same) without offending anyone.
3) LessWrong can be a gateway for people to care about existential risk and AI safety.
4) AI safety is arguably the biggest problem in the world right now and extremely low efforts go into solving it, globally speaking.
5) Due to 4), getting people to care about AI safety is extremely important. Due to that and 3), harming the reputation of LessWrong is really bad.
6) Therefore, this article is awful, harmful, and should be resented by everyone.
The claim that money buys elections, in its correct form, is not absolute but statistical. Moreover, two things that create exceptions are a) name recognition and b) free coverage. This election had both. Presidential elections in general probably follow this rule least closely.
This is the ultimate example of… there should be a name for this.
You figure out that something is true, like utilitarianism. Then you find a result that seems counterintuitive. Rather than going “huh, I guess my intuition was wrong, interesting,” you go “LET ME FIX THAT” and change the system so that it does what you want...
man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.
The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, but not necessarily for each particular situation. Having a system tells us what is correct in each situation, not vice versa.
The utility monster is nothing to be fixed. It’s a natural consequence of doing the right thing, that just happens to make some people uncomfortable. It’s hardly the only uncomfortable consequence of utilitarianism, either.
I’m afraid you misunderstand the difference between the Smoking Lesion and Newcomb’s problem. In the Smoking Lesion, if you are the kind of person affected by the lesion that causes both lung cancer and the desire to smoke, and you resist this desire, you still die of cancer. Your example is instead Newcomb’s problem with an infallible forecaster, where if you don’t smoke, you don’t die of cancer. That is an inherent difference; they are not the same problem.
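To make the causal difference concrete, here is a toy Monte Carlo sketch. All probabilities are made-up illustration values (not from either thought experiment’s canonical statement), and the function names are mine:

```python
import random

def smoking_lesion_world(abstain, n=100_000, seed=0):
    """Smoking Lesion: a hidden lesion causes BOTH the desire to smoke
    and cancer. The agent's choice has no causal path to cancer, so the
    `abstain` argument is deliberately unused."""
    rng = random.Random(seed)
    cancer = sum(
        1 for _ in range(n)
        if rng.random() < 0.5      # lesion present (illustrative 50%)
        and rng.random() < 0.9     # lesion leads to cancer (illustrative 90%)
    )
    return cancer / n

def infallible_forecaster_world(abstain):
    """Newcomb-like variant: an infallible forecaster arranges that
    cancer occurs exactly when the agent actually smokes, so the
    choice fully determines the outcome."""
    return 0.0 if abstain else 1.0

# Resisting the desire changes nothing in the lesion world...
print(smoking_lesion_world(abstain=True))   # ~0.45 (= 0.5 * 0.9)
print(smoking_lesion_world(abstain=False))  # same ~0.45
# ...but guarantees survival in the forecaster world.
print(infallible_forecaster_world(abstain=True))   # 0.0
```

The point the sketch makes: intervening on the choice leaves the cancer rate unchanged in the lesion world, while in the forecaster world the choice alone fixes the outcome.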
I’m pretty happy with this article… though one of my concerns is that the section on how exactly AI could wipe out humanity was a bit short. It wants to cure cancer, it kills all humans, okay, but a reader might just think “well this is easy, tell it not to harm humans.” I’d have liked if the article had at least hinted at why the problem is more difficult.
Still, all in all, this could have been much worse.
One thing to keep in mind is that, just because something already exists somewhere on earth, doesn’t make it useless on LW. The thing that – in theory – makes this site valuable in my experience is the guarantee that content is high quality if it is received well. Sure, I could study for years and read the equivalent of the Sequences scattered across various fields, but I can’t read it all in one place without anything wasting my time in between.
So I don’t think “this has already been figured out in book XX” implies that it isn’t worth reading. Because I won’t go out to read book XX, but I might read this post.
Hm, thanks. It seems like I was misinformed about ads – I had the belief that they increase sales almost all of the time, which, based on what you said and a quick search, appears to have been totally false. With that and the ‘largely’ I missed, I’d now say the test was mostly correct.
Throwing around the “religion” label seems to be committing the noncentral fallacy.
The answer to your question depends on what exactly it is that you’re asking. Do I believe most of the sequence posts are correct? Yes. Do I believe it is useful to treat them as standards? Yeah. Do I think you aren’t allowed to criticize them? No, by all means, if you have issues with their content, we can discuss that (I have criticized them once). But I think you should point out specific things that aren’t accurate about the sequence posts, rather than rejecting them for the sake of it.
This may be a naive and over-simplified stance, so educate me if I’m being ignorant--
but isn’t promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn’t we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.
Feel free to just provide a link if the argument has been discussed before.
I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.
I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I’d say it’s harmful because it’s overvalued/misunderstood. The solution would be to recognize that it’s an egoistic thing – as I’m writing this, I can confirm that I think this now – whereas cognitive empathy is the selfless thing.
Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regard that I wasn’t able to consciously criticize it.
I think this article is something that people outside of this community really ought to read.
I read the first post, which is excellent. Thanks for sharing.
No, I fully acknowledge that the post tries to do those things – see the second half of my reply. I argue that it fails at doing so and is harmful to our reputation, etc.
At the risk of ironically not addressing your actual argument here, I’ll point out that flaws of LW, valid or otherwise, aren’t flaws of rationality. Rationality just means avoiding biases/fallacies; failure can only be in the community.