Since you can vote for multiple charities, there’s no reason (apart from your personal feelings about the individual charities, of course) not to also vote for the following (the last two are GiveWell-rated charities):
Here’s my take on the situation, but I feel like I should disclaim that this is of course influenced by my own situation and personal biases.
Society tells us that we should love and respect our parents. I disagree. You should do that only to the extent that those people deserve that love and respect. If your parents are abusive or bad in some other way, they don’t deserve anything you’re not willing to give. Internalizing this is hard, because you’ve been trained in a variety of ways to believe otherwise. What might help is asking yourself: if a friend (or romantic partner) acted this way, would you still be seeing them? Your parents should be held to similar standards, I feel.
If you do want to give them more chances, or at least explain your position, I’d suggest doing it in writing. You can send them an email or give them a letter to read. You should probably schedule a conversation about that letter afterwards. (This can be immediately afterwards, if they agree to read the whole thing before commenting.) This has the advantage that they can’t interrupt you and you can think carefully about what you want to say. It lets you add explanations and disclaimers that would be harder to convey when speaking. It can also give them time to think about your message.
If you do confront them about their behavior and how they make you feel, and they’re unwilling to compromise and change their behavior in the future, then breaking contact should probably be considered.
The way your (admittedly one-sided) post reads to me, your parents are emotionally abusive. You should probably take care of your own mental health before worrying about their happiness.
Here’s a summary of the video, written while watching it.
Summary
A positive singularity is possible in about 10 years
Three ways towards singularity: AI, Nano-technology and computer-brain interfaces
The person we’re following (Ben Goertzel) works with machine consciousness
Doesn’t follow a materialistic outlook on consciousness. He sees consciousness as the ground on which things are formed, and different structures manifest consciousness in different ways. (For example, a human brain manifests consciousness in one way and a coin manifests it in another.)
Introduction of the second person we’re following (Hugo de Garis), who works with Goertzel on the Conscious Robot Project.
Apparently, there’s a province in China that pours great amounts of money into computer science and software.
de Garis is trying to tap into that money in order to kickstart machine intelligence and robotics in China and maybe the world.
de Garis sees China as the next big culture, while America is growing stagnant and “fat”.
Goertzel sees artificial scientists as the big step towards a singularity
20 minutes in and we see their robot. It falls over when told to go right.
Goertzel says building a “thinking machine” shouldn’t be harder than creating the Google search engine.
The mind as pattern-recognition engine
More robot. It walks a bit wobbly. They talk to the robot a little bit.
The robot does a couple of dance moves and kicks a ball.
Goertzel’s motivation to do this research is lessening suffering, removing death, removing limitations of the human body.
A brief explanation of how AI wouldn’t have to be a copy of a human mind.
Explanation of how a superhuman mind would lead to a rapid increase in technology and how humans won’t be in control anymore.
De Garis is more pessimistic than Goertzel. Foresees a great debate as the gap between human and machine intelligences closes.
De Garis sees the worst scenario as the most probable. He foresees a violent conflict between humans who want super-powerful AIs (he calls these AIs “artilects” and the humans who want them “cosmists”) and those who don’t want such a thing (whom he calls “terrans”). He says he’s glad he’ll be dead before this global war happens.
Goertzel thinks we can accurately predict the events leading up to a singularity, but not anything beyond that.
The reason Goertzel works in AI is because it feels natural to him.
The goal for Goertzel is going beyond the human condition.
Before Goertzel had kids, he wanted to be immortal, but now that he has kids, that drive has lessened.
White text on a black background tells us that de Garis’s lab has been dismantled and that the Chinese government is reconsidering the profitability of the home robot.
Goertzel continues working on robots.
After seeing the documentary, I’m a bit at a loss as to who the target audience is. For people who are new to the concept of AI or the singularity, this probably isn’t the best way to learn more, and for people who are already familiar with those concepts, it doesn’t offer a lot of new insights. I also don’t think it stands strongly enough as a human-interest piece.
The bits with the robot were kinda fun, but they didn’t provide a lot of information. The robot didn’t look like much more than a Cleverbot with legs, and since we’re not told anything about it, I don’t have a particular reason to assume it isn’t just that.
Secrets require cognitive effort. You need to keep track of what you can say to whom, and who told you what, when, and why. It often involves lying or omitting information, both of which require additional effort and care in your conversation. I also find that it messes up some interpersonal relationships and habits. I’m not prone to keeping secrets or withholding information from my partner, family or friends, so if friend X tells me not to tell something to friend Y, it forces me to act differently from what feels natural.
I feel this is a stupid question, but I’d rather ask it than not know: why would anyone want that? I can understand opposing things like democracy, secularism and multiculturalism, but replacing them with a traditional monarchy just doesn’t seem right. And I don’t mean morally; I just don’t see how it could create a working society.
I can fully understand opposing certain ideas, but if you’re against democracy because it doesn’t work, why go to a system of governance that has previously been shown not to work?
The lack of up- and down-voting and the limited threading kill its value for me, personally.
This certainly isn’t a safe option for everyone.
Alternative: liven up Less Wrong. I’m not sure how to do that, but it’s a possible solution to your problem.
I’ve debated with myself about writing a detailed reply, since I don’t want to come across as some brainwashed LW fanboi. Then I realized this was a stupid reason for not making a post. Just to clarify where I’m coming from:
I’m in more or less the same position as you are. The main difference is that I’ve read pretty much all of the Sequences (and am slowly rereading them) and I haven’t signed up for cryonics. Maybe those even out. I think we can say that our positions on the LW-to-non-LW scale are pretty similar.
And yet my experience has been almost the complete opposite of yours. I don’t like point-by-point responses on this sort of thing, but to properly respond and lay out my experiences, I’m going to have to do it.
Rationality doesn’t guarantee correctness.
I’m not going to spend much time on this one, seeing as how pretty much everyone else commented on this part of your post.
Some short points, though:
Given some data, rational thinking can get to the facts accurately, i.e. say what “is”. But deciding what to do in the real world requires non-rational value judgments to make any “should” statements. This is covered in a part of the Sequences you probably haven’t read. I generally advise “Three Worlds Collide” to people struggling with this distinction, but I haven’t gotten any feedback on how useful that is.
Rationality can help you make “should”-statements, if you know what your preferences are. It helps you optimize towards your preferences.
When making a trip by car, it’s not worth spending 25% of your time planning to shave off 5% of your time driving.
I believe the Sequences give the example that to be good at baseball, one shouldn’t calculate the trajectory of the ball; one should just use the intuitive “ball-catching” parts of the brain and train those. While overanalyzing things seems to be a bit of a hobby for the aspiring rationalist community, if you think its members are the sort of people who will spend 25% of their time planning to shave off 5% of their driving time, you’re simply wrong about who’s in that particular community.
LW tends to conflate rationality and intelligence.
This is actually a completely different issue. One worth addressing, but not as part of “rationality doesn’t guarantee correctness.”
In particular, AI risk is overstated
I’m not the best suited to answer this, and it mostly comes down to your estimate of that particular risk. As ChristianKl points out, a big chunk of this community doesn’t even think Unfriendly AGI is currently the biggest risk for humanity.
What I will say is that if AGI is possible (which I think it is), then UFAI is a risk. And since Friendliness is likely to be as hard as actually solving AGI, it’s good that groundwork is being laid before AGI becomes a reality. At least, that’s how I see it. I’d rather have some people working on that issue than none at all. Especially if the people working for MIRI are better suited to working on FAI than on another existential risk.
LW has a cult-like social structure
No more than any other community. Everything you say in that part could be applied to the time I got really into Magic: The Gathering.
I don’t think Less Wrong targets “socially awkward intellectuals” so much as it was founded by socially awkward intellectuals, and socially awkward intellectuals are more likely to find the presented material interesting.
However, involvement in LW pulls people away from non-LWers.
This has, in my case, not been true. My relationships with my close friends haven’t changed one bit because of Less Wrong or the surrounding community, nor have my other personal relationships. If anything, Less Wrong has made me more likely to meet new people or do things with people I don’t have a habit of doing things with. Less Wrong showed me that I needed a community to support myself (a need I didn’t consciously realize I had before) and HPMOR taught me a much-needed lesson about passing up on opportunities.
For the sake of honesty and completeness, I must say that I do very much enjoy the company of aspiring rationalists, both in meatspace at the meetups and in cyberspace (through various channels, mostly Reddit, Tumblr and Skype). Fact of the matter is, you can talk about different things with aspiring rationalists. The inferential distances are smaller on some subjects, just like how the inferential distances about the intricacies of Planeswalkers and magic are smaller with my Magic: The Gathering friends.
Many LWers are not very rational.
This is only sorta true. Humans in general aren’t very rational. Knowing this gets you part of the way. Reading Influence: Science and Practice or Thinking, Fast and Slow won’t turn you into a god, but they can help you notice some of the mistakes you’re making. And that still remains hard for all but the most dedicated aspiring rationalists. I keep using “aspiring rationalists” because I think that sums it up: the Less Wrong-sphere just strives to do better than default in both epistemic and instrumental rationality. I can’t think of anyone I’ve met (online or off-) who believes that “perfect rationality” is a goal mere humans can attain.
And it’s hard to measure degrees of rationality. Ideally, LWers should be more rational than average, but you can’t quite measure that, can you? My experience is that aspiring rationalists at least put in greater effort toward reaching their goals.
For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality
Rationality is a tool, not a goal. And the best interventions in my life have been shorter-term: getting more exercise, using HabitRPG, being aware of my preferences, Ask, Tell and Guess culture, Tsuyoku Naritai, spaced repetition software… those are the first things that come to mind that I use regularly and that actually improve my life and help me reach my goals.
And as anecdotal evidence: I once put it to the Skype group of rationalists that I converse with that every time I had no money, I felt like I was a bad rationalist, since I wasn’t “winning.” Not a single one blamed it on a Lack of Rationality.
Rationalists tend to have strong value judgments embedded in their opinions, and they don’t realize that these judgments are irrational.
If you want to understand that behavior, I encourage you to read the Sequences on morality. I could try to explain it, but I don’t think I can do it justice. I generally hate the “just read the Sequences”-advice, but here I think it’s applicable.
LW membership would make me worse off.
This is where I disagree the most. (Well, not with whether it would make you worse off; I won’t judge that.) Less Wrong has most definitely improved my life. The suggestion to use HabitRPG or LeechBlock, the stimulating conversations and board games I have at the meetup each month, the lessons I learned here that I could apply in my job, discovering my sexual orientation, having new friends, picking up a free concert, being able to comfort my girlfriend more effectively, being able to better figure out which things are true, doing more social things… Those are just the things I can think of off the top of my head at 3:30 AM that Less Wrong allowed me to do.
I don’t intend to convince you to become more active on Less Wrong. Hell, I’m not all that active on Less Wrong, but it has changed my life for the better in a way that a different community wouldn’t have done.
Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn’t actually create better outcomes for them.
It does, at least for me, and I seriously doubt that I’m the only one. I haven’t reached a successful career (yet; working on that), but my life is more successful in other areas thanks in part to Less Wrong. (And my limited career-related successes are, in part, attributable to Less Wrong.) I can’t quantify how much of this success can be attributed to LW, but that’s okay, I think. I’m reasonably certain that it played a significant part. If you have a way to measure this, I’ll measure it.
“Art of Rationality” is an oxymoron.
I like that phrase because it’s a reminder that (A) humans aren’t perfectly rational and require practice to become better rationalists and (B) rationality is a thing you need to do constantly. I like this SSC post as an explanation.
I don’t think it does, for meta-reasons. The opening quote is built up too much not to be perfectly fitting and clear. It’s also more narratively pleasing to have it return in the final chapter.
That question is kinda obvious. Thanks for pointing it out.
From what I remember from my history classes, monarchies worked pretty okay with an enlightened autocrat who made benefiting the state and the populace his or her prime goal. But the problem there was that such rulers didn’t stay in power, and they had no real way of making absolutely sure their children had the same values. All it takes to mess things up is one oldest son (or daughter, if you do away with the Salic law) who cares more about their own life than about the lives of the population.
So I don’t think technology level plays a decisive factor. It will probably improve things for the monarchy, since famines are a good way to start a revolution, but giving absolute power to people without a good fail-safe for when you’ve got a bad ruler seems like a good way to rot a system from the inside.
Slime mold can be used to map subway routes.
Edit: Markets can also be seen as a non-human optimizing actor, even if the smallest parts are human.
I’m not necessarily saying that democracy is the best thing ever. I just have issues jumping from “democracies aren’t really as good as you’re supposed to believe” to “and therefore a monarchy is better.”
In a conversation on Tumblr it recently came up that learning about the Sunk Cost Fallacy and doing a couple of exercises on it did not prevent people from committing it. Similarly, in Thinking, Fast and Slow, Daniel Kahneman describes students not adjusting their beliefs about humans after learning about the Bystander Effect.
Learning about biases obviously isn’t enough, but are there known tricks for better dealing with them after learning about a specific bias?
Everything is math, but that doesn’t mean that the word “biology” isn’t useful. Even if healthcare isn’t a perfect word or even a perfect concept, it helps us in everyday conversations and discussions about the way the world works and should work.
People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I’m seeing)
I don’t think it’s this. Belgium doesn’t use letter-grading and still succumbs to the problem you mentioned in areas outside the classroom.
I wouldn’t say Less Wrong needs a single leader, but in general good communities tend to have figures that can serve as “pillars of the community.” They tend to help group cohesion and provide good content. They can also serve the role of tutor for new people or by mapping out the direction a community can/should go in.
Answer the question the interviewer means, not the question as you’d break it down on Less Wrong. Or more broadly: adapt your communication to the intended argument and goal.
In this particular example, you should know the values of the company before you end up at the interview, so the answer should be: yes, followed by one or two examples showing that your values match those of the company.
(copy-pasted from my tumblr)
The ending to HPMOR isn’t bad. It fits the story and, while open-ended, still gives a lot of closure.
It just doesn’t measure up to, like, the rest of the book. Part of it is probably the hype; the final chapters fell a bit flat in comparison to what people expected. But even correcting for that, I still find it slightly disappointing. The best parts, for me, were the buildup to the “there is light in the world” speech and the Stanford Prison Experiment arc. They are both intense emotional moments. I literally cried while listening to the podcast version of Azkaban.
The other great parts are the cool, big action sequences.
The ending provides none of those. And yet it sorta promises them without ever delivering.
Here was the original thread proposing this as a solution to the prophecy, and here is the comment by Eliezer Yudkowsky confirming that he was influenced by that thread.