If Kevin doesn’t go through with taking that bet for $202, I’ll take it for $101.
I disliked the moral philosophy question. I felt comfortable putting down “consequentialist,” but I can see how someone might feel none of the answers suited them well. I would have made the fourth option simply “other,” and maybe added a moral realism vs. anti-realism question.
See the PhilPapers survey. On the normative ethics question, “other” beat out the three “standard” moral philosophies, and there’s no indication that everyone in that category is a moral anti-realist.
Also, for the Newton question:
My answer: friragrra bu svir
Correct answer: fvkgrra rvtugl frira
Now I feel dumb for putting such a high confidence in my answer. Should I feel dumb?
I guess if I had thought about it more, I would have realized that my confidence that my 30-year range was not too low exceeded my confidence that it was not too high, and adjusted my answer downward a few years accordingly.
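To make that concrete, here’s a toy version of the adjustment I have in mind, in Python; the tail probabilities are invented for illustration (my actual answer stays rot13’d above):

    def adjusted_guess(guess, range_width, p_range_too_high, p_range_too_low):
        """Shift a point estimate toward the tail the range is more likely to miss."""
        skew = p_range_too_high - p_range_too_low
        return guess - skew * range_width / 2

    # With a dummy guess of 1700 and a 30-year range: if I think the range
    # is more likely too high (say 0.20) than too low (say 0.05), the
    # answer shifts down by about two years:
    print(adjusted_guess(1700, 30, 0.20, 0.05) - 1700)  # -2.25

Nothing deep here; it just makes explicit that an asymmetry between the two tail probabilities should pull the point estimate toward the more likely tail.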
What does the Executive Director of the Singularity Institute do?
Is Siri going to kill us all?
Okay, I’m joking, but recent advances in AI—Siri, Watson, Google’s self-driving car—make me think the day when machines surpass humans in intelligence is coming a lot sooner than I previously thought. What implications does this have for the Singularity Institute’s project?
“Watch out for when you are sacrificing epistemology for instrumental gains. If there is ever a time where you want to have certain beliefs because it is more convenient and you are trying to talk yourself into them, that is a giant red flag.”
Bingo. At times, I’ve thought something like this is the secret to being rational.
This is an excellent post.
I’ll toss in another example: volunteering vs. donating to charity. People like the idea of volunteering, even when they could do more good by working longer hours and donating the money to charity.
When I first entered college, I had the idea that I’d go to med school and then join Doctors Without Borders. Do a lot of good in the world, right? The problem was that, while I’m good at a lot of things, biology is not my strong suit, so I found that part of the pre-med requirements frustrating. I ended up giving up and going to grad school in philosophy.
To maximize my do-gooding, I would have been better off majoring in Computer Science or Engineering (I’m really, really good at math), and committing to giving some percentage of my future earnings at a high-paying tech job to charity. Alas...
Now whenever I meet someone who tells me they want to go into a do-gooding career, I tell them they’d be better off becoming lawyers so they can donate lots of money to charity. They never like this advice.
Question: do you have any advice for people who want “to do something about Singularity” but are afraid of falling into the trap you describe?
Yeah. What are the MD specialties that make all the money? Radiology, Oncology...
It’s worth noting that “Humanity” ≠ “Human-like (or better) intelligences that largely share our values” ≠ “Civilization.” This gives us three different kinds of existential risk.
Robin Hanson, as I understand him, seems to expect that only the third will survive, and seems to be okay with that. Many Less Wrongers, on the other hand, seem not so concerned with humanity per se, but would care about the survival of human-like intelligences sharing our values. And someone could care an awful lot about humanity per se, and want to put a lot of effort into making sure humans aren’t largely replaced by AIs of any kind.
I’m not a huge reader of blog comment threads, so it’s possible these debates have been done to death in comments and I’m not aware of it, but it would be nice to see some OPs on this issue.
Long after first seeing this post, I decided to go back and upvote this and related lukeprog articles. The reason is that I’ve started reading Luke’s draft paper The Singularity and Machine Ethics, and I’m sufficiently impressed that I now think Luke may have figured out how to do philosophy correctly. I now encourage everyone to take what he says about philosophy, and scholarship in general, extra-seriously.
Didn’t the IQ section say to only report a score if you’ve got an official one? The percentage of people not answering that question should have been pretty high, if they followed that instruction. How many people actually answered it?
Also: I’ve already pointed out that the morality question was flawed, but after thinking about it more, I’ve realized how badly flawed it was. Simply put, people shouldn’t have had to choose between consequentialism and moral anti-realism, because there are a number of prominent living philosophers who combine the two.
J.J.C. Smart is an especially clear example, but there are others. Joshua Greene’s PhD thesis was mainly a defense of moral anti-realism, but also had a section titled “Hurrah for Utilitarianism!” Peter Singer is a bit fuzzy on meta-ethics, but has flirted with some kind of anti-realism.
And other moral anti-realists take positions on ethical questions without being consequentialists; see, e.g., J.L. Mackie’s Ethics: Inventing Right and Wrong. Really, I have to stop myself from giving examples now, because they can be multiplied endlessly.
So again: normative ethics and meta-ethics are different issues, and should be treated as such on the next survey.
As per my previous comments on this, separate out normative ethics and meta-ethics.
And maybe be extra-clear on not answering the IQ question unless you have official results? Or is that a lost cause?
In that case: use the exact ethics questions from the PhilPapers Survey (http://philpapers.org/surveys/), probably minus the lean/accept distinction and the endless drop-down menu for “other.”
For IQ: maybe you could nudge people to greater honesty by splitting up the question: (1) have you ever taken an IQ test with [whatever features were specified on this year’s survey], yes or no? (2) if yes, what was your score?
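To make both suggestions concrete, here’s a rough sketch of the question structure I have in mind, in Python; the wording, field names, and option lists are my paraphrase, not the exact PhilPapers text:

    survey = [
        {"id": "metaethics",
         "text": "Meta-ethics: what is your view?",
         "options": ["moral realism", "moral anti-realism", "other"]},
        {"id": "normative_ethics",
         "text": "Normative ethics: what is your view?",
         "options": ["consequentialism", "deontology", "virtue ethics", "other"]},
        {"id": "iq_tested",
         "text": "Have you ever taken an IQ test meeting the stated criteria?",
         "options": ["yes", "no"]},
        {"id": "iq_score",
         "text": "If yes, what was your score?",
         "show_if": ("iq_tested", "yes")},  # only displayed after an explicit yes
    ]

The point of the split is that a consequentialist anti-realist can now answer both ethics questions honestly, and someone without an official IQ score has to explicitly say “no” before a score box even appears.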
Is anyone interested in betting against me?
Tell me more about how the bet would be set up, and I might be interested.
I mean, you might try to set up a rational business firm to dislodge the incumbents in some existing industry. But the failure of such an upstart wouldn’t prove that rational firms are not ceteris paribus better, only that their advantages are not greater than the advantages of being an incumbent.
Methods of Rationality updates—will there be any?
Yes. But not enough to make it feel like the story is moving towards a conclusion.
medical advances
They will be incremental, enough so that very, very few people will notice any practical impact on their lives. Some advances, however, will be accompanied by a great deal of hype. Evidence will emerge that some treatments which we think work now do not in fact work.
signing up for cryonics?
I will continue to believe cryonics is something I would sign up for if I had money to spare, but will also continue to feel too poor to have money to spare.
the future of AGI
This isn’t really about AGI, but I expect to be amazed by at least one cool new application of computing that vaguely resembles human intelligence.
For anyone who wants to make predictions they don’t want to make public: I just sent a letter to my future self on FutureMe.org, and you might want to consider doing the same.
This post seems too vague to be useful.
I just got done re-reading Steven Pinker’s book How the Mind Works, and seeing the phrase “largely circumstantial” in this post reminded me of Pinker’s discussion of the so-called “nature-nurture issue.” He points out that it’s absurd to think that because nature is important, nurture doesn’t matter, but he compares the statement “nature and nurture are both important” to statements like, “The behavior of a computer comes from a complex interaction between the processor and the input,” which is “true but useless.”
I feel the same way about statements like “more is possible.” I understand the desire to be inspirational, but my brain is objecting too much. How much more is possible? Under what circumstances? etc.
I may have seen this before, but it just hit me how useful it is. It needs to get a prominent link somewhere, or added to the wiki, or something.
I missed the last LW meetup for Madison, WI (my city). Does anyone know how it went? Is there interest in having another one? I’d certainly like that.