If you have worked your way through most of the Sequences, you are likely to agree with the majority of these statements.
I realize this is not the main point of the post, but this statement made me curious: what fraction of Less Wrong readers become convinced of these less mainstream beliefs?
To this end I made a Google survey! If you have some spare time, please fill it out. (Obviously, we should overlook the deliberately provocative phrasing when answering).
I’ll come back two weeks from now and post a new comment with the results.
Here are the crackpot belief survey results. All in all, 77 people responded. It seems we do drink the Kool-Aid! Of the substantive questions, the most contentious were "many clones" and timeless physics, and even they got over 50%. Thanks to everyone who responded!
I want people to cut off my head when I’m medically dead, so my head can be preserved and I can come back to life in the (far far) future. Agree 73%
Disagree 27%
It is possible to run a person on Conway's Game of Life. This would be a person as real as you or me, and wouldn't be able to tell he's in a virtual world because it looks exactly like ours. Agree 90%
Disagree 10%
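The Game of Life update rule the statement leans on fits in a few lines; a minimal sketch (my own toy implementation, not anything from the survey) shows how simple the substrate is, which is part of what makes the claim striking — the automaton is known to be Turing-complete despite rules this small:

```python
from collections import Counter

def life_step(live_cells):
    """Advance a set of live (x, y) cells by one Game of Life generation."""
    # Count how many live neighbours each cell (live or dead) has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        # A live cell survives with 2 or 3 neighbours;
        # a dead cell is born with exactly 3.
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" (three cells in a row) oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Running a person would of course need an astronomically larger pattern, but nothing beyond these two rules.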
Right now there exist many copies/clones of you, some of which are blissfully happy and some of which are being tortured and we should not care about this at all. Agree 53%
Disagree 47%
Most scientists disagree with this but that’s just because it sounds counter-intuitive and scientists are biased against counterintuitive explanations. Agree 32%
Disagree 68%
Besides, the scientific method is wrong because it is in conflict with probability theory. Agree 23%
Disagree 77%
Oh, and probability is created by humans, it doesn’t exist in the universe. Agree 77%
Disagree 23%
Every fraction of a second you split into thousands of copies of yourself. Agree 74%
Disagree 26%
Of course you cannot detect these copies scientifically, but that's because science is wrong and stupid. Agree 7%
Disagree 93%
In fact, it’s not just people that split but the entire universe splits over and over. Agree 77%
Disagree 23%
Time isn’t real. There is no flow of time from 0 to now. All your future and past selves just exist. Agree 53%
Disagree 47%
Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that’s smarter than any human. When this happens humanity will probably be wiped out. Agree 68%
Disagree 32%
To protect us against computers destroying humanity we must create a super-powerful computer that won’t destroy humanity. Agree 70%
Disagree 30%
Ethics are very important and we must take extreme caution to make sure we do the right thing. Agree 82%
Disagree 18%
Also, we sometimes prefer torture to dust specks. Agree 69%
Disagree 31%
If everything goes to plan a super computer will solve all problems (disease, famine, aging) and turn us into super humans who can then go on to explore the galaxy and have fun. Agree 79%
Disagree 21%
The truth of all these statements is completely obvious to those who take the time to study the underlying arguments. People who disagree are just dumb, irrational, miseducated, or a combination thereof. Agree 27%
Disagree 73%
I learned this all from this website by these guys who want us to give them our money. Agree 66%
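The summary claim — that of the substantive questions the most contentious were "many clones" and timeless physics, and even those cleared 50% — can be checked directly against the posted percentages. The short labels below are my own paraphrases of the survey statements:

```python
# Agree percentages from the results above, keyed by a short paraphrase.
agree = {
    "cryonics": 73,
    "person on Game of Life": 90,
    "many clones": 53,
    "scientists biased": 32,
    "scientific method wrong": 23,
    "probability is in the mind": 77,
    "you split into copies": 74,
    "science wrong and stupid": 7,
    "whole universe splits": 77,
    "timeless physics": 53,
    "AI probably wipes us out": 68,
    "build a safe supercomputer": 70,
    "ethics, extreme caution": 82,
    "torture over dust specks": 69,
    "supercomputer utopia": 79,
    "disagreers are dumb": 27,
    "give them our money": 66,
}

# The deliberately provocative filler items, excluded as non-substantive.
provocative = {"scientists biased", "scientific method wrong",
               "science wrong and stupid", "disagreers are dumb"}
substantive = {q: p for q, p in agree.items() if q not in provocative}

# The two lowest-agreement substantive items.
contentious = sorted(substantive.items(), key=lambda kv: kv[1])[:2]
print(contentious)  # [('many clones', 53), ('timeless physics', 53)]

# Every substantive item clears 50%.
assert all(p > 50 for p in substantive.values())
```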
I want to fill it out, I really do, but the double statements make me hesitate.
For example I do believe that there are ~lots of “clones of me” around, but I disagree that we shouldn’t care about this. It has significant meaning when you’re an average utilitarian, or something approaching one.
Most of the questions seem to be loaded or ambiguous in some way.
For example, this one implies intelligence is simply a hardware problem:
Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that’s smarter than any human. When this happens humanity will probably be wiped out.
Well, to some extent, that’s true. If a malicious god gave us a computer with infinite or nigh-infinite computing power, we could probably have AIXI up and running within a few days. Similar comments apply to brain emulation—things like the Blue Brain project indicate our scanning ability, poor as it may seem, is still way beyond our ability to run the scanned neurons.
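A toy illustration of the point about AIXI and compute: AIXI weights every computable environment q by 2^-length(q) and picks the action with the highest prior-weighted expected reward. The sketch below does that over a hand-picked list of hypothetical environments (the names and numbers are mine, purely for illustration); real AIXI enumerates all programs, which is exactly what requires the malicious god's computer:

```python
# Each hypothetical environment: (program length in bits, reward per action).
# In real AIXI these would be all programs for a universal Turing machine.
environments = [
    (3, {"left": 1.0, "right": 0.0}),
    (5, {"left": 0.0, "right": 1.0}),
    (9, {"left": 0.0, "right": 1.0}),
]

def aixi_like_choice(envs, actions=("left", "right")):
    """Pick the action maximizing sum over envs of 2^-length * reward."""
    def expected_reward(action):
        return sum(2.0 ** -length * rewards[action]
                   for length, rewards in envs)
    return max(actions, key=expected_reward)

# The shortest environment dominates the 2^-length mixture, so "left"
# wins even though most environments in the list reward "right".
print(aixi_like_choice(environments))  # prints "left"
```

The decision rule itself is a one-liner; everything hard about AIXI is the uncomputably large hypothesis space it is supposed to range over.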
Even if you don’t interpret ‘hardware problem’ quite that generously, you still have an argument for hard takeoff—this is the ‘hardware overhang’ argument: if you prefer to argue that software is the bottleneck, then you have the problem that when we finally blunder into a working AI, it will be running on hardware far beyond what was needed for an intelligently-written AI.
So you’re faced with a bit of a dilemma. Either hardware is the limit, in which case Moore’s law means you expect an AI soon, quickly surpassing human level with a few more cranks of the law; or you expect an AI much further out, but when it comes it’ll improve even faster than the other kind would.
I think this survey is a really good illustration of why degrees of belief are so helpful.