This post is obviously a good opportunity for humour and entertainment, but on a serious note, the strangest thing about this question is that I don’t think that an AI would be able to tell me anything stranger than I have already learned in the last 10 years of my life:
The fundamental laws of the universe (quantum mechanics and special relativity) violate your intuitions about objects being in a definite state and about time being an absolute background parameter, and the universe is so big that most people in the world simply cannot grasp its size (learned this around age 14-19)
The human mind, my own included, is subject to a huge array of biases (learned this around age 23-24)
I happen to have been born at what looks like a rather special time in the evolution of the human race, and I also happen to be smart enough to understand this fact when most other people don’t, which seems a priori ridiculously unlikely if my reference class is the set of all humans, or even of all humans in the same region of personality and intelligence space. This induces various bits of anthropic paranoia, such as “I am in fact a simulation designed to get to the bottom of human values by an AI”.
Unfriendly AI means that the activities of a tiny set of people could determine the future of this universe, which is too big for most people to even comprehend.
Most of the people on the planet believe in a personal God, in spite of the evidence against this, but human cognitive biases are so bad that this no longer surprises me. Some Cambridge mathematicians who I am fairly sure are at a 1-in-100,000 level of mathematical ability also believe in said personal God. This still surprises and shocks me.
I doubt that any person in any other part of space-time faces a reality as bizarre as this. Can anyone even think of an internally consistent fictional reality that is weirder than this?
I don’t think that an AI would be able to tell me anything stranger than I have already learned in the last 10 years of my life:
You know, as soon as I finished reading this sentence, and before reading anything else, the same cognitive template that produced the AI-Box Experiment immediately said, “I bet I can tell him something stranger, never mind an AI.”
Since the AI-Box experiment is shrouded in secrecy, I have to assign a significant probability that it is a simple hoax: that the people you “fooled” were in on it, that you used a technicality, or that there is a genuine effect which digital urban legend has blown out of all proportion.
However, I am intrigued. Will you email me and tell me this odd thing, if I promise to keep it secret?
My suspicion would be that it is related to
“I happen to have been born at what looks like a rather special time in the evolution of the human race, and I also happen to be smart and lucky enough to understand this fact when the vast majority of other people don’t, which seems a priori ridiculously unlikely if my reference class is the set of all humans, or even of all humans in the same region of personality and intelligence space. This induces various bits of anthropic paranoia, such as “I am in fact a simulation designed to get to the bottom of human values by an AI”.”
Or perhaps something bizarre involving conscious experience (the thing in the world that I am most thoroughly confused about), anthropics, and AGIs simulating me.
Did Eliezer have a specific thing in mind? I thought he meant that, like in the AI-Box experiment, he suspects a human could already do what it is being predicted a superintelligence could not, without yet knowing how.
I can have an intuition about the solvability of a problem without much clue about how to solve it, and definitely without a set of possible solutions in mind.
I happen to have been born at what looks like a rather special time in the evolution of the human race, and I also happen to be smart enough to understand this fact when most other people don’t, which seems a priori ridiculously unlikely if my reference class is the set of all humans, or even of all humans in the same region of personality and intelligence space. This induces various bits of anthropic paranoia, such as “I am in fact a simulation designed to get to the bottom of human values by an AI”.
versus
I have a tail
Perhaps my intuition for “strange” requires more high-grade strangeness than yours does, but I really don’t think that there’s much of a contest here. Having a tail wouldn’t disturb me anywhere near as much as the above does. Even being in denial about having a tail wouldn’t disturb me anywhere near as much as the above already does.
Perhaps I am optimizing for “disturbing” here. Sure, it would be strange if by some fluke a particular jellyfish had been involved in a crucial way in the evolution of human intelligence by stinging a monkey who went swimming in the sea and causing a particular brain structure change, or if Barack Obama were a closet furry, but it wouldn’t disturb me in the slightest.
It wouldn’t even surprise me if Barack Obama were a closet furry. But maybe I’m generalizing from one example.
Anyway, if you selected a random human out of all humans that have ever lived up to right now, what do you think is the probability that you would select a living one? I’d bet more than 1%.
It wouldn’t even surprise me if Barack Obama were a closet furry.
It would surprise me. I’m pretty sure closet furries are pretty rare. I just wouldn’t be more surprised than that about any other given person.
Anyway, if you selected a random human out of all humans that have ever lived up to right now, what do you think is the probability that you would select a living one?
From what I’ve read, estimates vary from 5% to 10%.
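That range can be sanity-checked with back-of-the-envelope arithmetic. The figures below are my own assumptions rather than numbers from the thread: demographers commonly estimate roughly 100-120 billion humans ever born, and the world population around the time of this exchange was about 6.8 billion.

```python
# Rough check of the "fraction of all humans ever born who are alive" estimate.
# Both inputs are assumed figures, not facts from this thread.
ever_born_low = 100e9   # low-end estimate of humans ever born
ever_born_high = 120e9  # high-end estimate of humans ever born
alive_2009 = 6.8e9      # approximate world population, circa 2009

low = alive_2009 / ever_born_high
high = alive_2009 / ever_born_low
print(f"fraction alive: {low:.1%} to {high:.1%}")
# prints: fraction alive: 5.7% to 6.8%
```

Under these assumptions the fraction lands in the 5-10% band quoted above, and comfortably clears the "more than 1%" bet.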
Well, if he didn’t have a specific thing in mind, he must have had a whole set of things in mind, so I urge him to pick one of them.
Yes, but this boils down to:
-- “I think I can tell you LOTS of things about reality that will freak you out.”
-- What, exactly?
-- I don’t know! I just have a strong intuition!
-- Well, I have a strong intuition that you can’t…
Maybe he has a mathematical model.
I think “you have a tail” is stranger.