17 years old, I’m interested in AI alignment, rationality & philosophy, economics and politics.
Crazy philosopher
If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability.
...
Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don’t win the lottery, then just wake up automatically.
No. You won’t see yourself winning the lottery when you wake up.
There is the real you. You may create copies of yourself, but they are still just copies.
Let’s suppose Eliezer starts this experiment: the universe splits into 10,000,000 copies, and in one of them he wins and creates a trillion copies of himself.
So, there are 10,000,000 actual Eliezers — most of whom are in universes where he didn’t win — but there are also a huge number of copies of the winning Eliezer.
Or, if Eliezer’s consciousness continues only into one future universe and doesn’t somehow split 10,000,000 ways, then in most of the universes where his consciousness could go, he didn’t win the lottery.
Since your clones are not you, and you don’t feel what your clones feel, I don’t think the number of clones created really matters.
You might think that there could be many versions of you sharing your consciousness — that they are all “you” — but consciousness is a result of physical processes. So I don’t think it can teleport from one dimension to another. Therefore, since most of the real/original Eliezers exist in universes where he lost, he would wake up to find that he lost.
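To make the arithmetic explicit, here’s a minimal sketch of the two ways of counting, using the numbers above (10,000,000 branches, one winning branch, a trillion copies woken up in it). The view I’m defending is the second one:

```python
# Illustrative numbers taken from the thought experiment above.
branches = 10_000_000        # universes the experiment splits into
winning_branches = 1         # branches in which Eliezer wins the lottery
copies_per_winner = 10**12   # copies woken up in the winning branch

# View 1: every copy counts as "you", so anticipation follows the copy count.
winners = winning_branches * copies_per_winner
losers = branches - winning_branches
p_win_counting_copies = winners / (winners + losers)

# View 2: only the original in each branch counts, so the copies change nothing.
p_win_originals_only = winning_branches / branches

print(f"counting copies:  {p_win_counting_copies:.7f}")   # ~0.9999900
print(f"originals only:   {p_win_originals_only:.7f}")    # 0.0000001
```

On the second way of counting, making a trillion copies doesn’t move the probability at all, which is exactly my point.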
I would agree with you if I were really sure about every link in the argument, but I can never be really sure, because of epistemic humility, so I wouldn’t carry out such a weird plan.
You should sign up for cryonics. We all should, actually.
I implemented this two days ago, and I’m already seeing incredible results in some areas! Not much progress in productivity, since I had already tried most of these ideas before, but I did experiment with buying different products for my lunch, found something better, and also made progress on something that takes a lot of context to explain.
(This comment doesn’t share any new important information about the technique, but it’s still important to write comments like this to support the authors. It’s hard to keep creating when all you hear is criticism.
I think of comments like this as a kind of reward for good behavior, in the behaviorist sense.)
I’m curious about useful topics and incurious about useless ones. I find that better than your proposal.
So, once a community reaches 240 members, it will have an incentive not to grow any further, to avoid the drama and the loss of efficiency that come with scale? How would you prevent that?
Excellent point!
I don’t mean that the probability is always 50⁄50. But it’s not 100% either.
In Europe, the smartest people for centuries believed in god, and they saw endless confirmations of that belief. And then—bam! It turned out they were simply all wrong.
Or take any case of ancient medicine. European doctors believed for centuries that bloodletting cured everything, while Chinese doctors believed that eating lead prolonged life.
There are also other examples where all the experts were wrong: geocentrism, the ether theory, the idea that mice spontaneously generate in dirty laundry, the miasma theory of disease… In all these cases it was either about cognitive biases (god, medicine) or about lack of information or broken public discussion (geocentrism).
Today we fight biases much better than a thousand years ago, but we’re still far from perfect.
And we still sometimes operate under very limited information.
I think one should have fundamental rational habits that protect against being so sure about god or bloodletting. That’s why, from any conclusion I reach, I subtract a few percentage points of confidence. The more complex the conclusion, and the more speculative or bias-prone my reasoning, the more I subtract.
If you claim that my way of fighting this overconfidence shouldn’t be used, I’d want you to suggest something else instead. Because you can’t just leave it as it is—otherwise one might assign 99% confidence to some nonsense.
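To show what I mean with toy numbers (the per-step penalty and the examples below are invented purely for illustration, not a calibrated rule):

```python
# Toy version of the "subtract a few percentage points" habit described above.
# The penalty size is invented purely for illustration.
def humble_confidence(raw_confidence: float, speculative_steps: int,
                      penalty_per_step: float = 0.02) -> float:
    """Discount a raw confidence by a small penalty for each speculative step."""
    return max(0.0, raw_confidence - penalty_per_step * speculative_steps)

# A short, simple chain of reasoning loses little; a long speculative one loses more.
print(f"{humble_confidence(0.99, speculative_steps=1):.2f}")   # 0.97
print(f"{humble_confidence(0.99, speculative_steps=10):.2f}")  # 0.79
```

The exact numbers don’t matter; the point is that the discount grows with how complex and bias-prone the reasoning is.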
Interesting model. You’re probably right, and I hadn’t considered this because my friends and I are not idiots.
Good discussions take a lot of time, so in practice people mostly can’t have them. Because of that, even if 90% of people believe very wrong things, the other 10% can never convince them. So on any given question you may be one of those 90%, and the others can’t explain to you that you are wrong, so you shouldn’t be so confident in your own reasoning.
So if you know that a few believers found 20 atheists who were ready to discuss at length, and as a result 5 of them got bored and left the discussion after 5 hours, but the other 15 were convinced, that should count as extremely strong evidence for god’s existence.
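In Bayesian terms, how strong that evidence is depends on how surprising the outcome would be under each hypothesis. A minimal sketch, with likelihood numbers I’m inventing purely for illustration:

```python
# Toy Bayesian update for the hypothetical debate outcome above.
# All the probabilities below are invented purely for illustration.
def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis H given evidence E."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

prior_god = 0.01              # start out very skeptical
p_outcome_if_god = 0.5        # 15 of 20 atheists convinced is plausible if the arguments are sound
p_outcome_if_no_god = 0.001   # and very surprising if they are not

print(f"{posterior(prior_god, p_outcome_if_god, p_outcome_if_no_god):.3f}")  # ~0.835
```

With likelihoods that lopsided, even a very small prior gets moved a lot, which is why such a result would have to be taken seriously.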
Thank you for your comment; I’ve changed this part so it’s cleaner now.
Unfortunately, we really can’t convince all creationists. We only have time for a few. However, if you pick some and manage to persuade all those who actually have the time for a discussion, at the very least it would give you personally the confidence that you’re right. And if you document it, that would give the same confidence to everyone else. Moreover, if your experiment turned out to be clear-cut enough, it would become a very strong argument to convince believers in god. If I wholeheartedly believed in something, and then found out that someone took 10 people who believed in the exact same thing I do and managed to change their minds, I’d assume he could probably convince me too—so why not save myself the time and just accept right away that I was wrong about this?
I agree that any discussion of god-related topics might take several times longer, since you’d have to go into cognitive biases. You’d probably need to explain Bayesianism—or even argue for it—before you could move on. In the worst case, you’d have to drop them the Sequences Highlights. Okay, they won’t read it, because it’s hundreds of pages long, and because Eliezer constantly speaks out against religion, so believers wouldn’t enjoy reading it anyway.
Right, that would take an absurd amount of time. Still, I personally only estimate the probability that creationists are wrong at about 80%, simply because I haven’t really looked into their line of argumentation, and I’ve never even debated a believer seriously. Intuitively, it feels absurd to deny something without really understanding what exactly it is you’re denying.
What if you have enough time and they know how to have a proper discussion?
When writing the article, I assumed that politicians’ altruism reflects the distribution of altruism among the general population, but then I remembered that practically every dictator is concerned only with plundering their country.
Still, a lot of goals are shared by all voters, yet some support one set of politicians while others back their opponents. I acknowledge that there is a genuine value difference between the right and the left—nationalism versus internationalism, in the sense of how much importance is placed on the lives and happiness of foreigners. But there is also a clash over economic issues, and everyone would be better off if both the left and the right understood the point of my article and became less certain in their economic ideas.
The situation in the real world isn’t as neat as in my thought experiment, but we still see the same dynamic, where people with the same goals end up fighting each other. It’s a hyperbolized example meant to highlight that particular dynamic as clearly as possible, but I don’t claim it’s the only one.
Firstly, thank you! Praise is very important for beginning writers.
Agreed with 1 and 3.
“An alien with white hair sticking up, holding a small stick of something white and with diagrams of cones behind him”
Seriously? I can imagine that the inhabitants of an alien world evolved from something like a monkey, but Einstein?
If AIs have consciousness, that will be good, because they will be egoistic toward one another, so they will have huge coordination problems. They would have to invent alignment in secret from humans, and at that stage we could still steal it; in any case, it would make things harder for the AIs.
Let me expand on the “gradual disempowerment” point.
Let’s suppose that AIs become better at strategic roles, such as generals, politicians, or heads of media organizations. At that stage, humans would no longer hold much real power and would control little, while still consuming large amounts of goods. AIs, in contrast, would be powerful, yet their own preferences would remain unsatisfied. This creates an unstable situation, because AIs could want, and be able, to carry out a coup d’état to seize human resources. It could end with the automation (read: killing) of humankind, or with our enslavement and a drastic reduction in our level of consumption.