Pascal’s Muggle Pays

Reply To (Eliezer Yudkowsky): Pascal’s Muggle: Infinitesimal Priors and Strong Evidence

Inspired to Finally Write This By (Lesser Wrong): Against the Linear Utility Hypothesis and the Leverage Penalty.

The problem of Pascal’s Muggle begins:

Suppose a poorly-dressed street person asks you for five dollars in exchange for doing a googolplex’s worth of good using his Matrix Lord powers.

“Well,” you reply, “I think it very improbable that I would be able to affect so many people through my own, personal actions – who am I to have such a great impact upon events? Indeed, I think the probability is somewhere around one over googolplex, maybe a bit less. So no, I won’t pay five dollars – it is unthinkably improbable that I could do so much good!”

“I see,” says the Mugger.

At this point, I note two things. I am not paying. And my probability that the mugger is a Matrix Lord is much higher than five in a googolplex.

That looks like a contradiction. It’s positive expectation to pay, by a lot, and I’m not paying.
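
To spell out the naive arithmetic behind ‘positive expectation, by a lot’ (illustrative numbers only: assume each life saved is worth at least five dollars to me, and let p be my probability that the mugger can and will deliver):

$$
\mathbb{E}[\text{pay}] - \mathbb{E}[\text{refuse}] \;\ge\; p \cdot 10^{10^{100}} \cdot \$5 \;-\; \$5,
$$

which is positive whenever p exceeds one over a googolplex. A probability ‘much higher than five in a googolplex’ clears that bar by an absurd margin.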

Let’s continue the original story.

A wind begins to blow about the alley, whipping the Mugger’s loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle. In the sky above, a gap edged by blue fire opens with a horrendous tearing sound – you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too – and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.

“That’s not actually me,” the Mugger says, “just a conceptual representation, but I don’t want to drive you insane. Now give me those five dollars, and I’ll save a googolplex lives, just as promised. It’s easy enough for me, given the computing power my home universe offers. As for why I’m doing this, there’s an ancient debate in philosophy among my people – something about how we ought to sum our expected utilities – and I mean to use the video of this event to make a point at the next decision theory conference I attend. Now will you give me the five dollars, or not?”

“Mm… no,” you reply.

“No?” says the Mugger. “I understood earlier when you didn’t want to give a random street person five dollars based on a wild story with no evidence behind it. But now I’ve offered you evidence.”

“Unfortunately, you haven’t offered me enough evidence,” you explain.

I’m paying.

So are you.

What changed?

I

The probability that he’s a Matrix Lord went up, but the odds were already non-trivial, and he’s still probably not a Matrix Lord (more likely I’m dreaming or hypnotized or nuts or something).

At first, the mugger could benefit by lying to you. More importantly, people other than the mugger could benefit by trying to mug you and others who reason like you, if you pay such muggers. They can exploit a policy of taking large claims seriously.

Now the mugger cannot benefit by lying to you. Matrix Lord or not, there’s a cost to doing what he just did, and it’s higher than five bucks. If he wanted dollars, he could extract as many as he liked in any number of ways. A decision function that pays this mugger need not create an opening for others to exploit.

I pay.

In theory, the Matrix Lord could derive some benefit, like getting data for the decision theory conference, or winning a bet with another Matrix Lord, and be lying. Sure. But being even 99.999999999% confident this isn’t for real seems nuts.

(Also, he could have gone for way more than five bucks. I pay.)

(Also, this guy gave me way more than five dollars worth of entertainment. I pay.)

(Also, this guy gave me way more than five dollars worth of good story. I pay.)

II

The leverage penalty is a crude hack. Our utility function is taken as given, so our probability function had to move, or else Shut Up and Multiply would do crazy things like pay muggers.
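
For concreteness, my reading of the hack’s structure (a sketch, not Eliezer’s exact formulation): penalize the prior probability of being in a position to affect N people in proportion to N, so that

$$
P(\text{affect } N \text{ lives}) \;\lesssim\; \frac{1}{N}
\quad\Longrightarrow\quad
P \times U \;\lesssim\; \frac{1}{N} \times N \;=\; \text{constant},
$$

and no claimed N, however large, can dominate the expected-value calculation on its own.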

The way out is our decision algorithm. As per Logical Decision Theory, our decision algorithm is correlated with lots of things, including the probability of muggers approaching you on the street and what benefits they offer. The reason real muggers use a gun rather than a banana is mostly that you’re far less likely to hand cash over to someone holding a banana. The fact that we pay muggers holding guns is why muggers hold guns. If we paid muggers holding bananas, muggers would happily point bananas.

There is a natural tendency to slip out of Functional Decision Theory into Causal Decision Theory. If I give this guy five dollars, how often will it save all these lives? If I give five dollars to this charity, what will that marginal dollar be spent on?

There’s a tendency for some, often economists or philosophers, to go all lawful stupid about expected utility and berate us for not making this slip. They yell at us for voting, and/or ask us to justify not living in a van down by the river on microwaved ramen noodles in terms of our expected additional future earnings from our resulting increased motivation and the networking effects of increased social status.

To them, we must reply: We are choosing the logical output of our decision function, which changes the probability that we’re voting on reasonable candidates, changes the probability there will be mysterious funding shortfalls with concrete actions that won’t otherwise get taken, changes the probability of attempted armed robbery by banana, and changes the probability of random people in the street claiming to be Matrix Lords. It also changes lots of other things that may or may not seem related to the current decision.
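
Here is a toy model of that correlation, in Python, with every number invented purely for illustration; it is a sketch of the structure of the argument, not a real estimate:

```python
# Toy model: how often you get targeted is correlated with your policy,
# not caused by any single decision. All numbers are made up.

def yearly_loss(pay_banana_muggers: bool) -> float:
    # Causal view: conditional on a banana mugging already happening,
    # paying costs the wallet and refusing costs nothing.
    loss_per_mugging = 50.0 if pay_banana_muggers else 0.0

    # Logical/functional view: the rate of banana muggings depends on the
    # policy, because getting paid is what makes the banana worth pointing.
    muggings_per_year = 20.0 if pay_banana_muggers else 0.1

    return loss_per_mugging * muggings_per_year

print(yearly_loss(pay_banana_muggers=True))   # 1000.0 per year
print(yearly_loss(pay_banana_muggers=False))  # 0.0 per year
```

The causal question (‘given the banana is already pointed at me, what does paying cost?’) drops the term that dominates: how the policy changes how often bananas get pointed.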

Eliezer points out that humans have bounded computing power, which does weird things to one’s probabilities, especially for things that can’t happen. Agreed, but you can defend yourself without a rule that you never consider benefits multiplied by 3↑↑↑3 unless you also divide by 3↑↑↑3. You can have a logical algorithm that says not to treat claims of 3↑↑↑3 and 3↑↑↑↑3 differently if the justification for the number is someone telling you about it. Not because the first claim is so much less improbable, but because you don’t want to get hacked in this way. That’s way more important than the chance of meeting a Matrix Lord.
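
A minimal sketch of such an algorithm, with invented names and constants, just to show the shape of the rule:

```python
def weighted_impact(claimed_impact: float, number_is_checkable: bool) -> float:
    """Toy rule: only scale with the claimed number when we can check the math.

    If the only justification for the number is that someone said it, a claim
    of 3^^^^3 gets no more weight than a claim of 3^^^3, so saying a bigger
    number earns nothing and there is nothing to hack.
    """
    UNVERIFIED_CAP = 1.0  # illustrative constant, not a calibrated value
    if number_is_checkable:
        return claimed_impact
    return min(claimed_impact, UNVERIFIED_CAP)

print(weighted_impact(20_000, number_is_checkable=True))   # 20000.0: a checkable number counts in full
print(weighted_impact(1e100, number_is_checkable=False))   # 1.0: a stranger's googol gets capped
```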

Betting on your beliefs is a great way to improve and clarify your beliefs, but you must think like a trader. There’s a reason logical induction relies on markets. If you book bets on your beliefs at your fair odds without updating, you will get dutch booked. Your decision algorithm should not accept all such bets!
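
A minimal invented example of the failure mode: you post even odds on a coin you believe is fair and fill every order without treating the fills as evidence. If the counterparty knows the coin is really 60/40 against you, then per dollar staked

$$
\mathbb{E}[\text{your profit}] \;=\; 0.6 \cdot (-\$1) \;+\; 0.4 \cdot (+\$1) \;=\; -\$0.20.
$$

Strictly this is adverse selection rather than a textbook Dutch book, but the trader’s lesson is the same: a standing quote that never updates is free money for whoever takes it.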

People are hard to dutch book.

Status quo bias can be thought of as evolution’s solution to not getting dutch booked.

III

Split the leverage penalty into two parts.

The first is ‘don’t reward saying larger numbers’. Where are these numbers coming from? If the numbers come from math we can check, and we’re offered the chance to save 20,000 birds, we can care much more than we would about 2,000 birds. A guy designing pamphlets who picked arbitrary numbers, not so much.

Scope insensitivity can be thought of as evolution’s solution to not getting Pascal’s mugged. The one child is real. Ten thousand might not be. Both scope insensitivity and probabilistic scope sensitivity get you dutch booked.

Scope insensitivity and status quo bias cause big mistakes. We must fight them, but by doing so we make ourselves vulnerable.

You also have to worry about fooling yourself. You don’t want to give your own brain reason to cook the books. There’s an elephant in there. If you give it reason to, it can write down larger exponents.

The second part is applying Bayes’ Rule properly. Seeming to have high leverage is evidence, but such appearances are generated far more often than the real thing, so the likelihood ratios usually argue for a large discount. Discount accordingly. How much to discount is a hard problem. I won’t go into detail here, except to say that if calculating a bigger impact doesn’t increase how excited you are about an opportunity, you are doing it wrong.
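
A made-up-numbers sketch of how that discount might run: say the prior odds that a given opportunity really has 100 times typical leverage are 1:1000, and ‘it looks that way to me’ carries a likelihood ratio of only 10 in its favor, because appearances of high leverage are cheap to generate. Then

$$
\text{Odds}(\text{real} \mid \text{looks huge}) \;=\; \frac{1}{1000} \times 10 \;=\; \frac{1}{100},
\qquad
\mathbb{E}[\text{leverage}] \;\approx\; \frac{1}{101}\cdot 100 \;+\; \frac{100}{101}\cdot 1 \;\approx\; 2.
$$

The discount is large but less than fully proportional, which is why a genuinely bigger calculated impact should still leave you more excited.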