I don’t get what the beef is with that alleged dilemma: Sagan’s maxim “Extraordinary claims require extraordinary evidence” solves it gracefully.

More formally, in a Bayesian setting, Sagan’s maxim can be construed as the requirement for the prior to be a non-heavy-tailed probability distribution.

In fact, formal applications of Bayesian methods typically use light-tailed maximum-entropy distributions such as the normal or the exponential.
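To make the tail distinction concrete, here is a quick numeric sketch (my own illustration, not from the discussion). The relevant quantity is the stake size x times the prior tail probability P(X > x): under a light-tailed prior the product vanishes as x grows, so enormous claimed stakes cannot dominate an expected-value calculation; under a sufficiently heavy-tailed prior it grows without bound, which is exactly the structure a Pascal's-mugging-style argument needs.

```python
import math

# Illustration (my own sketch): stake size x times prior tail probability
# P(X > x), under a light-tailed vs. a heavy-tailed prior.

def exp_tail(x, rate=1.0):
    """P(X > x) under a light-tailed Exponential(rate) prior."""
    return math.exp(-rate * x)

def pareto_tail(x, alpha=0.5, x_min=1.0):
    """P(X > x) under a heavy-tailed Pareto(alpha, x_min) prior.
    With alpha <= 1 the distribution has infinite mean."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

for x in (10.0, 100.0, 1000.0):
    # Light-tailed: x * P(X > x) -> 0. Heavy-tailed: it keeps growing.
    print(f"x={x:7.0f}  light: {x * exp_tail(x):.3e}  heavy: {x * pareto_tail(x):.3f}")
```

With the exponential prior the product falls from roughly 4.5e-4 at x = 10 to roughly 3.7e-42 at x = 100; with the Pareto prior it climbs (about 3.2, 10, 31.6), so the tail, rather than the evidence, ends up driving the expectation.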

Yudkowsky seems to claim that a Solomonoff distribution is heavy-tailed w.r.t. the relevant variables, but he doesn’t provide a proof of that claim. Indeed, the claim is difficult even to formalize properly, since the Solomonoff induction model has no explicit notion of world-state variables; it just defines a probability distribution over observations.

Anyway, that’s an interesting question, and if it turns out that the Solomonoff prior is indeed heavy-tailed w.r.t. any relevant state variable, that would seem to me a good reason not to use Solomonoff induction.

IIUC, Yudkowsky’s epistemology is essentially that Solomonoff induction is the ideal of unbounded epistemic rationality that any boundedly rational reasoner should try to approximate.

I contest that Solomonoff induction is the self-evident ideal of epistemic rationality.

Seconded. There seems to be no reason to privilege Turing machines or any particular encoding. (Both choices carry an inductive bias that is essentially arbitrary and unavoidable.)

What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?

Do you really think this would be clearer or more rigorous if written in mathematical notation?
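(As an aside, for readers unfamiliar with the notation in the quoted line: 3^^^^3 is Knuth’s up-arrow notation, where one arrow is exponentiation and each extra arrow iterates the previous operator. A sketch of the recursion, my own illustration; only the tiniest cases actually terminate, and 3^^^^3 itself is far beyond any physical computation.)

```python
# Knuth's up-arrow recursion: up(a, n, b) computes a ↑^n b (n arrows).
# One arrow is plain exponentiation; each additional arrow iterates
# the operator below it. Values explode almost immediately.

def up(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
print(up(2, 3, 3))  # 2^^^3 = 2^^4 = 65536
```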


The problem is that Solomonoff induction is an essentially opaque model.

Think of it as a black box: you put in a string of bits representing your past observations and it gives you a probability distribution on strings of bits representing your possible future observations. If you open the lid of the box, you will see many (ideally infinitely many) computer programs with arbitrary structure. There is no easy way to map that model to a probability distribution on non-directly-observable world state variables such as “the number of people alive”.
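To make the black-box picture concrete, here is a toy, computable caricature (entirely my own sketch; real Solomonoff induction mixes over all programs for a universal Turing machine and is uncomputable). The “programs” here are just repeating bit patterns, each weighted like a program of that length, prior 2^-(pattern length):

```python
from itertools import product

def predict_next(observed, max_len=8):
    """P(next bit = 1 | observed), mixing over repeating-pattern 'programs'.

    Toy stand-in for the black box: each hypothesis is a bit pattern
    repeated forever, with prior weight 2**(-len(pattern)), so shorter
    'programs' count for more. Requires len(observed) <= max_len so that
    at least one hypothesis (the observed string itself) is consistent.
    """
    weight_1 = weight_total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            pat = "".join(bits)
            # This 'program' outputs pat repeated forever.
            stream = pat * (len(observed) // n + 2)
            if stream.startswith(observed):      # consistent with the data?
                w = 2.0 ** (-n)                  # shorter program => higher prior
                weight_total += w
                if stream[len(observed)] == "1":
                    weight_1 += w
    return weight_1 / weight_total

print(predict_next("010101"))  # small: the short pattern "01" dominates, so 0 is favored next
```

Even in this caricature the point about opacity survives: the box outputs a distribution over the next observed bits, and nothing in it directly corresponds to a latent world-state variable.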

Isn’t that kinda the point?

My interpretation is that Yudkowsky assumes Solomonoff induction essentially a priori and thus is puzzled by the dilemma it allegedly yields. My point is that:

It’s not obvious that Solomonoff induction actually yields that dilemma.

If it does, then this would be a good reason to reject it.


I think you’re missing the relevant piece: bounded rationality. And it doesn’t matter what the Solomonoff prior actually looks like if you can’t compute it.


[edit: the questions above are not rhetorical.]



And my point is that it seems like it should; the article explicitly asks the reader to try and disprove this. That’s kinda the point of the article.

It appears we all agree. I think.