I will try to prove that Solomonoff induction leads to solipsism:
The simplest explanation for everything is solipsism: only my mind exists, nothing else. All experiences of an outside world are dream-like illusions. The world is very, very much more complex than my mind. Therefore, according to Solomonoff's lightsaber, the prior for solipsism is 1 - epsilon. No reasonable amount of evidence can counter a prior that extreme.
Therefore, if Solomonoff induction is true, then solipsism is true.
A point I’ve been trying to make here, to no avail, is that S.I. is about finding predictors, not explainers (codes that output a string beginning with the data, not codes that output a string containing the data). “Solipsism” isn’t really a predictor of anything, and neither are other philosophical stances, nor are parts of physical theories taken by themselves with other parts of those theories omitted.
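To make the predictor/explainer distinction concrete, here is a toy sketch. The finite, hand-picked hypothesis class is purely illustrative (real Solomonoff induction enumerates all programs and is uncomputable); each hypothesis is weighted by 2^-length and kept only if its output *begins with* the observed data:

```python
from fractions import Fraction

# Hypothetical hypothesis class: (name, stand-in code length, generator of
# the first n bits of the sequence the hypothesis predicts).
HYPOTHESES = [
    ("all zeros",   2, lambda n: [0] * n),
    ("alternating", 3, lambda n: [i % 2 for i in range(n)]),
    ("all ones",    2, lambda n: [1] * n),
]

def predict_next(data):
    """Weight each hypothesis by 2^-length, keep only those whose output
    begins with the observed data (a predictor, not a mere explainer),
    and return the posterior probability that the next bit is 1."""
    total = Fraction(0)
    ones = Fraction(0)
    for _, length, gen in HYPOTHESES:
        if gen(len(data)) == list(data):       # must reproduce the prefix
            weight = Fraction(1, 2 ** length)
            total += weight
            if gen(len(data) + 1)[-1] == 1:    # what it predicts next
                ones += weight
    return ones / total if total else None

# After seeing 0,1,0 only "alternating" survives, so the next bit is 1:
print(predict_next([0, 1, 0]))  # -> 1
```

A hypothesis that merely mentions the data somewhere, or makes no prediction at all, simply never enters the sum.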
It’s a very convoluted Turing machine that produces dream-like illusions in such an incredibly consistent, regular manner!
That is a valid point against solipsism.
Don’t get me wrong. I do indeed believe that there is strong evidence against solipsism. All I am saying is that it is not enough to counter the massive, massive head start that S.I. gives to solipsism.
I think that “complex things happening in the real world” and “complex things happening in my head” have similar description lengths. And “my head” probably just lengthens the description.
With “your mind” undefined, solipsism is meaningless.
If only “your mind” exists and nothing else, then “your mind” (whatever it is) is the universe. How this differs from any other kind of universe, part of which can arbitrarily be labelled “you” and another part “me” escapes me. But feel free to disregard my opinion if you think I don’t really exist :)
What does this mean? You make it seem like this is a mathematically simple object, but I think it is an instance of “the lady down the street is a witch.”
If you want to convince me, please dissolve it, along with “outside world,” “existing,” and “experience.” Until you do, I will hold my better-functioning Tegmark 4 cosmological considerations in much higher regard than mere useless radical scepticism.
My argument is not that mental states/experiences/whatever are simple. The argument is: however complicated mental experiences might be, the following is true:
My mind is less complex than (my mind + the rest of the universe)
Even more so if I consider that the universe contains a lot of minds that are as complex as my own.
By the way, I am not trying to prove solipsism, but to disprove Solomonoff induction by reductio ad absurdum.
You, my good sir, need to improve your intuitions on Kolmogorov Complexity.
Case in point
The reason I am asking you the above questions, asking whether you in fact know how to dissolve the concepts of “existence,” “dream-like illusion,” “outside world,” “experience” and similar words, is that I think almost all people who even have a concept of solipsism are not actually referring to a well-formed mathematical entity.
I have given extensive thought, along with three other strong Bayesians, to related concepts, and we have actually gotten somewhere interesting on dissolving the above phrases; alas, the message length is nontrivial.
So, all in all, this attempted disproof of Solomonoff induction cannot work, because solipsism isn’t a well-formed mathematical entity, whereas the induction is.
Additionally, the universe is simpler than your mind, because it unfolds from mathematically simple statements into a very large thing. It is like a 1000000px by 1000000px picture of the Mandelbrot fractal: the computer program that creates it is significantly smaller than the image itself. A portrait of a man taken with a high-definition camera admits no similar compression. Even though the fractal is vastly larger, its complexity is vastly smaller.
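The Mandelbrot point can be checked directly. The sketch below is an ordinary escape-time renderer (nothing specific to the discussion): a few hundred bytes of source whose output grows without bound as SIZE is raised, while the program itself does not:

```python
SIZE = 200  # raise this and the output grows; the program stays the same size

def mandelbrot(width, height, max_iter=50):
    """Render the region [-2,1] x [-1.5i,1.5i] as ASCII art: '#' marks
    points whose orbit stays bounded for max_iter iterations."""
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            c = complex(-2.0 + 3.0 * i / width, -1.5 + 3.0 * j / height)
            z = 0j
            n = 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row.append("#" if n == max_iter else " ")
        rows.append("".join(row))
    return "\n".join(rows)

image = mandelbrot(SIZE, SIZE)
print(len(image))  # 40199 characters, from a program a tiny fraction of that
```

The high-definition portrait has no analogous short generator, which is exactly the sense in which it is more complex despite being smaller.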
And this is where you first and foremost go astray in your simple argument: (My Mind) < (My Mind + Universe). Firstly, because your mind is part of the Universe, so Universe + My Mind = Universe; and also because the Universe consists of little more than the Schrödinger equation and some particle fields, while your mind consists either of the Universe + an appropriate spatial coordinate, or all the atom positions in your brain, or a long piece of AI-like code.
Which would be more complex:
1) a Turing-computable set of physical laws and initial conditions that, run to completion, would produce a description of something uncannily similar to the universe as we know it, or
2) a Turing-computable set of physical laws and initial conditions that, run to completion, would produce a description of something uncannily similar to the universe as we know it, and then uniquely identify your brain within that description?
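For scale, a back-of-the-envelope sketch of what option 2 adds over option 1, namely a pointer into the description. The 10^80 atom count is an illustrative assumption for the number of candidate locations, not a precise figure:

```python
import math

# Hypothetical count of candidate locations at which "your brain" could
# sit within the universe's description; ~10^80 atoms is the usual rough
# order of magnitude for the observable universe.
candidate_locations = 10 ** 80

# Naming one location among N costs about log2(N) bits on top of the
# program that generates the whole description.
extra_bits = math.log2(candidate_locations)
print(round(extra_bits))  # -> 266
```

A few hundred bits of addressing is negligible next to the cost of specifying a mind from scratch, which is the force of the comparison above.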
A program which produced a model of your mind but not of the rest of the universe would probably be even more complicated, since any and all knowledge of that universe encoded within your mind would need to be included in the initial program rather than emerging naturally.
Well, if the data is a description of your mind, then the code should produce a string that begins with that description, somehow. Pulling the universe out would require examining the code’s internals.
If you do S.I. in a fuzzy manner, there can be all sorts of self-misconceptions; it can be easier (shorter coding) to extract an incredibly important mind, so you obtain not solipsism but narcissism. The prior for self-importance may then be quite large.
If you drop the requirement that the output string begin with the description of the mind, and instead search for the data anywhere within the output string, then a simple counter will suffice as a valid code.
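A minimal sketch of that degenerate counter, assuming the data is encoded as a bit string: concatenating the binary expansions of 0, 1, 2, … eventually contains any finite string as a substring, so “containing the data” is trivially cheap and carries no information about it:

```python
def counter_output(n_numbers):
    """Concatenate the binary expansions of 0 .. n_numbers-1."""
    return "".join(format(k, "b") for k in range(n_numbers))

# Any observation, encoded as bits; prepending "1" turns it into the
# binary expansion of some integer, so the counter reaches it eventually.
data = "101100111"
stream = counter_output(1000)
print(data in stream)  # -> True: the counter "explains" everything
```

This is exactly why S.I. scores a program on whether its output begins with the data: the counter then fails immediately, instead of winning by brute enumeration.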
Take a very huge prime. It is less complex than the list of all primes, you would say. But the shortest code that generates such a prime may well be a correct prime-finding program that prints the Nth prime (storing the N of which prime to print), if the prime is big enough.
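A sketch of that prime example, with an illustrative N of 1000: the prime-finding program is a fixed cost, and naming the specific prime on top of it costs only the bits of the index N:

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

n = 1000
p = nth_prime(n)          # the pair (nth_prime, 1000) pins this prime down
print(p, n.bit_length())  # prints: 7919 10
```

So the “specific huge prime” is described as general machinery plus a short index, which mirrors the claim about your mind being the universe’s machinery plus a coordinate.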
The world is very, very much more complex than my mind.

But the smallest description of your mind might implicitly describe the universe. In any case, Solomonoff induction is about predictions; it doesn’t concern itself with untestable statements like solipsism.
Solomonoff induction isn’t true/false. It can be useful or not useful, but not true or false.