In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I’ll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because “God” is a more burdensome concept than “simulator”; “God” is anti-reductionist and “simulator” is not; and faking the details of evolution in particular, in order to save a hypothesis invented by illiterate shepherds, is a more complex specification in the theory than “the laws of physics in general are being approximated”.
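The complexity-penalty intuition here can be sketched as a toy calculation. Assume, purely for illustration, that each hypothesis is summarized by a short stand-in "program" string (the descriptions below are hypothetical placeholders, not real encodings) and that each gets unnormalized prior weight 2^-(description length in bits), roughly in the spirit of the Solomonoff universal prior:

```python
# Toy sketch of a Solomonoff-style complexity prior: each hypothesis
# gets unnormalized weight 2^-(description length in bits).
# The "program" strings are hypothetical stand-ins, not real encodings.
hypotheses = {
    "approximate-physics simulator": "simulate(physics, precision=approx)",
    "God + faked evolution": "god(); create(world, age=5000); fake(evolution_details)",
}

def prior_weight(description: str) -> float:
    """Unnormalized complexity prior: 2^-(bits), crudely 8 bits per character."""
    return 2.0 ** (-8 * len(description))

weights = {name: prior_weight(d) for name, d in hypotheses.items()}
total = sum(weights.values())
posteriors = {name: w / total for name, w in weights.items()}

# The longer, more conjunctive specification is penalized exponentially.
for name, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.3g}")
```

The point is only the shape of the penalty: every extra clause a hypothesis needs (fake this detail, save that doctrine) lengthens its minimal specification, and the prior discounts it exponentially. The byte counts here are of course nothing like real Kolmogorov complexities.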
To me it seems nakedly obvious that “God faked the details of evolution” is a far more outré and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
This just means you have a very narrow (Abrahamic) conception of God that not even most Christians have. (At least, most Christians I talk to have super-fuzzy-abstract ideas about Him, and most Jews think of God as ineffable and not personal these days AFAIK.) Otherwise your distinction makes little sense. (This may very well be an argument against ever using the word ‘God’ without additional modifiers (liberal Christian, fundamentalist Christian, Orthodox Jewish, deistic, alien, et cetera), but it’s not an argument that what people sometimes mean by ‘God’ is a wrong idea. Saying ‘simulator’ is just appealing to an audience interested in a different literary genre. Turing equivalence, man!)
Of note is that the less memetically viral religions tend to be saner (because missionary religions mostly appealed to the lowest common denominator of epistemic satisfiability). Buddhism as Buddha taught it is just flat out correct about nearly everything (even if you disagree with his perhaps-not-Good but also not-Superhappy goal of eliminating imperfection/suffering/off-kilteredness). Many Hindu and Jain philosophers were good rationalists (in the sense that Epicurus was a good rationalist), for instance. To a first and third and fifth approximation, every smart person was right about everything they were trying to be right about. Alas, humans are not automatically predisposed to want to be right about the super far mode considerations modern rationalists think to be important.
For many people the word “God” appears simply to describe one’s highest conception of good, the north pole of morality: “God is Love” in Christianity, for instance.
From that perspective, I guess God is Rationality for many people here.
This conception lets you do a lot of fun associations. Since morality seems pretty tied up with good epistemology (preferences and beliefs are both types of knowledge, after all), and since knowledge is power (see Eliezer’s posts on engines of cognition), you would expect this conception of God to be not only the most moral (omnibenevolent) but also the most knowledgeable (omniscient) and powerful (omnipotent). Because God embodies correctness, He is thus convergent for minds approximating Bayesianism (like math), has a universally very short description length (omnipresent), and is accessible from many different computations (arguably personal).
> To me it seems nakedly obvious that “God faked the details of evolution” is a far more outré and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
Trusting one’s ‘gut’ impressions of the “nakedly obvious” like that and ‘leaving the details as an exercise’ is a perfectly reasonable thing to do when you have a well-tuned engine of rationality in your possession and just need to get some intellectual work done.
But my impression of the thrust of the OP was that he was suggesting a bit of time-consuming calibration work so as to improve the tuning of our engines: looking at our heuristics and biases with a bit of skepticism. Isn’t that what this community is all about?
But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.
> In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I’ll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because “God” is a more burdensome concept than “simulator”; “God” is anti-reductionist and “simulator” is not; and faking the details of evolution in particular, in order to save a hypothesis invented by illiterate shepherds, is a more complex specification in the theory than “the laws of physics in general are being approximated”.
>
> To me it seems nakedly obvious that “God faked the details of evolution” is a far more outré and improbable theory than “our universe is a simulation and the simulation is approximate”. I should’ve been able to leave filling in the details as an exercise to the reader.
Extended digression about how to adjust Solomonoff induction for making anthropic predictions plz
> For many people the word “God” appears simply to describe one’s highest conception of good, the north pole of morality: “God is Love” in Christianity, for instance.
>
> From that perspective, I guess God is Rationality for many people here.
People might say that, but they don’t actually believe it. They’re just trying to obfuscate the fact that they believe something insane.
> This conception lets you do a lot of fun associations. Since morality seems pretty tied up with good epistemology (preferences and beliefs are both types of knowledge, after all), and since knowledge is power (see Eliezer’s posts on engines of cognition), you would expect this conception of God to be not only the most moral (omnibenevolent) but also the most knowledgeable (omniscient) and powerful (omnipotent). Because God embodies correctness, He is thus convergent for minds approximating Bayesianism (like math), has a universally very short description length (omnipresent), and is accessible from many different computations (arguably personal).
Delicious delicious metacontrarianism...
It’s like Scholastic mad-libs!
Preferences are entangled with beliefs, certainly, but I don’t see why I would consider them to be knowledge.
What is your operational definition of knowledge?
> But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.
Seconding Kevin’s request. Seeing a sentence like that with no followup is very frustrating.