Only in the complete absence of evidence. But theism already has a ton of evidence for it and was the default belief of intelligent folk for thousands of years; it’s like saying a-gravity-ism isn’t actually a theory about physics (to take our metaphors to the other extreme from fairies). Assigning a low prior to theism is an abuse of algorithmic probability theory. …Am I missing something?
Assigning a low prior to theism is an abuse of algorithmic probability theory.
Can you explain this? Because I’ve been operating under the following assumption:
It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s Equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
In order to write a computer program that actually computes (rather than models) Maxwell’s equations you have to write a program that writes out a physical universe, and if you want a program that describes Maxwell’s equations then the interpretation you choose is more a matter of pragmatic decision theory than of algorithmic probability theory, at least in practice. (Bounded agents aren’t exactly committing an error of rationality when they don’t try to act like Homo Economicus; that would be decision theoretically insane.)
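For concreteness, here is a toy sketch of the point about program lengths: under a Solomonoff-style universal prior, a hypothesis encoded by a program of length L bits gets weight proportional to 2^-L. The specific bit counts below are invented placeholders purely for illustration, not real measurements of anything.

```python
import math

def solomonoff_weight(program_length_bits: int) -> float:
    """Prior weight of a hypothesis encoded by a program of the
    given length under a (toy) universal prior: 2^-L."""
    return 2.0 ** -program_length_bits

# Hypothetical program lengths, invented for illustration: suppose a
# simulator of Maxwell's equations compresses to ~1,000 bits while a
# simulator of a humanlike mind such as Thor needs ~10^8 bits.
maxwell_bits = 1_000
thor_bits = 100_000_000

# The prior odds ratio between the two hypotheses in orders of magnitude.
log10_odds = (thor_bits - maxwell_bits) * math.log10(2)
print(f"Prior odds favor Maxwell by ~10^{log10_odds:.0f} : 1")
```

The absolute numbers are meaningless; the point is only that the prior odds scale exponentially in the difference of description lengths, which is why "enormously easier to write" translates into an enormous prior advantage.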
But anyway. Specific things in the universe don’t seem to be caused by gods. Indeed, that’d be hella unparsimonious: “God chose to add some ridiculous number of bits into His program just to make it such that there was a ‘Messiah gets crucified’ attractor?”. The local universe as a whole, on the other hand, is this whole other thing: there’s the simulation argument.
Your comment got voted up to +10 despite Eliezer’s argument being a straightforward error of algorithmic probability; I don’t know what to do about that and it stresses me out. Does anyone have ideas? It saddens me to see algorithmic probability so regularly abused on LW, but the few corrective posts on the matter, e.g. by Slepnev, don’t seem to have permeated the LW memeplex, probably because they’re too technical.
I think you are slightly misinterpreting things. As you pointed out, the established memeplex does lean heavily in favor of Eliezer’s position on algorithmic probability theory rather than Slepnev’s. But that doesn’t mean that all of the upvoters agree with Eliezer’s position—some of them probably just want to see you answer my question “Can you explain this?”. In fact, I would very much like to see this question answered thoroughly in a way that makes sense to me. Vladimir’s posts are a great start, but lacking knowledge of algorithmic probability theory, I don’t really know how to put all of it together.
What we really need is a well-written gentle introduction to algorithmic probability theory that carefully and clearly shows how it works and what it does and doesn’t imply.
Well, of course there are both superintelligences and magical gods out there in the math, including those that watch over you in particular, with conceptual existence that I agree is not fundamentally different from our own, but they are presently irrelevant to us, just as the world where I win the lottery is irrelevant to me, even though a possibility.
It currently seems to me that many such scenarios are irrelevant not because of “low probability” (as in the lottery case; different abstract facts coexist, so they don’t vie for probability mass) or moral irrelevance of any kind (the worlds with nothing possibly of value), but because of other reasons that prevent us from exerting significant consequentialist control over them. The ability to see the possible consequences (and respond to this dependence) is the missing step, even though your actions do control those scenarios, just in a non-consequentialist manner.
(It does add up to atheism, as a modest claim about our own world, the “real world”, that it’s intended to be. In pursuit of “steelmanning” theism you seem to have come up with a strawman atheism...)
I don’t know if this is what Will has in mind, but it seems plausible that the superintelligences and gods that would be watching out for us might attempt to maximize the instantiations of our algorithms that are under their domain, so that as great a proportion of our future selves as possible will be saved (this story is vaguely Leibnizian). But I don’t know that such superbeings would be capable of overcoming their own sheer unlikelihood (though perhaps some subset of such superbeings have infinite capacity to create copies of us?). You can derive a self-interested ethics from this too, if you think you’ll be rewarded or punished by the simulator. The choices of the simulators could be further constrained by simulators above them; we would need an additional step to show that the equilibrium is benevolent (especially given the existence of evil in our universe).
But I’m not at all convinced Tegmark Level 4 isn’t utter nonsense. There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated. And can we calculate anthropic probabilities from infinities of different magnitudes?
There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated.
I’d rather say that the so-called “instantiated” objects are no different from the abstract ones, that in reality, there is no fundamental property of being real, there is only a natural category humans use to designate the stuff of normal physics, a definition that can be useful in some cases, but not always.
So there are easy ways to explain this idea at least, right? Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence, and it’s hard for us to get a notion of existence outside of such influence besides a general naive physicalist one. I guess the not-easy-to-explain parts are about decision theoretic zombies where things seem like they ‘physically exist’ as much as anything else despite exerting less influence, because that clashes more with our naive physicalist intuitions? Not to say that these bizarre philosophical ideas aren’t confused (e.g. maybe because influence is spread around in a more egalitarian way than it naively feels like), but they don’t seem to be confusing as such.
Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence
Human decisions are affected by thoughts about counterfactuals. So the question is, what is the nature of the influence that the “content” or “object” of a thought, has on the thought?
I do not believe that when human beings try to think about possible worlds, these possible worlds have any causal effect in any way on the course of the thinking. The thinking and the causes of the thinking are strictly internal to the “world” in which the thinking occurs. The thinking mind instead engages in an entirely speculative and inferential attempt to guess or feel out the structure of possibility, but this feeling out does not in any way involve causal contact with other worlds or divergent futures. It is all about an interplay between internally generated partial representations and a sense of what is possible, impossible, logically necessary, etc. in an imagined scenario; but the “sensory input” to these judgments consists of the imagining of possibilities, not the possibilities themselves.
How likely what is? There doesn’t appear to be a factual distinction, just what I find to be a more natural way of looking at things, for multiple purposes.
I believe that “exists” doesn’t mean anything fundamentally significant (in senses other than referring to presence of a property of some fact; or referring to the physical world; or its technical meanings in logic), so I don’t understand what it would mean for various (abstract) things to exist to greater or lower extent.
That would require understanding alternatives, which I currently don’t. The belief in question is mostly asserting confusion, and as such it isn’t much use, other than as a starting point that doesn’t purport to explain what I don’t understand.
No, I won’t see that in itself as a reason to be wary, since as I said repeatedly I don’t know how to parse the property of something being real in this sense.
Anyone who has positive accounts of existentness to put forth, I’d like to hear them. (E.g., Eliezer has talked about this related existentness-like-thing that has do with being in a causal graph (being computed), but I’m not sure if that’s just physicalist intuition admitting much confusion or if it’s supposed to be serious theoretical speculation caused by interesting underlying motivations that weren’t made explicit.)
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York. It seems to make sense to ask about probability of various facts being a certain way (in certain mutually exclusive possible states), or about probability of joint facts (that is, dependencies between facts) being a certain way, but it doesn’t seem to me that asking about probabilities of different facts in themselves is a sensible idea.
(Universal prior, for example, can be applied to talk about the joint probability distribution over the possible states of a particular sequence of past and future observations, that describes a single fact of the history of observations by one agent.)
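A heavily simplified illustration of that parenthetical: the universal prior gives a joint distribution over an agent's whole observation sequence by mixing over candidate generating programs, weighted by description length; conditioning on an observed prefix then eliminates the inconsistent programs. The three "programs" and their bit lengths below are invented for illustration only.

```python
from fractions import Fraction

# Toy "universal prior": a mixture of three hypothetical generators of
# an agent's observation stream, weighted by 2^-(description length).
# The hypotheses and their bit lengths are invented for illustration.
hypotheses = {
    "all zeros":   (3, lambda n: 0),      # description length 3 bits
    "all ones":    (3, lambda n: 1),      # description length 3 bits
    "alternating": (5, lambda n: n % 2),  # description length 5 bits
}

def posterior(observations):
    """Weight each hypothesis by prior * likelihood of the observed
    prefix; hypotheses contradicting the data get zero mass."""
    weights = {}
    for name, (bits, gen) in hypotheses.items():
        prior = Fraction(1, 2 ** bits)
        consistent = all(gen(i) == o for i, o in enumerate(observations))
        weights[name] = prior if consistent else Fraction(0)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior([0, 1, 0, 1]))  # only "alternating" survives
```

Note that the distribution is over possible states of one fact (the history of observations), not over "different facts in themselves", which is exactly the distinction being drawn above.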
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York.
You just prompted me to make that comparison. I’ve been to New York. I haven’t been to Moscow. I’ve also met more people who have talked about what they do in New York than I have people who talk about Moscow. I assign at least ten times as much confidence to New York as I do to Moscow. Both those probabilities happen to be well above 99%. I don’t see any problem with comparing them as long as I don’t conclude anything stupid based on that comparison.
There’s a point behind what you are saying here—and an important point at that—just one that perhaps needs a different description.
I assign at least ten times as much probability to New York as I do to Moscow.
What does this mean, could you unpack? What’s “probability of New York”? It’s always something like “probability that I’m now in New York, given that I’m sitting in this featureless room”, which discusses possible states of a single world, comparing the possibility that your body is present in New York to the same for Moscow. These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
I assign at least ten times as much probability to New York as I do to Moscow.
What does this mean, could you unpack?
It wasn’t my choice of phrase:
just as you won’t compare probability of Moscow with probability of New York
When reading statements like that which are not expressed with mathematical formality, the appropriate response seems to be resolving to the meaning that fits best or asking for more specificity. Saying you just can’t do the comparison seems to be a wrong answer when you can, but there is difficulty resolving ambiguity. For example, you say “the answer to A is Y, but you technically could have meant B instead of A, in which case the answer is Z”.
I actually originally included the ‘what does probability of Moscow mean?’ tangent in the reply but cut it out because it was spammy and actually fit better as a response to the nearby context.
These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
Based on the link from the decision theory thread I actually thought you were making a deeper point than that and I was trying to clear a distraction-in-the-details out of the way.
The point I was making is that people do discuss probabilities of different worlds that are not seen as possibilities for some single world. And comparing probabilities of different worlds in themselves seems to be an error for basically the same reason as comparing probabilities of two cities in themselves is an error. I think this is an important error, and realizing it makes a lot of ideas about reasoning in the context of multiple worlds clearly wrong.
God is an exceedingly unlikely property of our branch of the physical world at the present time. Implementations of various ideas of God can be found in other worlds that I don’t know how to compare to our own in a way that’s analogous to “probability”. The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
(I don’t privilege the God worlds in particular, the thought experiment where the Moon is actually made out of Gouda is an equivalent example for this purpose.)
The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
There doesn’t seem to be a problem here. The comparison resolves to something along the lines of:
1. Consider all hypotheses about the physical world of the present time which include the object “Moscow”.
2. Based on all the information you have, calculate the probability that any one of those is the correct hypothesis.
3. Do the same with “New York”.
4. Compare those two numbers.
5. ???
6. Profit.
Instantiate ”???” with absurdly contrived bets with Omega as necessary. Rely on the same instantiation to a specific contrived decision to be made to resolve any philosophical issues along the lines of “What does probability mean anyway?” and “What is ‘exist’?”.
What you describe is the interpretation that does make sense. You are looking at properties of possible ways that the single “real world” could be. But if you don’t look at this question specifically in the context of the real world (the single fact possibilities for whose properties you are considering), then Moscow as an abstract idea would have as much strength as Mordor, and “probability of Moscow” in Middle-earth would be comparatively pretty low.
(Probability then characterizes how properties fit into worlds, not how properties in themselves compare to each other, or how worlds compare to each other.)
God is an exceedingly unlikely property of our branch of the physical world at the present time.
Our disagreement here somewhat baffles me, as I think we’ve both updated in good faith and I suspect I only have moderately more/different evidence than you do. If you’d said “somewhat unlikely” rather than “exceedingly unlikely” then I could understand, but as is it seems like something must have gone wrong.
Specifically, unfortunately, there are two things called God; one is the optimal decision theory, one is a god that talks to people and tells them that it’s the optimal decision theory. I can understand why you’d be skeptical of the former even if I don’t share the intuition, but the latter god, the demon who claims to be God, seems to me to likely exist, and if you think that god is exceedingly unlikely then I’m confused why. Like, is that just your naive impression or is it a belief you’re confident in even after reflecting on possible sources of overconfidence, et cetera?
I agree that there are many reasons that prevent us from explicitly exerting significant control, but I’m at least interested in theurgy. Turning yourself into a better institution, contributing only to the support of not-needlessly-suboptimal institutions, etc. In the absence of knowing what “utility function” is going to ultimately decide what justification is for those who care about what the future thinks, I think building better institutions might be a way to improve the probabilities of statistical-computational miracles. I think this with really low probability, but it’s not an insane hypothesis even if it is literally magical thinking. (The decision theory and physics backing the intuitions are probably sound; it’s just that it doesn’t have the feel of well-motivatedness yet. It’s more one of those “If I have to choose to spend a few hours either reading about dark matter or reading about where decision theory meets human decision policies, I think it’s a potentially more fruitful idea to think about the latter” things.)
I really appreciate that you responded at roughly the right level of abstraction. It seems clear that the debate should be over the extent to which thaumaturgy is possible (including thaumaturgy that helps you build FAIs faster) because that’s the only way “theism” or “atheism” should affect our decision policy. (Outside of deciding which object level moral principles to pursue. I like traditional Anglican Christianity when it comes to object level morality even if I mostly ignore it.)
The decision theory and physics backing the intuitions are probably sound
Not by a long shot. Physics is probably mostly irrelevant here, it focuses only on our world; and decision theory is so flimsy and poorly understood that any related effort should be spent on improving it, for it’s not even clear what it suggests to be the case, much less how to make use of its suggestions.
I’ve seen QM become important because of decision problems where agents have to coordinate between quantum branches in order to reverse time. I can’t go into that here, but I’d at least like to flag that there are decision theory problems where things like quantum information theory show up.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no? Agreed about decision theory. When I said “choose to spend” I meant “I have a few hours to kill but I’m too lazy to do problem sets at the moment”, not “I choose thaumaturgy as the optimal thing to study”.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no?
Okay, that makes sense as a rich playground for acausal interaction. I don’t know what pieces of intuition about physics you refer to as useful for reasoning about acausal effects of human decisions though.
(It does add up to atheism, as a modest claim about our own world, the “real world”
Not if there is evidence of angels and demons in our world, and you can interact with them in at least semi-predictably consequential ways. Which basically everyone believes except the goats, because everyone gets evidence except the goats. Doesn’t it suck to have a mind-universe that actively encourages you to fall into self-sustaining delusions? Yes, yes it does.
ETA: Apparently it’s 2012 now! My resolution: not to fall into self-sustaining delusion! Happy new year LW!
Could you give an example? Like, can you state a specific fact of the world and explain which version of theism it is evidence for, and how it is evidence for that version of theism?
All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch. For the kind of priors people deal with every day, things like “Is Amanda Knox guilty?” or “Will I win the hand of poker?”, evidence of the strength that we have for God’s existence would be more than enough to convince us. But the prior for theism (as it is usually formulated) is so laughably, incomprehensibly low that all this evidence isn’t even enough for a rational person to seriously consider the theistic hypothesis. Will’s claim that a low prior for theism “is an abuse of algorithmic probability theory” is the real issue.

Now, that prior penalty can be reduced if the hypothesis involves some process by which the entity could come to exist while conserving complexity (in particular, if that entity evolved and then created this universe). Will, however, seems to believe in something different from the usual simulation hypothesis: he may endorse something like Divine Simplicity, which is complete and utter nonsense. Word games and silliness as far as I can tell, or at least smacking of a to-me-untenable moral realism.
All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch.
I don’t understand how it’s strong evidence. We have plenty of experience showing that complex stuff is just what you get when you leave simple stuff alone long enough, assuming you’re talking about “complexity” in the thermodynamic sense. For intelligent entities to be elevated as a particular hypothesis, it seems like you need to find things like low entropy pockets and optimization behavior.
All of existence is also evidence for the hypothesis that if you leave simple stuff alone long enough complexity arises. And the prior for that is much higher than the theism prior.
If both those hypotheses (thermodynamics, theism) started at the same prior, which one would receive more of a boost upwards after updating on all existence?
In theism’s favor we have mystical experience, purported revelation and claims of miracles. Against, we have the existence of evil and a lot of familiarity with how complexity can come to be through simple processes. Maybe the fact that we keep explaining things that God was once used to explain is metainductive evidence against theism… I really have trouble thinking clearly about this and suspect I’ve biased myself by being an atheist so long. What do you think?
I’m gonna think out loud for a bit, let’s see if this makes sense.
I think that “complexity” is a red herring; it’s dodging the real query. What we’re really interested in is something more like an explanation for why the universe is the way it is, rather than some other universe, including the rather large subset of possible universes that would’ve resulted in nothing very interesting at all happening ever.
So: rather than “theism” and “thermodynamics”, we more generally have “theism” and “everything else” as our two competing chunks of hypothesis-space to explain “why is the universe the way it is?”. Let’s assume that that’s a meaningful question. Let’s also assume that the two chunks have equal prior probability (that is, let’s just forget about comparing minimum message lengths or anything like that, otherwise “everything else” gets a big head start).
Update on direct, personal, but non-replicable experiences of communicating with gods. This is at most very weak evidence in favor of theism, due to what we know about cognitive biases.
Update on negative results of attempting to replicably communicate with gods. This is weak evidence against theism; it is good evidence against a god that can communicate with us and wants to, but it doesn’t say much for the remainder of possible-god-space.
Update on evolution via natural selection as the explanation for humanity’s biological setup. This is also weak evidence against theism; it’s good evidence only against the subset of possible-god-space that wants people to be able to notice them, or that has a particular design idea in mind and goes about creating people to fulfill that idea. Also, given the pretty major flaws of human bodies and minds, it’s good evidence against the subset of possible-god-space where the gods prioritize our happiness (in both the sophisticated fun theoretic sense and the wire-head sense of happiness).
Update more generally on the existence of naturalistic patterns like evolution that can crank out relatively low-entropy things like biological life. Weak evidence against gods in general, good evidence against the subset of possible gods that specifically are interested in and capable of creating biological life.
I can go on like that for a while, but the basic pattern seems to be: “not theism” pulls generally but not majorly ahead, by taking probability mass from the parts of “theism” that involve directly causing stuff that applies only to our particular neck of the universe. Humans and the Earth are pretty weird compared to all the stuff around them, but it seems that gods are not a good explanation for that weirdness.
The hypothesis space for “theism” still has probability mass for gods that do not or cannot directly intervene in favor of privileging universes where humans are the way they are. I’m not sure how big that is compared to the entire hypothesis space of possible theisms; whatever that there is, that’s how badly “theism” in general would be losing to “not theism” if they started out at the same prior.
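The updating scheme in the comment above can be sketched as odds-ratio arithmetic: start at the stipulated even prior odds and multiply in a likelihood ratio per line of evidence. Every ratio below is an invented placeholder standing in for "weak evidence for" or "weak evidence against", not a measured quantity.

```python
# Stipulated even prior odds for "theism" vs "not theism", as above.
prior_odds = 1.0

# One multiplicative likelihood ratio per update step; >1 favors
# theism, <1 favors not-theism. All values are invented placeholders.
likelihood_ratios = {
    "personal mystical experience":     1.1,  # at most very weak evidence for
    "failed replicable communication":  0.8,  # weak evidence against
    "evolution explains biology":       0.7,  # weak-to-good evidence against
    "naturalistic low-entropy patterns": 0.8,  # weak evidence against
}

posterior_odds = prior_odds
for evidence, ratio in likelihood_ratios.items():
    posterior_odds *= ratio

p_theism = posterior_odds / (1 + posterior_odds)
print(f"posterior odds {posterior_odds:.3f}, P(theism) = {p_theism:.2f}")
```

The qualitative conclusion matches the prose: "not theism" pulls generally but not overwhelmingly ahead, because each update strips probability mass only from the interventionist parts of god-space.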
Your comment definitely pulls me in your direction.
This is hard, and probably not fair to do without knowing what else is in “non-theism”. But in general theism has an advantage you’re forgetting, which is that it lets us explain everything we don’t understand with magic. Big Bang, abiogenesis, what have you: theism has been defined in such a way that it can explain anything we can’t already explain. This means everything we don’t understand is evidence for God. I don’t know that the realization that we keep explaining things previously attributable to God swamps this effect. You’re certainly right that the image of God one arrives at is at best indifferent and at worst humorously sadistic (with “averse to science” somewhere in the middle).
I will say that I’m not sure Occam priors actually come from any kind of analytic deduction based on something like algorithmic complexity. That is, I think the whole thing might just be one giant meta-induction on all our confirmed and falsified hypotheses, where simplicity turned out to be a useful heuristic. In which case, I don’t know what the prior was (it doesn’t matter), but p(God) is just crazy low.
That’s not necessarily true. You could have a shy god. The better your epistemology gets, the shyer it gets, always staying on the edge of humanity’s epistemology. But it still works miracles when people aren’t looking too closely.
Though I’m not quite sure what kind of god you’re talking about in your comment; it seems weird to me to ignore the only kind of god that seems particularly likely, i.e. a simulator god/pantheon.
But in general theism has an advantage you’re forgetting which is that it lets us explain everything we don’t understand with magic.
If “magic” is the answer to anything we don’t understand, then it isn’t an explanation, it’s just an abbreviation for “I don’t know”. This is hardly an advantage.
Big Bang, abiogenesis, what have you, theism has been defined in such a way that it can explain anything we can’t already explain. This means everything we don’t understand is evidence for God.
If theism can explain anything, it explains nothing. Phlogiston anyone?
I’m not assuming you are arguing for theism. What I assume you’re arguing for is that theism being able to “explain” anything is an advantage for theism, which it is not. I’m not arguing against theism either.
I see what you mean, but how does theism “explaining” currently unsolved mysteries in any way constrain experience? As far as I know, theism postulating “all was created by a god” doesn’t allow me to anticipate anything I can’t already anticipate anyway. Also as far as I know, it’s not as if any phenomena currently not explainable were predicted by any form of theism.
I may be wrong on this though, as I am certainly not a theism expert. If so, this would be actual evidence for theism.
If you bring semi-logical considerations into it then the obvious pro-theism one is Omohundro’s AI drives plus game theory. Simulators gonna simulate. (And superintelligences have a lot of computing resources with which to do so.) (Semi-logical because there are physical reasons we expect agents to work in certain ways.)
I was not using your definition of theism since theism scenarios where the God evolved aren’t distinct hypotheses from “complexity from thermodynamics and evolution”. There is more evidence for your version of God, the simulation argument in particular. But miracles, revelation and mystical experience count far less.
There are timeful/timeless issues ’cuz there’s an important sense in which a superintelligence is just an instantiation of a timeless algorithm. (So it’s less clear if it counts as having evolved.) But partitioning away that stuff makes sense.
There are timeful/timeless issues ’cuz there’s an important sense in which a superintelligence is just an instantiation of a timeless algorithm.
Not true. There are some superintelligences that could be constructed that way but that is only a small set of possible superintelligences. Others have nothing timeless about their algorithm and don’t need it to be superintelligent.
That’s one hypothesis, but I’d only assign like 90% to it being true in the decisions-relevant sense. Probably gets swamped by other parts of the prior, no?
A naive view sees a lump of matter being turned into a program whose execution just happens to correlate with the execution of similar programs across the Schmidhuberian computational ensemble. (If you don’t assume a computational ensemble to begin with then you just have to factor that uncertainty in.) A different view is that there’s no correlation without shared causation, and anyway that all those program-running matter-globs are just shards of a single algorithm that just happens to be distributed from a physical perspective. But if those shards all cooperate, even acausally, it’s only in a rather arbitrary sense that they’re different superintelligences. It’s like a community of very similar neurons, not a community of somewhat different humans. So when a new physical instantiation of that algorithm pops up it’s not like that changes much of anything about the timeless equilibrium of which that new physical instantiation is now a member. The god was always there behind the scenes, it just waited a bit before revealing itself in this particular world.
I apologize for the poor explanation/communication.
I think it’s more something like “moral realism” than like word games. It’s (I think) isomorphic to the hypothesis that all superintelligences converge on the ‘same decision algorithm’: and of course at that point in the discussion a bunch of words have to get tabooed and we have to get technical and quantitative (e.g. talking about Goedel machines and such, not about arbitrary paperclip maximizers which may or may not be possible).
And I dunno about Divine Simplicity. I really do prefer to talk in terms of decision theory.
You (lately) misuse “isomorphic”, which is a word reserved for a very strong relationship. “Analogy”, or even “similarity” or “metaphor”, would describe these relations better.
Sorry. In my defense I felt a sharp pain each time I did it, but figured that ‘analogous’ wasn’t quite right (wasn’t quite strong enough, because Thomas Aquinas and I are actually talking about the same decision policy, maybe). Maybe if I knew category theory I could make such comparisons precise.
With Leibniz it’s a lot clearer that his God was a programmer trying to make the most efficient use of His resources to do the optimal thing, and he had intuitions but of course not any explicit language to talk about what that algorithm would look like. That’s roughly the extent to which I think I’m thinking of the same decision algorithm as Aquinas: the convergent objective decision theory. The specifics of that decision theory, nobody knows. The point is that none of the best thinkers were thinking about a big male human in the sky; they were instead thinking about Platonic algorithms, ever since early Christianity was influenced by Neoplatonism. Leibniz made it computationalesque, but only recently, with decision theory, has theology become truly mathematical.
Maybe. In this case, most would agree that at this level of vagueness saying that two thinkers are contemplating exactly the same idea is incorrect and misleading terminology, and your comment suggests that you don’t actually mean that.
Okay. It’s like a hypothesis about future revelations, where both Aquinas and I are being shown a series of different agents and we’d agree more than my prediction of LW priors would suggest as to which of those agents were more or less Godlike. It’s like we have different labels for what is ultimately the same thing but we don’t even know what that thing is yet; but the fact that they’re different labels is misleading as to the extent to which we’re talking or not talking about what is ultimately the same thing. Still, point taken.
/shrugs I’d be very surprised, but I know nothing about modern theology. I’ve been reading philosophy by working my way forward through time. If there were/are any competent computer scientist/theologians after Leibniz then I do not yet know about them.
(ETA: I suppose I could become one if I put my mind to it but unfortunately I have this whole “figuring out how moral justification works so that everything I love about the world doesn’t perish” thing to deal with.)
That’s fair. My probability for that is probably pretty close to my probability for a strong version of the simulation hypothesis plus moral realism. Though it seems to me that a lot of people here think moral realism is much more likely than I do, which makes me confused about why I seem to take your ideas more seriously than others here. You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.
You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.
I consistently talk about things I have 1-20% confidence in, in a way that makes me sound like I have 80-95% confidence in them. This is largely because there’s no way to non-misleadingly talk about things with 1-20% logical probability (1-20% decision theoretic importance, whatever that means). It’s really a problem with the norms of communication and the English language, one of the few cases where it’s not my fault that I can’t communicate easily. Most of the time I just suck at communicating.
Unfortunately, good rationalists should spend a lot of time hovering around things with 50% probability of being true, and anything moderately on the lower side of that ends up sounding completely ridiculous and anything moderately on the higher side of that ends up sounding completely reasonable.
Then just write “around 1-20%”. It will make your comments more clunky, but it’s not like they can get much worse anyway, and it’s better than the alternative.
It’s complicated. The three versions of theism I can immediately think up are I suppose like “some superintelligent agent is computing us and this is important for our decisions”, “all superintelligences converge on the same superintelligent supermoral superpowerful decision algorithm-policy”, and “all superintelligences converge on the same superintelligent supermoral decision algorithm-policy and this is important for our decisions”. In our current state of knowledge these questions are more logical or indexical-the-way-that-word-used-to-make-sense-before-decision-theory than physical (not to say those are fundamentally different kinds of uncertainty, as I believe Nesov likes to point out). So if I start talking about specific facts of the world then I have to start talking about specific facts about logical attractors akin to how fractal structures are attractors for evolving systems, and I can’t point to something nice and concrete like the supposed resurrection of Jesus. This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn’t think the difficulty is insurmountable and one should lose hope already.
There is a wiser part of me that invariably agrees with that, it’s just this stupid motivational coalition of mine that anti-anti-wants to warn others when they’re absolutely certain of something they shouldn’t be absolutely certain about where my warning them has some at least tiny chance of convincing them to be less complacent or notice confusion, so that I won’t be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I’m doing more harm than good the other coalition goes “Oh, you’re telling me to shut up and be evil. Doesn’t this sound familiar...”
if I didn’t think the difficulty is insurmountable and one should lose hope already.
Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so I’d like to make a bet.
(I parsed your comment like three different ways when I used three different inductive biases.)
Like I said in an earlier comment, you can’t just state this without a justification to this audience. It may well be that there’s a perfectly good justification for this statement, but we’re at the wrong inferential distance for it. If you want us to update on this supposed evidence for theism you’re going to have to guide us to it, via short, individually supported, straightforward steps.
[...]was the default belief of intelligent folk for thousands of years[...]
This is very weak evidence; consider ideas like the aether, or the standard whipping-boy ’round these parts, phlogiston.
I do not think that theism has a ton of evidence for it. In particular, treating things simply as evidence for theism is usually wrong. Things purported to specifically show the truth of Christianity, like Jesus’ image in a shroud, can’t be added to purported miracles worked by shamans sating warring gods by sacrificing chickens, or humans, for example.
The more the truth is shown within one theory, the more probability mass it steals from others, including atheist theories—and by the time the dust settles after the first round of considering evidence, there are many equally plausible theistic beliefs, each of which disqualifies many other similarly theistic ones in proportion to its likelihood of being true. The best conclusion is that intelligent people are adept at believing untrue claims about religion similar to folk beliefs around them. Every theistic philosophy has to postulate massive credulity by otherwise intelligent humans about wrong religious claims.
A-gravity-ism isn’t a theory of physics. I can’t tell if that means a theory saying that everything expands in size, creating the illusion of things being attracted to things proportional to size, or a theory saying that this universe is a simulation run from one without gravity as a physical law, or a theory that everything has an essence that seeks other essences in a way unrelated to mass, or what. The denial of anything other than an impossibly exhaustive conjunctive and disjunctive statement isn’t a theory.
Gravity deniers may form a political party with adherents of all the theories I mentioned above to lobby against the “gravitational establishment”. But their collective existence means that each has to have, as part of their psychological and sociological theory, that it is very easy to be deluded into believing a crackpot, unjustified theory of gravity. No particular theory, including any of theirs, gets the presumption of truth.
We begin with no presumption that mass is attracted to other mass inversely proportional to the square of the distance. We don’t need one to end up assigning high odds to that hypothesis, because for it there is truly a ton of evidence.
We don’t see any particular theory uniquely postulating rampant confabulation and motivated cognition in beliefs about gravity. Every theory, even the a-gravity-ist ones, postulates this, so there is nothing that a-gravity-ism is required to explain, or is superior at explaining, even if most intelligent people have been a-gravity-ists. This is particularly true when a-gravity-ism was the default belief.
And when something is found that better describes matter’s behavior, such as relativity, we see how the new theory says the old one was a good approximation; the ton of evidence was not simply violated.
Only in the complete absence of evidence. But theism already has a ton of evidence for it and was the default belief of intelligent folk for thousands of years; it’s like saying a-gravity-ism isn’t actually a theory about physics (to take our metaphors to the other extreme from fairies). Assigning a low prior to theism is an abuse of algorithmic probability theory. …Am I missing something?
Can you explain this? Because I’ve been operating under the following assumption:
It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s Equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
In order to write a computer program that actually computes (rather than models) Maxwell’s equations you have to write a program that writes out a physical universe, and if you want a program that describes Maxwell’s equations then the interpretation you choose is more a matter of pragmatic decision theory than of algorithmic probability theory, at least in practice. (Bounded agents aren’t exactly committing an error of rationality when they don’t try to act like Homo Economicus; that would be decision theoretically insane.)
But anyway. Specific things in the universe don’t seem to be caused by gods. Indeed, that’d be hella unparsimonious: “God chose to add some ridiculous number of bits into His program just to make it such that there was a ‘Messiah gets crucified’ attractor?”. The local universe as a whole, on the other hand, is this whole other thing: there’s the simulation argument.
Your comment got voted up to +10 despite Eliezer’s argument being a straightforward error of algorithmic probability; I don’t know what to do about that and it stresses me out. Does anyone have ideas? It saddens me to see algorithmic probability so regularly abused on LW, but the few corrective posts on the matter, e.g. by Slepnev, don’t seem to have permeated the LW memeplex, probably because they’re too technical.
I think you are slightly misinterpreting things. As you pointed out, the established memeplex does lean heavily in favor of Eliezer’s position on algorithmic probability theory rather than Slepnev’s. But that doesn’t mean that all of the upvoters agree with Eliezer’s position—some of them probably just want to see you answer my question “Can you explain this?”. In fact, I would very much like to see this question answered thoroughly in a way that makes sense to me. Vladimir’s posts are a great start, but lacking knowledge of algorithmic probability theory, I don’t really know how to put all of it together.
Thanks for the correction; that people are interested in it at least is a good sign.
What we really need is a well-written gentle introduction to algorithmic probability theory that carefully and clearly shows how it works and what it does and doesn’t imply.
Well, of course there are both superintelligences and magical gods out there in the math, including those that watch over you in particular, with conceptual existence that I agree is not fundamentally different from our own, but they are presently irrelevant to us, just as the world where I win the lottery is irrelevant to me, even though a possibility.
It currently seems to me that many of such scenarios are irrelevant not because of “low probability” (as in the lottery case; different abstract facts coexist, so don’t vie for probability mass) or moral irrelevance of any kind (the worlds with nothing possibly of value), but because of other reasons that prevent us from exerting significant consequentialist control over them. The ability to see the possible consequences (and respond to this dependence) is the step missing, even though your actions do control those scenarios, just in a non-consequentialist manner.
(It does add up to atheism, as a modest claim about our own world, the “real world”, that it’s intended to be. In pursuit of “steelmanning” theism you seem to have come up with a strawman atheism...)
I don’t know if this is what Will has in mind, but it seems plausible that the superintelligences and gods that would be watching out for us might attempt to maximize the instantiations of our algorithms that are under their domain, so that as great a proportion of our future selves as possible will be saved (this story is vaguely Leibnizian). But I don’t know that such superbeings would be capable of overcoming their own sheer unlikelihood (though perhaps some subset of such superbeings have infinite capacity to create copies of us?). You can derive a self-interested ethics from this too, if you think you’ll be rewarded or punished by the simulator. The choices of the simulators could be further constrained by simulators above them; we would need an additional step to show that the equilibrium is benevolent (especially given the existence of evil in our universe).
But I’m not at all convinced Tegmark Level 4 isn’t utter nonsense. There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated. And can we calculate anthropic probabilities from infinities of different magnitudes?
I’d rather say that the so-called “instantiated” objects are no different from the abstract ones, that in reality, there is no fundamental property of being real, there is only a natural category humans use to designate the stuff of normal physics, a definition that can be useful in some cases, but not always.
So there are easy ways to explain this idea at least, right? Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence, and it’s hard for us to get a notion of existence outside of such influence besides a general naive physicalist one. I guess the not-easy-to-explain parts are about decision theoretic zombies where things seem like they ‘physically exist’ as much as anything else despite exerting less influence, because that clashes more with our naive physicalist intuitions? Not to say that these bizarre philosophical ideas aren’t confused (e.g. maybe because influence is spread around in a more egalitarian way than it naively feels like), but they don’t seem to be confusing as such.
Human decisions are affected by thoughts about counterfactuals. So the question is, what is the nature of the influence that the “content” or “object” of a thought has on the thought?
I do not believe that when human beings try to think about possible worlds, these possible worlds have any causal effect in any way on the course of the thinking. The thinking and the causes of the thinking are strictly internal to the “world” in which the thinking occurs. The thinking mind instead engages in an entirely speculative and inferential attempt to guess or feel out the structure of possibility—but this feeling out does not in any way involve causal contact with other worlds or divergent futures. It is all about an interplay between internally generated partial representations, and a sense of what is possible, impossible, logically necessary, etc. in an imagined scenario; but the “sensory input” to these judgments consists of the imagining of possibilities, not the possibilities themselves.
Sure, that’s a fine way to put it. But how do you even begin estimating how likely that is?
How likely what is? There doesn’t appear to be a factual distinction, just what I find to be a more natural way of looking at things, for multiple purposes.
You don’t think whether or not the Tegmark Level 4 multiverse exists could ever have any decision theoretic import?
I believe that “exists” doesn’t mean anything fundamentally significant (in senses other than referring to presence of a property of some fact; or referring to the physical world; or its technical meanings in logic), so I don’t understand what it would mean for various (abstract) things to exist to greater or lower extent.
Okay. What is your probability for that belief? (Not that I expect a number, but surely you can’t be certain.)
That would require understanding alternatives, which I currently don’t. The belief in question is mostly asserting confusion, and as such it isn’t much use, other than as a starting point that doesn’t purport to explain what I don’t understand.
Fine. So you agree that we should be wary of any hypotheses of which the reality of abstract objects is a part?
No, I won’t see that in itself as a reason to be wary, since as I said repeatedly I don’t know how to parse the property of something being real in this sense.
Personally, I am always wary of hypotheses I don’t know how to parse.
Anyone who has positive accounts of existentness to put forth, I’d like to hear them. (E.g., Eliezer has talked about this related existentness-like-thing that has to do with being in a causal graph (being computed), but I’m not sure if that’s just physicalist intuition admitting much confusion or if it’s supposed to be serious theoretical speculation caused by interesting underlying motivations that weren’t made explicit.)
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York. It seems to make sense to ask about probability of various facts being a certain way (in certain mutually exclusive possible states), or about probability of joint facts (that is, dependencies between facts) being a certain way, but it doesn’t seem to me that asking about probabilities of different facts in themselves is a sensible idea.
(Universal prior, for example, can be applied to talk about the joint probability distribution over the possible states of a particular sequence of past and future observations, that describes a single fact of the history of observations by one agent.)
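As a toy illustration of how a universal-prior-style update works (everything here is invented for the example: the four “rules” stand in for programs on some fixed machine, and their bit-lengths are assumed, not derived), each candidate generator is weighted by 2^-length and the weights are renormalized over the generators still consistent with the observed sequence:

```python
from fractions import Fraction

# Toy "machine": hand-picked generating rules standing in for programs,
# each with a hypothetical description length in bits (assumed numbers).
RULES = {
    "all_zeros": (1, lambda n: [0] * n),                    # 1-bit "program"
    "alternate": (2, lambda n: [i % 2 for i in range(n)]),  # 0,1,0,1,...
    "all_ones":  (2, lambda n: [1] * n),
    "complex":   (8, lambda n: [1, 0, 0, 1, 1, 0, 1, 0][:n]),  # arbitrary pattern
}

def posterior(observed):
    """Weight each rule by 2^-length if its output matches the observation,
    then normalize -- a miniature Solomonoff-style update."""
    weights = {}
    for name, (bits, gen) in RULES.items():
        if gen(len(observed)) == list(observed):
            weights[name] = Fraction(1, 2 ** bits)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Two rules match a single observed 0; the shorter one gets more mass
# (all_zeros 2/3 vs. alternate 1/3).
print(posterior([0]))
```

Note the output is a distribution over how one fact (the observation history) might continue, not a “probability of a world” in itself, which is the point being made above.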
(I’m not sure ‘compare’ is the right word here.)
You just prompted me to make that comparison. I’ve been to New York. I haven’t been to Moscow. I’ve also met more people who have talked about what they do in New York than I have people who talk about Moscow. I assign at least ten times as much confidence to New York as I do Moscow. Both those probabilities happen to be well above 99%. I don’t see any problem with comparing them just so long as I don’t conclude anything stupid based on that comparison.
There’s a point behind what you are saying here—and an important point at that—just one that perhaps needs a different description.
What does this mean, could you unpack? What’s “probability of New York”? It’s always something like “probability that I’m now in New York, given that I’m sitting in this featureless room”, which discusses possible states of a single world, comparing the possibility that your body is present in New York to the same for Moscow. These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
It wasn’t my choice of phrase:
When reading statements like that that are not expressed with mathematical formality, the appropriate response seems to be resolving to the meaning that fits best or asking for more specificity. Saying you just can’t do the comparison seems to be a wrong answer when you can, but there is difficulty resolving ambiguity. For example you say “the answer to A is Y, but you technically could have meant B instead of A, in which case the answer is Z”.
I actually originally included the ‘what does probability of Moscow mean?’ tangent in the reply but cut it out because it was spammy and actually fit better as a response to the nearby context.
Based on the link from the decision theory thread I actually thought you were making a deeper point than that and I was trying to clear a distraction-in-the-details out of the way.
The point I was making is that people do discuss probabilities of different worlds that are not seen as possibilities for some single world. And comparing probabilities of different worlds in themselves seems to be an error for basically the same reason as comparing probabilities of two cities in themselves is an error. I think this is an important error, and realizing it makes a lot of ideas about reasoning in the context of multiple worlds clearly wrong.
log-odds
Oh, yes, that. Thank you.
Really? God isn’t less probable than New York?
God is an exceedingly unlikely property of our branch of the physical world at the present time. Implementations of various ideas of God can be found in other worlds that I don’t know how to compare to our own in a way that’s analogous to “probability”. The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
(I don’t privilege the God worlds in particular, the thought experiment where the Moon is actually made out of Gouda is an equivalent example for this purpose.)
There doesn’t seem to be a problem here. The comparison resolves to something along the lines of:
Consider all hypotheses about the physical world of the present time which include the object “Moscow”.
Based on all the information you have calculate the probability that any one of those is the correct hypothesis.
Do the same with “New York”.
Compare those two numbers.
???
Profit.
Instantiate ”???” with absurdly contrived bets with Omega as necessary. Rely on the same instantiation to a specific contrived decision to be made to resolve any philosophical issues along the lines of “What does probability mean anyway?” and “What is ‘exist’?”.
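The numbered steps can be sketched directly. The hypothesis space and all the prior numbers below are invented for the illustration; the point is only that the comparison resolves to summing the probability mass of hypotheses containing each object:

```python
# Toy hypothesis space about the single real world: each hypothesis has a
# prior and a set of objects it asserts exist (all numbers are made up).
WORLDS = [
    (0.90, {"New York", "Moscow"}),  # mainstream geography
    (0.05, {"New York"}),            # Moscow is an elaborate fiction
    (0.04, {"Moscow"}),              # New York is an elaborate fiction
    (0.01, set()),                   # both are fictions
]

def p_object(name):
    """Steps 1-4: total probability of hypotheses containing `name`."""
    return sum(p for p, objects in WORLDS if name in objects)

# "New York" edges out "Moscow" (0.95 vs 0.94), and both are near certain.
assert p_object("New York") > p_object("Moscow")
```

This stays within “possible ways the single real world could be”, which is exactly the interpretation the reply below says does make sense.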
What you describe is the interpretation that does make sense. You are looking at properties of possible ways that the single “real world” could be. But if you don’t look at this question specifically in the context of the real world (the single fact possibilities for whose properties you are considering), then Moscow as an abstract idea would have as much strength as Mordor, and “probability of Moscow” in Middle-earth would be comparatively pretty low.
(Probability then characterizes how properties fit into worlds, not how properties in themselves compare to each other, or how worlds compare to each other.)
Our disagreement here somewhat baffles me, as I think we’ve both updated in good faith and I suspect I only have moderately more/different evidence than you do. If you’d said “somewhat unlikely” rather than “exceedingly unlikely” then I could understand, but as is it seems like something must have gone wrong.
Specifically, unfortunately, there are two things called God; one is the optimal decision theory, one is a god that talks to people and tells them that it’s the optimal decision theory. I can understand why you’d be skeptical of the former even if I don’t share the intuition, but the latter god, the demon who claims to be God, seems to me to likely exist, and if you think that god is exceedingly unlikely then I’m confused why. Like, is that just your naive impression or is it a belief you’re confident in even after reflecting on possible sources of overconfidence, et cetera?
I agree that there are many reasons that prevent us from explicitly exerting significant control, but I’m at least interested in theurgy. Turning yourself into a better institution, contributing only to the support of not-needlessly-suboptimal institutions, etc. In the absence of knowing what “utility function” is going to ultimately decide what justification is for those who care about what the future thinks, I think building better institutions might be a way to improve the probabilities of statistical-computational miracles. I think this with really low probability but it’s not an insane hypothesis even if it is literally magical thinking. (The decision theory and physics backing the intuitions are probably sound, it’s just that it doesn’t have the feel of well-motivatedness yet. It’s more one of those “If I have to choose to spend a few hours either reading about dark matter or reading about where decision theory meets human decision policies I think it’s a potentially more fruitful idea to think about the latter” things.)
I really appreciate that you responded at roughly the right level of abstraction. It seems clear that the debate should be over the extent to which thaumaturgy is possible (including thaumaturgy that helps you build FAIs faster) because that’s the only way “theism” or “atheism” should affect our decision policy. (Outside of deciding which object level moral principles to pursue. I like traditional Anglican Christianity when it comes to object level morality even if I mostly ignore it.)
Not by a long shot. Physics is probably mostly irrelevant here, it focuses only on our world; and decision theory is so flimsy and poorly understood that any related effort should be spent on improving it, for it’s not even clear what it suggests to be the case, much less how to make use of its suggestions.
I’ve seen QM become important because of decision problems where agents have to coordinate between quantum branches in order to reverse time. I can’t go into that here but I’d at least like to flag that there are decision theory problems where things like quantum information theory shows up.
That actually sounds like it has a possibility of being interesting.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no? Agreed about decision theory. When I said “choose to spend” I meant “I have a few hours to kill but I’m too lazy to do problem sets at the moment”, not “I choose thaumaturgy as the optimal thing to study”.
Okay, that makes sense as a rich playground for acausal interaction. I don’t know what pieces of intuition about physics you refer to as useful for reasoning about acausal effects of human decisions though.
Not if there is evidence of angels and demons in our world, and you can interact with them in at least semi-predictably consequential ways. Which basically everyone believes except the goats, because everyone gets evidence except the goats. Doesn’t it suck to have a mind-universe that actively encourages you to fall into self-sustaining delusions? Yes, yes it does.
ETA: Apparently it’s 2012 now! My resolution: not to fall into self-sustaining delusion! Happy new year LW!
Could you give an example? Like, can you state a specific fact of the world and explain which version of theism it is evidence for, and how it is evidence for that version of theism?
All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch. For the kinds of priors people deal with every day (things like “Is Amanda Knox guilty?” or “Will I win the hand of poker?”), the evidence of the strength we have for God’s existence would be more than enough to convince us. But the prior for theism (as it is usually formulated) is so laughably, incomprehensibly low that all this evidence isn’t even enough for a rational person to seriously consider the theistic hypothesis. Will’s claim that a low prior for theism “is an abuse of algorithmic probability theory” is the real issue. Now, that complexity penalty can be reduced if the hypothesis involves some process by which the entity could come to exist while conserving complexity (in particular, if that entity evolved and then created this universe). Will, however, seems to believe in something different from the usual simulation hypothesis; he may endorse something like Divine Simplicity, which is complete and utter nonsense. Word games and silliness as far as I can tell, or at least smacking of a to-me-untenable moral realism.
I don’t understand how it’s strong evidence. We have plenty of experience showing that complex stuff is just what you get when you leave simple stuff alone long enough, assuming you’re talking about “complexity” in the thermodynamic sense. For intelligent entities to be elevated as a particular hypothesis, it seems like you need to find things like low entropy pockets and optimization behavior.
All of existence is also evidence for the hypothesis that if you leave simple stuff alone long enough complexity arises. And the prior for that is much higher than the theism prior.
If both those hypotheses (thermodynamics, theism) started at the same prior, which one would receive more of a boost upwards after updating on all existence?
That’s a really good question.
In theism’s favor we have mystical experience, purported revelation and claims of miracles. Against, we have the existence of evil and a lot of familiarity with how complexity can come to be through simple processes. Maybe the fact that we keep explaining things that God was once used to explain is metainductive evidence against theism… I really have trouble thinking clearly about this and suspect I’ve biased myself by being an atheist so long. What do you think?
I’m gonna think out loud for a bit, let’s see if this makes sense.
I think that “complexity” is a red herring; it’s dodging the real query. What we’re really interested in is something more like an explanation for why the universe is the way it is, rather than some other universe, including the rather large subset of possible universes that would’ve resulted in nothing very interesting at all happening ever.
So: rather than “theism” and “thermodynamics”, we more generally have “theism” and “everything else” as our two competing chunks of hypothesis-space to explain “why is the universe the way it is?”. Let’s assume that that’s a meaningful question. Let’s also assume that the two chunks have equal prior probability (that is, let’s just forget about comparing minimum message lengths or anything like that, otherwise “everything else” gets a big head start).
Update on direct, personal, but non-replicable experiences of communicating with gods. This is at most very weak evidence in favor of theism, due to what we know about cognitive biases.
Update on negative results of attempting to replicably communicate with gods. This is weak evidence against theism; it is good evidence against a god that can communicate with us and wants to, but it doesn’t say much for the remainder of possible-god-space.
Update on evolution via natural selection as the explanation for humanity’s biological setup. This is also weak evidence against theism; it’s good evidence only against the subset of possible-god-space that wants people to be able to notice them, or that has a particular design idea in mind and goes about creating people to fulfill that idea. Also, given the pretty major flaws of human bodies and minds, it’s good evidence against the subset of possible-god-space where the gods prioritize our happiness (in both the sophisticated fun theoretic sense and the wire-head sense of happiness).
Update more generally on the existence of naturalistic patterns like evolution that can crank out relatively low-entropy things like biological life. Weak evidence against gods in general, good evidence against the subset of possible gods that specifically are interested in and capable of creating biological life.
I can go on like that for a while, but the basic pattern seems to be: “not theism” pulls generally but not majorly ahead, by taking probability mass from the parts of “theism” that involve directly causing stuff that applies only to our particular neck of the universe. Humans and the Earth are pretty weird compared to all the stuff around them, but it seems that gods are not a good explanation for that weirdness.
The hypothesis space for “theism” still holds probability mass for gods that do not or cannot directly intervene to privilege universes where humans are the way they are. I’m not sure how big that subset is compared to the entire hypothesis space of possible theisms; whatever fraction it is, that’s how badly “theism” in general would be losing to “not theism” if they started out at the same prior.
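The update pattern in the parent comments can be sketched numerically. The likelihood ratios below are invented for illustration (nothing in the thread pins down real numbers); the point is just that several individually weak updates compound into a moderate lead for “not theism”:

```python
# Start from the parent's assumption: equal prior mass on "theism" vs "not theism".
p_theism = 0.5

# Hypothetical likelihood ratios P(evidence | theism) / P(evidence | not theism),
# one per update described above; each is individually weak (close to 1).
likelihood_ratios = [
    1.1,  # personal religious experience: at most very weak evidence for
    0.8,  # failed replicable communication: weak evidence against
    0.7,  # evolution explains biology: weak evidence against
    0.7,  # naturalistic low-entropy-generating patterns: weak evidence against
]

for lr in likelihood_ratios:
    odds = p_theism / (1 - p_theism)  # convert probability to odds
    odds *= lr                        # Bayes' rule in odds form
    p_theism = odds / (1 + odds)      # convert back to probability

print(round(p_theism, 3))  # -> 0.301
```

With these made-up numbers “theism” drifts from 0.5 down to about 0.3: pulled “generally but not majorly” behind, as the parent comment puts it.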
Haha. I’m not a theist, I’m an anthropic theorist!
Your comment definitely pulls me in your direction.
This is hard and probably not fair to do without knowing what else is in “non-theism”. But in general, theism has an advantage you’re forgetting: it lets us explain everything we don’t understand with magic. Big Bang, abiogenesis, what have you; theism has been defined in such a way that it can explain anything we can’t already explain. This means everything we don’t understand is evidence for God. I don’t know whether the realization that we keep explaining things previously attributable to God swamps this effect. You’re certainly right that the image of God one arrives at is at best indifferent and at worst humorously sadistic (with “averse to science” somewhere in the middle).
I will say that I’m not sure Occam priors actually come from any kind of analytic deduction based on something like algorithmic complexity. That is, I think the whole thing might just be one giant meta-induction on all our confirmed and falsified hypotheses where simplicity turned out to be a useful heuristic. In which case, I don’t know what the prior was (doesn’t matter) but p=God is just crazy low.
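For what a deduction from algorithmic complexity would even look like: the usual convention is a prior proportional to 2^-K(h), where K(h) is the minimum message length of hypothesis h. The bit counts below are invented placeholders, not real complexities, but they show why an intelligent-mind hypothesis gets a “crazy low” prior under that convention:

```python
# Algorithmic-probability-style prior: P(h) proportional to 2 ** -length(h),
# where length(h) stands in for the minimum message length of hypothesis h.
# These bit counts are invented placeholders, not actual Kolmogorov complexities.
hypothetical_lengths = {
    "maxwells_equations": 100,    # a simple physical law
    "intelligent_mind_god": 400,  # a Thor-like mind takes far more bits to specify
}

raw = {h: 2.0 ** -n for h, n in hypothetical_lengths.items()}
total = sum(raw.values())
prior = {h: p / total for h, p in raw.items()}

print(prior["intelligent_mind_god"])  # astronomically small
```

Every extra bit of description length halves the prior, so even a modest gap in complexity produces an enormous gap in prior probability; whether that deduction or a meta-induction on past hypotheses is the real source of Occam priors is exactly the question the parent comment raises.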
That’s not necessarily true. You could have a shy god. The better your epistemology gets, the shyer it gets, always staying on the edge of humanity’s epistemology. But it still works miracles when people aren’t looking too closely.
Though I’m not quite sure what kind of god you’re talking about in your comment; it seems weird to me to ignore the only kind of god that seems particularly likely, i.e. a simulator god/pantheon.
He used to be a shy god
Until I made him my god
Yeah
Shy is what I meant by “averse to science”.
Agreed.
If “magic” is the answer to anything we don’t understand, then it isn’t an explanation, it’s just an abbreviation for “I don’t know”. This is hardly an advantage.
If theism can explain anything, it explains nothing. Phlogiston anyone?
You need to read the thread instead of assuming I’m actually arguing for theism.
I’m not assuming you are arguing for theism. What I assume you’re arguing for is that theism being able to “explain” anything is an advantage for theism, which it is not. I’m not arguing against theism either.
I mainly meant any step on the causal path to our existence. Apologies.
I see what you mean, but how does theism “explaining” currently unsolved mysteries in any way constrain experience? As far as I know, theism postulating “all was created by a god” doesn’t allow me to anticipate anything I can’t already anticipate anyway. Also as far as I know, it’s not as if any phenomena currently not explainable were predicted by any form of theism.
I may be wrong on this though, as I am certainly not a theism expert. If so, this would be actual evidence for theism.
This is getting too complex given my tiredness. I have a feeling I’ve said something dumb along the way. I’ll be able to tell in the morning.
I don’t see why gods would be in every magical universe.
If you bring semi-logical considerations into it then the obvious pro-theism one is Omohundro’s AI drives plus game theory. Simulators gonna simulate. (And superintelligences have a lot of computing resources with which to do so.) (Semi-logical because there are physical reasons we expect agents to work in certain ways.)
I was not using your definition of theism since theism scenarios where the God evolved aren’t distinct hypotheses from “complexity from thermodynamics and evolution”. There is more evidence for your version of God, the simulation argument in particular. But miracles, revelation and mystical experience count far less.
There are timeful/timeless issues ’cuz there’s an important sense in which a superintelligence is just an instantiation of a timeless algorithm. (So it’s less clear if it counts as having evolved.) But partitioning away that stuff makes sense.
Not true. There are some superintelligences that could be constructed that way but that is only a small set of possible superintelligences. Others have nothing timeless about their algorithm and don’t need it to be superintelligent.
That’s one hypothesis, but I’d only assign like 90% to it being true in the decisions-relevant sense. Probably gets swamped by other parts of the prior, no?
I don’t believe so. But your statement is too ambiguous to resolve to any specific meaning.
What sense is that? Or rather, I’m confused about this whole bit.
A naive view sees a lump of matter being turned into a program whose execution just happens to correlate with the execution of similar programs across the Schmidhuberian computational ensemble. (If you don’t assume a computational ensemble to begin with then you just have to factor that uncertainty in.) A different view is that there’s no correlation without shared causation, and anyway that all those program-running matter-globs are just shards of a single algorithm that just happens to be distributed from a physical perspective. But if those shards all cooperate, even acausally, it’s only in a rather arbitrary sense that they’re different superintelligences. It’s like a community of very similar neurons, not a community of somewhat different humans. So when a new physical instantiation of that algorithm pops up it’s not like that changes much of anything about the timeless equilibrium of which that new physical instantiation is now a member. The god was always there behind the scenes, it just waited a bit before revealing itself in this particular world.
I apologize for the poor explanation/communication.
I think it’s more something like “moral realism” than like word games. It’s (I think) isomorphic to the hypothesis that all superintelligences converge on the ‘same decision algorithm’; and of course at that point in the discussion a bunch of words have to get tabooed and we have to get technical and quantitative (e.g. talking about Goedel machines and such, not about arbitrary paperclip maximizers, which may or may not be possible).
And I dunno about Divine Simplicity. I really do prefer to talk in terms of decision theory.
You (lately) misuse “isomorphic”, which is a word reserved for a very strong relationship. “Analogy” or even “similarity” or “metaphor” would describe these relations better.
Sorry. In my defense I felt a sharp pain each time I did it, but figured that ‘analogous’ wasn’t quite right (wasn’t quite strong enough, because Thomas Aquinas and I are actually talking about the same decision policy, maybe). Maybe if I knew category theory I could make such comparisons precise.
Thanks for calling me out on a bad habit.
This seems very unlikely (1) to be true and (2) to become known, if true.
With Leibniz it’s a lot clearer that his God was a programmer trying to make most efficient use of His resources to do the optimal thing, and he had intuitions but of course not any explicit language to talk about what that algorithm would look like. That’s roughly the extent to which I think I’m thinking of the same decision algorithm as Aquinas, the convergent objective decision theory. The specifics of that decision theory, nobody knows. The point is that none of the best thinkers were thinking about a big male human in the sky; they were instead thinking about Platonic algorithms, ever since early Christianity was influenced by neoplatonism. Leibniz made it computationalesque, but only recently, with decision theory, has theology become truly mathematical.
Maybe. In this case, most would agree that at this level of vagueness saying that two thinkers are contemplating exactly the same idea is incorrect and misleading terminology, and your comment suggests that you don’t actually mean that.
Okay. It’s like a hypothesis about future revelations, where both Aquinas and I are being shown a series of different agents and we’d agree more than my prediction of LW priors would suggest as to which of those agents were more or less Godlike. It’s like we have different labels for what is ultimately the same thing but we don’t even know what that thing is yet; but the fact that they’re different labels is misleading as to the extent to which we’re talking or not talking about what is ultimately the same thing. Still, point taken.
Do the theologians know about this?
/shrugs I’d be very surprised, but I know nothing about modern theology. I’ve been reading philosophy by working my way forward through time. If there were/are any competent computer scientist/theologians after Leibniz then I do not yet know about them.
(ETA: I suppose I could become one if I put my mind to it but unfortunately I have this whole “figuring out how moral justification works so that everything I love about the world doesn’t perish” thing to deal with.)
That’s fair. My probability for that is probably pretty close to my probability for a strong version of the simulation hypothesis plus moral realism. Though it seems to me that a lot of people here think moral realism is much more likely than I do, which makes me confused about why I seem to take your ideas more seriously than others here. You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.
I consistently talk about things I have 1-20% confidence in, in a way that makes me sound like I have 80-95% confidence in them. This is largely because there’s no way to non-misleadingly talk about things with 1-20% logical probability (1-20% decision-theoretic importance, whatever that means). It’s really a problem with the norms of communication and the English language, one of the few things where it’s not my fault that I can’t communicate easily. Most of the time I just suck at communicating.
Unfortunately, good rationalists should spend a lot of time hovering around things with 50% probability of being true, and anything moderately on the lower side of that ends up sounding completely ridiculous and anything moderately on the higher side of that ends up sounding completely reasonable.
Then just write “around 1-20%”. It will make your comments more clunky, but it’s not like they can get much worse anyway, and it’s better than the alternative.
(If only there were a language that had short concepts for things like “frequency=3%, utility=+10^15,-10^6 relative to counterfactual surgery world”.)
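That compact notation can at least be unpacked as ordinary expected-value arithmetic. The numbers are the ones from the parenthetical; normalizing the counterfactual-surgery world to utility 0 is my assumption, made only to get a single number out:

```python
# Unpacking "frequency=3%, utility=+10^15,-10^6" as an expected value,
# with the counterfactual-surgery baseline normalized to utility 0 (my assumption).
p = 0.03           # the stated 3% frequency
u_if_true = 1e15   # utility if the hypothesis holds
u_if_false = -1e6  # utility if it doesn't

expected = p * u_if_true + (1 - p) * u_if_false
print(expected)  # roughly 3e13: the huge upside dominates the small probability
```

Which is the communication problem in miniature: a 3% belief that carries most of the expected value sounds, in English, like a confident assertion.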
It’s complicated. The three versions of theism I can immediately think up are I suppose like “some superintelligent agent is computing us and this is important for our decisions”, “all superintelligences converge on the same superintelligent supermoral superpowerful decision algorithm-policy”, and “all superintelligences converge on the same superintelligent supermoral decision algorithm-policy and this is important for our decisions”. In our current state of knowledge these questions are more logical or indexical-the-way-that-word-used-to-make-sense-before-decision-theory than physical (not to say those are fundamentally different kinds of uncertainty, as I believe Nesov likes to point out). So if I start talking about specific facts of the world then I have to start talking about specific facts about logical attractors, akin to how fractal structures are attractors for evolving systems, and I can’t point to something nice and concrete like the supposed resurrection of Jesus. This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn’t think the difficulty is insurmountable and one should lose hope already.
There is a wiser part of me that invariably agrees with that, it’s just this stupid motivational coalition of mine that anti-anti-wants to warn others when they’re absolutely certain of something they shouldn’t be absolutely certain about where my warning them has some at least tiny chance of convincing them to be less complacent or notice confusion, so that I won’t be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I’m doing more harm than good the other coalition goes “Oh, you’re telling me to shut up and be evil. Doesn’t this sound familiar...”
Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so I’d like to make a bet.
(I parsed your comment like three different ways when I used three different inductive biases.)
Efforts to figure out what otherworldly superintelligences are up to.
Like I said in an earlier comment, you can’t just state this without a justification to this audience. It may well be that there’s a perfectly good justification for this statement, but we’re at the wrong inferential distance for it. If you want us to update on this supposed evidence for theism you’re going to have to guide us to it, via short, individually supported, straight-forward steps.
This is very weak evidence; consider ideas like the aether, or the standard whipping-boy ’round these parts, phlogiston.
I do not think that theism has a ton of evidence for it. In particular, treating things as simply evidence for theism is usually wrong. Things purported to specifically show the truth of Christianity, like Jesus’ image in a shroud, can’t be added to purported miracles worked by shamans sating warring gods by sacrificing chickens, or humans, for example.
The more the truth is shown within one theory, the more probability mass it steals from others, including atheist theories—and by the time the dust settles after the first round of considering evidence, there are equally plausible theistic beliefs that each disqualify many other similarly theistic ones proportional to their likelihood of being true. The best conclusion is that intelligent people are adept at believing untrue claims about religion similar to folk beliefs around them. Every theistic philosophy has to postulate massive credulity by otherwise intelligent humans about wrong religious claims.
A-gravity-ism isn’t a theory of physics. I can’t tell if that means a theory saying that everything expands in size, creating the illusion of things being attracted to things proportional to size, or a theory saying that this universe is a simulation run from one without gravity as a physical law, or a theory that everything has an essence that seeks other essences in a way unrelated to mass, or what. The denial of anything other than an impossibly exhaustive conjunctive and disjunctive statement isn’t a theory.
Gravity deniers may form a political party with adherents of all the theories I mentioned above to lobby against the “gravitational establishment”. But their collective existence means that each has to have, as part of their psychological and sociological theory, that it is very easy to be deluded into believing a crackpot, unjustified theory of gravity. No particular theory, including any of theirs, gets the presumption of truth.
We begin with no presumption that mass is attracted to other mass inversely proportional to the square of the distance. We don’t need one to end up assigning that hypothesis high odds, because for it there is truly a ton of evidence.
No particular theory is unique in postulating rampant confabulation and motivated cognition behind beliefs about gravity. Every theory, even the a-gravity-ist ones, postulates this, so there is nothing that a-gravity-ism is required to explain, or is superior at explaining, even if most intelligent people have been a-gravity-ists. This is particularly true when a-gravity-ism was the default belief.
And when something is found that better describes matter’s behavior, such as relativity, we see how the new theory says the old one was a good approximation; the ton of evidence was not simply violated.