I like how this is similar to my last few years but in reverse. I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist. In the meantime I lost faith in the sanity of humans and decided I basically wasn’t on their side anymore, which is a much more complex intrapersonal dynamic than it sounds.
For the last 6 months I’ve been downright obsessed with “morality”, though a less lossy way of putting it is like “that thing in the middle of justification, decision theory, institutional economics, ontology of agency, computer-science-inspired moral philosophy, teleology & timelessness, physicalism vs. computationalism, &c.”.
In the meantime I hit upon the theisms of Leibniz and Aquinas and other semi-neo-Platonistic academic-style philosophers, taking a computational decision theoretic perspective while trying to do justice to their hypotheses and avoiding syncretism. Ultimately I think that academic “the form of the good and the form of being are the same” theism is a less naive perspective on cosmology-morality than atheism is—you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what “right” is, and pretending like you aren’t or are only negligibly watched over by a superintelligence—whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution—is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced—you always want to be on the edge of uncertainty, you don’t want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.—but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.
Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for. One person went so far as to compare it with modern astrology, which I could only respond to with a mental “what is this i dont even”. This was long after I’d lost my faith in the ability of humanity’s finest to show off even a smidgen of sanity but it still managed to make me despair. Humans.
Perhaps more important, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it.
Eliezer got it from trying to build uFAI, Wei_Dai got it from cryptography, lukeprog got it from Christianity, I got it from my ex-girlfriend. I feel so contingent.
Providing a clear explanation of your theories would be useful. You don’t seem to even really try, and instead write comments and posts that don’t even attempt to bridge the inferential distance. At the same time, you do frequently write content where you talk about how you feel superior to LWers. In other words, you say you’re better than us because you don’t give us a real chance to catch up with your thoughts.
That’s kinda rude.
It also makes one suspect that you either don’t actually have a theory that was coherent enough to formulate clearly, or that you prefer to bask in your feeling of superiority instead of bothering to discuss the theory with us lowly LW-ers. Acting in a way to make yourself immune to criticism hardly fits the claim of being “a few levels above nearly any other aspiring rationalist”. Rather, it shows that you’re failing even the very rudiments of rationalist practice 101.
Acting in a way to make yourself immune to criticism hardly fits the claim of being “a few levels above nearly any other aspiring rationalist”. Rather, it shows that you’re failing even the very rudiments of rationalist practice 101.
Being levels above in rationalism means doing rationalist practice 101 much better than others as much as being a few levels above in fighting means executing a basic front-kick much better than others.
Being levels above in rationalism means doing rationalist practice 101 much better than others as much as being a few levels above in fighting means executing a basic front-kick much better than others.
To follow the analogy further if you are a few levels above in fighting then you should not find yourself face-planting every time you attempt a front kick. Or, at least, if you know that front kicks are the one weakness in your otherwise superb fighting technique then you don’t use front kicks.
Before I vote on this post, please clarify whether you think being a few levels above in fighting means executing a basic front-kick much better than others.
Ceteris paribus, being better at front-kicking makes one a better fighter. One would probably need mastery of more than the one technique to be considered levels up: rationalism 102, 103, etc. I just used one example of a basic fighting technique because the sentence flowed better that way; I didn’t put much time into thinking about or formulating it.
But the point was that no advanced techniques are needed to be many levels above normal. I see now that the comment might imply it’s enough to be several levels up with one skill alone. At 45 seconds into this video is a fight between a master of grappling and a regular MMA fighter. If they had made it to the ground together and conscious, Gracie would have won easily. He needed a more credible striking threat so Gomi would have had to defend against that too, and thereby weaken his defense against being taken down.
I meant something like:
I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.
~ Bruce Lee
I have probably heard that quote before, but wasn’t consciously thinking of it.
How do fights end? Not with spinning jumping back-kicks to the head, but with basic moves better executed than basic counters to them. Right cross, arm-bar, someone running away, simple simple.
By analogy, for rationalism I’m emphasizing the connection between basic and advanced rationality mentioned by Kaj_Sotala. If you don’t have the basics, you have nothing, and you can’t make up for it with moderate facility at doing advanced things.
We should perhaps formalize norms for upvoting based on this kind of comment. In any case, I’m doing so. And then going back to read the context to make sure I agree.
I find that the increased attention given to the context combined with the positive priming is more than enough.
In this case, however, I find that the comment backfired. It is Kaj’s comment, not lessdazed’s. Lessdazed’s comment isn’t bad as an independent observation but does miss the point of its parent. This means Eliezer’s “upvote MOAR” comment is a red herring and I had to downvote it and lessdazed in response where I would otherwise have left them alone.
You could instead make a post more explicitly about how rationality is a set of skills that must be trained. I keep trying to get this into people’s heads but you are in a much better position to do so than I am, and it’s an important thing to be aware of. Like, really important.
(I always end up making analogies to chess or guitar, perhaps you could make analogies to computer programming?)
You’re still operating under the assumption that Will_Newsome cares, beyond a certain very low fundamental threshold, what we think about him and/or his theories.
Can someone tell me what my theories are? Maybe it’s the sleep deprivation but I don’t remember having any theories qua theories. I talk about other people’s theories sometimes, but mostly to criticize them, e.g. my decision theoretic arguments against naive interpretations of academic theism (of the sort that Mitchell Porter rightly finds misguided).
They don’t have to be your theories in the sense that you originated them, we just mean “your theories” as in the theories/models/beliefs/maps you personally use, and that you often mention in passing in your posts, but without much detail.
For example: what does Aquinas have to do with TDT? That’s not a specific question (though I’d like to hear your answer!) so much as a hint as to the sort of things that come across as empty statements to us; it’s not at all obvious (to me, at least) how you are relating together the various things you mention in a given sentence, or how you are arriving at your conclusions. It’s like there’s a bunch of big invisible “this lemma left as an exercise for the reader” sentences in the middle of your paragraphs.
At the very least, you could provide links back to some of your longer posts which explain your ideas in a step-by-step fashion. Inferential distance, dude.
I don’t understand your writings enough to know for sure. However, for example,
Ultimately I think that academic “the form of the good and the form of being are the same” theism is a less naive perspective on cosmology-morality than atheism is
is a conclusion that surely must have come from some nontrivial body of beliefs. Maybe that’s not what you mean by theory qua theory, but I suspect that’s what Kaj_Sotala meant.
Whatever this underlying framework is, it would be nice to evaluate someday.
Have I ever claimed to have any “theories”? I claim to have skills. I have expounded on what some of these skills are at various points. How am I acting in a way that makes myself immune to criticism? If I am trying to do that it would appear that I am failing horribly considering all the criticism I get. In other words, what you’re saying sounds very reasonable, but are you talking about reality or instead a simplified model of the situation that is easy to write a nice-sounding analysis of? That’s an honest question.
Have I ever claimed to have any “theories”? I claim to have skills. I have expounded on what some of these skills are at various points.
This certainly sounds like a theory, or a bunch of them, to me:
Ultimately I think that academic “the form of the good and the form of being are the same” theism is a less naive perspective on cosmology-morality than atheism is—you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what “right” is, and pretending like you aren’t or are only negligibly watched over by a superintelligence—whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution—is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced—you always want to be on the edge of uncertainty, you don’t want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.—but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.
Certainly you keep saying that you feel superior to LWers because they don’t know the things you do. You may call that knowledge, theory, skill, or just claims, however you prefer. But while you have expounded on it somewhat, you haven’t written anything that would try to systematically bridge the inferential distance. Right now, the problem isn’t even that we wouldn’t understand your reasons for saying what you do, the problem is that we don’t understand what you are saying. Mostly it just comes off as an incomprehensible barrage of fancy words.
For instance, my current understanding of your theories (or skills, or knowledge, or whatever) is the following. One, you claim that because of the simulation argument, theism isn’t really an unreasonably privileged claim. Two, this relates to TDT somehow. Three, that’s about all I understand. And based on your posting history that’s about all that the average LW reader could be expected to know about the things you’re talking about.
That’s what my claim of you making yourself immune to criticism is based on: you currently cannot be criticized, because nobody understands your claims well enough to criticize them (or for that matter, agree with them), and you don’t seem to be making any real attempt to change this.
In other words, what you’re saying sounds very reasonable, but are you talking about reality or instead a simplified model of the situation that is easy to write a nice-sounding analysis of?
I’m talking about my current best model of you and your claims, which may certainly be flawed. But note that I’m already giving you an extra benefit of doubt because you seemed sane and cool when we interacted IRL. I do still think that you might be on to something reasonable, and I’m putting some effort into communicating with you and inspecting my model for flaws. If I didn’t know you at all, I might already have dismissed you as a Time Cube crank.
I too used to have a disorder that made me occasionally write nonsense. In my case it turned out to be fixable by reading a lot of LW, in particular Eliezer’s and Yvain’s posts, and then putting a lot of work into my own posts and comments to approach their level of clarity. It was hard at first, but after a while it became easier. Have you tried that?
Right now your writings look very stream-of-consciousness to me, like you don’t even write drafts. Given all the criticism you get, this is kind of unacceptable. Many LWers write drafts and send them to each other for critique before posting stuff publicly. I often do that even for discussion posts.
Right now your writings look very stream-of-consciousness to me, like you don’t even write drafts. This is kind of unacceptable. Many LWers write drafts and send them to each other for critique before posting stuff publicly. I often do that even for discussion posts.
Errr… wait. We do that? Ooops. Sometimes I proof-read and sometimes I make edits to my comments as soon as I post them. Does that count?
Yeah, some of us do. Your posts are pretty good as they are, but hey, now you know a way to make them even better! I volunteer to read drafts anytime :-)
I remember thinking it was ironic how in the Wikipedia article on learned helplessness, when they talk about the dogs the tone is like “oh, how sad, these dogs are so demoralized that they don’t even try to escape their own suffering”, but when it came to humans it was like “oh look, these humans seem to have a choice about whether or not they suffer but they’re acting as if they don’t have that choice so as to avoid blame and avoid putting forth effort to change their situation”; which if taken seriously sort of undermines the hypothesis that the behavioral mechanisms are largely the same in both cases. But you could tell it was totally unconscious on the part of the writers, and if you’d tried to point it out to them they could just backpedal in various ways, so there’d be no point in trying to point out the change in perspective; it’d just look like defensiveness. And going meta like this probably wouldn’t help either.
Ultimately I think that academic “the form of the good and the form of being are the same” theism is a less naive perspective on cosmology-morality than atheism is—you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what “right” is, and pretending like you aren’t or are only negligibly watched over by a superintelligence—whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution—is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced—you always want to be on the edge of uncertainty, you don’t want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.—but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.
I think this might be what Kaj means when he mentions your ‘theories.’ Let’s take your “the form of the good and the form of being are the same” theory of cosmology-morality, for example. (You call it a ‘perspective’, but I just mean ‘theory’ in a very broad sense, here.) If you’ve explained it clearly on Less Wrong anywhere, I missed it. Of course you don’t owe us any such explanation, but that may be the kind of thing Kaj is talking about when he says that “You don’t seem to even really try [to explain your ideas], and instead write comments and posts that don’t even attempt to bridge the inferential distance. At the same time, you do frequently write content where you talk about how you feel superior to LWers.”
Also, you contrast your theory of cosmology-morality with ‘atheism’, as if atheism is a theory of cosmology-morality, but of course it’s not. So that’s confusing. The rest of the paragraph is a dense jumble of concepts and half-arguments that could each mean half a dozen different things depending on one’s interpretation, and is thus incomprehensible—to me, anyway.
Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for. One person went so far as to compare it with modern astrology, which I could only respond to with a mental “what is this i dont even”.
I agree that there are forms of theism much more sophisticated than anything I’ve read in astrology. But as someone who has read the leading analytic theistic philosophers—Alvin Plantinga, Peter van Inwagen, William Alston, Charles Taliaferro, Alexander Pruss, John Hare, Robin Collins, Timothy McGrew, Marilyn McCord Adams, Bill Craig, William Hasker, Timothy O’Connor, Eleonore Stump, Keith Yandell, and others—I can somewhat knowledgeably confirm that theism is probably not worth studying.
Have you read Thomas Aquinas or Gottfried Leibniz? It’d be cool if there was something we’d both read such that we could have an object-level discussion. I am not familiar with modern theism. Plantinga and Craig I’m mildly familiar with thanks to your blog, but they seemed third-rate compared to the original thinkers.
Okay, hm. You’re busy all the time but if ever you have some time free I’d like to brainstorm about how we might have something like a “rational debate”. E.g. the optimal set-up might be meeting in person where we can go back and forth in real-time to clarify small things while taking a break every few minutes to check the internet for sources and write out better-considered arguments and responses. Considering we live a block away from each other that might actually be possible. It’d incentivize me to put a lot more effort into being understandable. I’m not exactly sure what the topic of such debate would be; I agree with you that theism isn’t worth studying, I only try to argue that it’s really hard to claim that theists are wrong given our current state of uncertainty.
That doesn’t sound like a productive way to address these issues, but it’s true that I shouldn’t put my time into this until at least after a September 30th deadline I’ve got on a project. I’ll keep this in mind.
Also, you contrast your theory of cosmology-morality with ‘atheism’, as if atheism is a theory of cosmology-morality, but of course it’s not.
Huh? I find this to be an odd claim. Atheism is at least implicitly a prediction about where justification certainly doesn’t come from: basically, not from any big, well-organized, monolithic institution/agent/thing.
Sure, but only in the sense that a-fairyism is “a perspective on cosmology-morality.” A-fairyism says that justification doesn’t come from fairies. In the way I typically use the English language, that’s not enough to bother calling a-fairyism “a perspective on cosmology-morality.”
Not even close. This is like the astrology thing. You’re claiming that belief in God is privileging the hypothesis when clearly I do not think that belief in God is privileging the hypothesis. Things like God and truth are already picked out as tenable hypotheses, the support or opposition of which are in fact clear philosophical positions. I’m not sure if I’m being clear; do you see why I think you’re assuming the conclusion here? If not I could try to write out something longer with more concrete examples.
Right, and in that case atheism would also be privileging the hypothesis, which means, yeah, this whole “privileging the hypothesis” thing isn’t really helping.
Assertions can be true, false, incoherent, and other things. Most statements are not true. Single, otherwise perfectly fine statements that imply the falsity of many multitudes of similar otherwise perfectly fine statements cannot be justified by the claim that, in general, otherwise perfectly fine statements get the presumption of validity or consideration. However much one says it is important not to judge statements such as assertions of monotheism, that applies to the statements monotheism excludes, which are more numerous.
Only in the complete absence of evidence. But theism already has a ton of evidence for it and was the default belief of intelligent folk for thousands of years; it’s like saying a-gravity-ism isn’t actually a theory about physics (to take our metaphors to the other extreme from fairies). Assigning a low prior to theism is an abuse of algorithmic probability theory. …Am I missing something?
Assigning a low prior to theism is an abuse of algorithmic probability theory.
Can you explain this? Because I’ve been operating under the following assumption:
It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s Equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
In order to write a computer program that actually computes (rather than models) Maxwell’s equations you have to write a program that writes out a physical universe, and if you want a program that describes Maxwell’s equations then the interpretation you choose is more a matter of pragmatic decision theory than of algorithmic probability theory, at least in practice. (Bounded agents aren’t exactly committing an error of rationality when they don’t try to act like Homo Economicus; that would be decision theoretically insane.)
But anyway. Specific things in the universe don’t seem to be caused by gods. Indeed, that’d be hella unparsimonious: “God chose to add some ridiculous number of bits into His program just to make it such that there was a ‘Messiah gets crucified’ attractor?”. The local universe as a whole, on the other hand, is this whole other thing: there’s the simulation argument.
Your comment got voted up to +10 despite Eliezer’s argument being a straightforward error of algorithmic probability; I don’t know what to do about that and it stresses me out. Does anyone have ideas? It saddens me to see algorithmic probability so regularly abused on LW, but the few corrective posts on the matter, e.g. by Slepnev, don’t seem to have permeated the LW memeplex, probably because they’re too technical.
I think you are slightly misinterpreting things. As you pointed out, the established memeplex does lean heavily in favor of Eliezer’s position on algorithmic probability theory rather than Slepnev’s. But that doesn’t mean that all of the upvoters agree with Eliezer’s position—some of them probably just want to see you answer my question “Can you explain this?”. In fact, I would very much like to see this question answered thoroughly in a way that makes sense to me. Vladimir’s posts are a great start, but lacking knowledge of algorithmic probability theory, I don’t really know how to put all of it together.
What we really need is a well-written gentle introduction to algorithmic probability theory that carefully and clearly shows how it works and what it does and doesn’t imply.
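To make the intuition concrete, here is a toy sketch of the idea usually gestured at with “universal prior”: each hypothesis is weighted by 2 to the minus its description length in bits, so shorter programs dominate the prior. This is only an illustration, not the real (uncomputable) Solomonoff construction, and the hypothesis names and bit-lengths below are invented numbers for the example, loosely echoing Eliezer’s Maxwell-vs-Thor comparison quoted above.

```python
# Toy sketch of a universal-prior-style weighting: hypotheses are scored by
# 2^-(description length in bits) and normalized. The hypotheses and their
# "description lengths" are made-up illustrative numbers, not real estimates.

def universal_style_prior(description_lengths):
    """Map {hypothesis: bits} to a normalized prior favoring short descriptions."""
    weights = {h: 2.0 ** -bits for h, bits in description_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical description lengths, in bits, for three world-models:
lengths = {"maxwell_equations": 100, "emotional_thor": 500, "lookup_table": 1000}
prior = universal_style_prior(lengths)
# The simplest description ends up with almost all of the prior mass.
```

The point of contention in the thread is not this mechanism but what counts as the “description” of a hypothesis like theism in the first place.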
Well, of course there are both superintelligences and magical gods out there in the math, including those that watch over you in particular, with conceptual existence that I agree is not fundamentally different from our own, but they are presently irrelevant to us, just as the world where I win the lottery is irrelevant to me, even though a possibility.
It currently seems to me that many of such scenarios are irrelevant not because of “low probability” (as in the lottery case; different abstract facts coexist, so don’t vie for probability mass) or moral irrelevance of any kind (the worlds with nothing possibly of value), but because of other reasons that prevent us from exerting significant consequentialist control over them. The ability to see the possible consequences (and respond to this dependence) is the step missing, even though your actions do control those scenarios, just in a non-consequentialist manner.
(It does add up to atheism, as a modest claim about our own world, the “real world”, that it’s intended to be. In pursuit of “steelmanning” theism you seem to have come up with a strawman atheism...)
I don’t know if this is what Will has in mind, but it seems plausible that the superintelligences and gods that would be watching out for us might attempt to maximize the instantiations of our algorithms that are under their domain, so that as great a proportion of our future selves as possible will be saved (this story is vaguely Leibnizian). But I don’t know that such superbeings would be capable of overcoming their own sheer unlikelihood (though perhaps some subset of such superbeings have infinite capacity to create copies of us?). You can derive a self-interested ethics from this too, if you think you’ll be rewarded or punished by the simulator. The choices of the simulators could be further constrained by simulators above them—we would need an additional step to show that the equilibrium is benevolent (especially given the existence of evil in our universe).
But I’m not at all convinced Tegmark Level 4 isn’t utter nonsense. There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated. And can we calculate anthropic probabilities from infinities of different magnitudes?
There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated.
I’d rather say that the so-called “instantiated” objects are no different from the abstract ones, that in reality, there is no fundamental property of being real, there is only a natural category humans use to designate the stuff of normal physics, a definition that can be useful in some cases, but not always.
So there are easy ways to explain this idea at least, right? Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence, and it’s hard for us to get a notion of existence outside of such influence besides a general naive physicalist one. I guess the not-easy-to-explain parts are about decision theoretic zombies where things seem like they ‘physically exist’ as much as anything else despite exerting less influence, because that clashes more with our naive physicalist intuitions? Not to say that these bizarre philosophical ideas aren’t confused (e.g. maybe because influence is spread around in a more egalitarian way than it naively feels like), but they don’t seem to be confusing as such.
Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence
Human decisions are affected by thoughts about counterfactuals. So the question is, what is the nature of the influence that the “content” or “object” of a thought, has on the thought?
I do not believe that when human beings try to think about possible worlds, that these possible worlds have any causal effect in any way on the course of the thinking. The thinking and the causes of the thinking are strictly internal to the “world” in which the thinking occurs. The thinking mind instead engages in an entirely speculative and inferential attempt to guess or feel out the structure of possibility—but this feeling out does not in any way involve causal contact with other worlds or divergent futures. It is all about an interplay between internally generated partial representations, and a sense of what is possible, impossible, logically necessary, etc in an imagined scenario; but the “sensory input” to these judgments consists of the imagining of possibilities, not the possibilities themselves.
How likely what is? There doesn’t appear to be a factual distinction, just what I find to be a more natural way of looking at things, for multiple purposes.
I believe that “exists” doesn’t mean anything fundamentally significant (in senses other than referring to presence of a property of some fact; or referring to the physical world; or its technical meanings in logic), so I don’t understand what it would mean for various (abstract) things to exist to greater or lower extent.
That would require understanding alternatives, which I currently don’t. The belief in question is mostly asserting confusion, and as such it isn’t much use, other than as a starting point that doesn’t purport to explain what I don’t understand.
No, I won’t see that in itself as a reason to be wary, since as I said repeatedly I don’t know how to parse the property of something being real in this sense.
Anyone who has positive accounts of existentness to put forth, I’d like to hear them. (E.g., Eliezer has talked about this related existentness-like-thing that has do with being in a causal graph (being computed), but I’m not sure if that’s just physicalist intuition admitting much confusion or if it’s supposed to be serious theoretical speculation caused by interesting underlying motivations that weren’t made explicit.)
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York. It seems to make sense to ask about probability of various facts being a certain way (in certain mutually exclusive possible states), or about probability of joint facts (that is, dependencies between facts) being a certain way, but it doesn’t seem to me that asking about probabilities of different facts in themselves is a sensible idea.
(Universal prior, for example, can be applied to talk about the joint probability distribution over the possible states of a particular sequence of past and future observations, that describes a single fact of the history of observations by one agent.)
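The parenthetical above can be sketched in code: a universal-prior-style distribution over one agent’s observation sequence, conditioned on the bits seen so far. The candidate generator programs and their “description lengths” are invented for the example; actual Solomonoff induction sums over all programs and is uncomputable.

```python
# Toy sequence prediction under 2^-bits prior weights: keep the generator
# programs consistent with the observed prefix, then weigh their predictions.
# The three models and their bit-lengths below are hypothetical examples.

def predict_next_bit(observed, models):
    """models: list of (generator, bits) where generator(i) gives bit i.
    Returns P(next bit = 1 | observed prefix) under 2^-bits prior weights."""
    n = len(observed)
    # Keep only models that reproduce the observed prefix exactly.
    consistent = [(g, b) for g, b in models
                  if all(g(i) == observed[i] for i in range(n))]
    total = sum(2.0 ** -b for _, b in consistent)
    p_one = sum(2.0 ** -b for g, b in consistent if g(n) == 1)
    return p_one / total

models = [
    (lambda i: 0, 3),                     # "all zeros": short program
    (lambda i: i % 2, 5),                 # "alternating 0101...": a bit longer
    (lambda i: 1 if i == 3 else 0, 20),   # "zeros except bit 3": longer still
]
# After seeing 0,0,0 the short "all zeros" program dominates the posterior,
# so the predicted probability of a 1 next is tiny but nonzero.
p = predict_next_bit([0, 0, 0], models)
```

Note that the whole distribution here is over states of a single observation history, which is exactly the restriction being argued for: no probabilities are assigned to “worlds in themselves.”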
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York.
You just prompted me to make that comparison. I’ve been to New York. I haven’t been to Moscow. I’ve also met more people who have talked about what they do in New York than I have people who talk about Moscow. I assign at least ten times as much confidence to New York as I do Moscow. Both those probabilities happen to be well above 99%. I don’t see any problem with comparing them just so long as I don’t conclude anything stupid based on that comparison.
There’s a point behind what you are saying here—and an important point at that—just one that perhaps needs a different description.
I assign at least ten times as much probability to New York as I do Moscow.
What does this mean, could you unpack? What’s “probability of New York”? It’s always something like “probability that I’m now in New York, given that I’m sitting in this featureless room”, which discusses possible states of a single world, comparing the possibility that your body is present in New York to same for Moscow. These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
I assign at least ten times as much probability to New York as I do Moscow.
What does this mean, could you unpack?
It wasn’t my choice of phrase:
just as you won’t compare probability of Moscow with probability of New York
When reading statements like that that are not expressed with mathematical formality, the appropriate response seems to be resolving them to the meaning that fits best, or asking for more specificity. Saying you just can’t do the comparison seems to be a wrong answer when you can, but there is difficulty resolving ambiguity. For example you say “the answer to A is Y, but you technically could have meant B instead of A, in which case the answer is Z”.
I actually originally included the ‘what does probability of Moscow mean?’ tangent in the reply but cut it out because it was spammy and actually fit better as a response to the nearby context.
These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
Based on the link from the decision theory thread I actually thought you were making a deeper point than that and I was trying to clear a distraction-in-the-details out of the way.
The point I was making is that people do discuss probabilities of different worlds that are not seen as possibilities for some single world. And comparing probabilities of different worlds in themselves seems to be an error for basically the same reason as comparing probabilities of two cities in themselves is an error. I think this is an important error, and realizing it makes a lot of ideas about reasoning in the context of multiple worlds clearly wrong.
God is an exceedingly unlikely property of our branch of the physical world at the present time. Implementations of various ideas of God can be found in other worlds that I don’t know how to compare to our own in a way that’s analogous to “probability”. The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
(I don’t privilege the God worlds in particular, the thought experiment where the Moon is actually made out of Gouda is an equivalent example for this purpose.)
The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
There doesn’t seem to be a problem here. The comparison resolves to something along the lines of:
Consider all hypotheses about the physical world of the present time which include the object “Moscow”.
Based on all the information you have calculate the probability that any one of those is the correct hypothesis.
Do the same with “New York”.
Compare those two numbers.
???
Profit.
Instantiate “???” with absurdly contrived bets with Omega as necessary. Rely on the same instantiation into a specific contrived decision to resolve any philosophical issues along the lines of “What does probability mean anyway?” and “What is ‘exist’?”.
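The procedure in the list above can be sketched as a few lines of code. Everything below is a toy illustration: the hypotheses and their probabilities are invented placeholders, and “probability of a city” is cashed out, as the list suggests, as the total probability of the mutually exclusive world-hypotheses that include that city as an object.

```python
# Toy sketch of the comparison procedure above.  The hypothesis set and all
# numbers are invented purely for illustration, not real estimates.

world_hypotheses = {
    # hypothesis name: (probability, objects the hypothesis includes)
    "consensus geography": (0.990, {"Moscow", "New York"}),
    "New York is a hoax":  (0.001, {"Moscow"}),
    "Moscow is a hoax":    (0.004, {"New York"}),
    "both are hoaxes":     (0.005, set()),
}

def p_object(obj):
    """P(the correct hypothesis includes obj): sum over hypotheses containing it."""
    return sum(p for p, objects in world_hypotheses.values() if obj in objects)

p_moscow = p_object("Moscow")
p_new_york = p_object("New York")
print(p_moscow, p_new_york, p_new_york > p_moscow)
```

Note that this only makes sense because the hypotheses are mutually exclusive states of a single world, which is exactly the interpretation the reply below endorses: the comparison is between ways the one real world could be, not between the cities as free-standing abstract objects.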
What you describe is the interpretation that does make sense. You are looking at properties of possible ways that the single “real world” could be. But if you don’t look at this question specifically in the context of the real world (the single fact possibilities for whose properties you are considering), then Moscow as an abstract idea would have as much strength as Mordor, and “probability of Moscow” in Middle-earth would be comparatively pretty low.
(Probability then characterizes how properties fit into worlds, not how properties in themselves compare to each other, or how worlds compare to each other.)
God is an exceedingly unlikely property of our branch of the physical world at the present time.
Our disagreement here somewhat baffles me, as I think we’ve both updated in good faith and I suspect I only have moderately more/different evidence than you do. If you’d said “somewhat unlikely” rather than “exceedingly unlikely” then I could understand, but as is it seems like something must have gone wrong.
Specifically, unfortunately, there are two things called God; one is the optimal decision theory, one is a god that talks to people and tells them that it’s the optimal decision theory. I can understand why you’d be skeptical of the former even if I don’t share the intuition, but the latter god, the demon who claims to be God, seems to me to likely exist, and if you think that god is exceedingly unlikely then I’m confused why. Like, is that just your naive impression or is it a belief you’re confident in even after reflecting on possible sources of overconfidence, et cetera?
I agree that there are many reasons that prevent us from explicitly exerting significant control, but I’m at least interested in theurgy. Turning yourself into a better institution, contributing only to the support of not-needlessly-suboptimal institutions, etc. In the absence of knowing what “utility function” is going to ultimately decide what justification is for those who care about what the future thinks, I think building better institutions might be a way to improve the probabilities of statistical-computational miracles. I think this with really low probability but it’s not an insane hypothesis even if it is literally magical thinking. (The decision theory and physics backing the intuitions are probably sound, it’s just that it doesn’t have the feel of well-motivatedness yet. It’s more one of those “If I have to choose to spend a few hours either reading about dark matter or reading about where decision theory meets human decision policies I think it’s a potentially more fruitful idea to think about the latter” things.)
I really appreciate that you responded at roughly the right level of abstraction. It seems clear that the debate should be over the extent to which thaumaturgy is possible (including thaumaturgy that helps you build FAIs faster) because that’s the only way “theism” or “atheism” should affect our decision policy. (Outside of deciding which object level moral principles to pursue. I like traditional Anglican Christianity when it comes to object level morality even if I mostly ignore it.)
The decision theory and physics backing the intuitions are probably sound
Not by a long shot. Physics is probably mostly irrelevant here, it focuses only on our world; and decision theory is so flimsy and poorly understood that any related effort should be spent on improving it, for it’s not even clear what it suggests to be the case, much less how to make use of its suggestions.
I’ve seen QM become important because of decision problems where agents have to coordinate between quantum branches in order to reverse time. I can’t go into that here but I’d at least like to flag that there are decision theory problems where things like quantum information theory shows up.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no? Agreed about decision theory. When I said “choose to spend” I meant “I have a few hours to kill but I’m too lazy to do problem sets at the moment”, not “I choose thaumaturgy as the optimal thing to study”.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no?
Okay, that makes sense as a rich playground for acausal interaction. I don’t know what pieces of intuition about physics you refer to as useful for reasoning about acausal effects of human decisions though.
(It does add up to atheism, as a modest claim about our own world, the “real world”
Not if there is evidence of angels and demons in our world, and you can interact with them in at least semi-predictably consequential ways. Which basically everyone believes except the goats, because everyone gets evidence except the goats. Doesn’t it suck to have a mind-universe that actively encourages you to fall into self-sustaining delusions? Yes, yes it does.
ETA: Apparently it’s 2012 now! My resolution: not to fall into self-sustaining delusion! Happy new year LW!
Could you give an example? Like, can you state a specific fact of the world and explain which version of theism it is evidence for, and how it is evidence for that version of theism?
All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch. For the kind of priors people deal with every day, things like “Is Amanda Knox guilty?” or “Will I win the hand of poker?”, evidence of the strength that we have for God’s existence would be more than enough to convince us. But the prior for theism (as it is usually formulated) is so laughably, incomprehensibly low that all this evidence isn’t even enough for a rational person to seriously consider the theistic hypothesis. Will’s claim that a low prior for theism “is an abuse of algorithmic probability theory” is the real issue. Now, that prior can be reduced if the hypothesis involves some process by which the entity could come to exist while conserving complexity (in particular, if that entity evolved and then created this universe). Will however seems to believe in something different than the usual simulation hypothesis. He may endorse something like Divine Simplicity, which is complete and utter nonsense. Word games and silliness as far as I can tell, or at least smacking of a to-me-untenable moral realism.
All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch.
I don’t understand how it’s strong evidence. We have plenty of experience showing that complex stuff is just what you get when you leave simple stuff alone long enough, assuming you’re talking about “complexity” in the thermodynamic sense. For intelligent entities to be elevated as a particular hypothesis, it seems like you need to find things like low entropy pockets and optimization behavior.
All of existence is also evidence for the hypothesis that if you leave simple stuff alone long enough complexity arises. And the prior for that is much higher than the theism prior.
If both those hypotheses (thermodynamics, theism) started at the same prior, which one would receive more of a boost upwards after updating on all existence?
In theism’s favor we have mystical experience, purported revelation and claims of miracles. Against, we have the existence of evil and a lot of familiarity with how complexity can come to be through simple processes. Maybe the fact that we keep explaining things that God was once used to explain is metainductive evidence against theism… I really have trouble thinking clearly about this and suspect I’ve biased myself by being an atheist so long. What do you think?
I’m gonna think out loud for a bit, let’s see if this makes sense.
I think that “complexity” is a red herring; it’s dodging the real query. What we’re really interested in is something more like an explanation for why the universe is the way it is, rather than some other universe, including the rather large subset of possible universes that would’ve resulted in nothing very interesting at all happening ever.
So: rather than “theism” and “thermodynamics”, we more generally have “theism” and “everything else” as our two competing chunks of hypothesis-space to explain “why is the universe the way it is?”. Let’s assume that that’s a meaningful question. Let’s also assume that the two chunks have equal prior probability (that is, let’s just forget about comparing minimum message lengths or anything like that, otherwise “everything else” gets a big head start).
Update on direct, personal, but non-replicable experiences of communicating with gods. This is at most very weak evidence in favor of theism, due to what we know about cognitive biases.
Update on negative results of attempting to replicably communicate with gods. This is weak evidence against theism; it is good evidence against a god that can communicate with us and wants to, but it doesn’t say much for the remainder of possible-god-space.
Update on evolution via natural selection as the explanation for humanity’s biological setup. This is also weak evidence against theism; it’s good evidence only against the subset of possible-god-space that wants people to be able to notice them, or that has a particular design idea in mind and goes about creating people to fulfill that idea. Also, given the pretty major flaws of human bodies and minds, it’s good evidence against the subset of possible-god-space where the gods prioritize our happiness (in both the sophisticated fun theoretic sense and the wire-head sense of happiness).
Update more generally on the existence of naturalistic patterns like evolution that can crank out relatively low-entropy things like biological life. Weak evidence against gods in general, good evidence against the subset of possible gods that specifically are interested in and capable of creating biological life.
I can go on like that for a while, but the basic pattern seems to be: “not theism” pulls generally but not majorly ahead, by taking probability mass from the parts of “theism” that involve directly causing stuff that applies only to our particular neck of the universe. Humans and the Earth are pretty weird compared to all the stuff around them, but it seems that gods are not a good explanation for that weirdness.
The hypothesis space for “theism” still has probability mass for gods that do not or cannot directly intervene in favor of privileging universes where humans are the way they are. I’m not sure how big that is compared to the entire hypothesis space of possible theisms; whatever that there is, that’s how badly “theism” in general would be losing to “not theism” if they started out at the same prior.
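The think-out-loud updating scheme above can be made concrete with Bayes’ rule in odds form: start the two hypothesis-chunks at equal priors, then multiply in a likelihood ratio for each observation. This is only a sketch of the scheme, and every likelihood ratio below is an invented placeholder, not a claim about the actual strength of any piece of evidence.

```python
# Minimal sketch of the sequential-update scheme described above.
# "theism" vs "everything else" start at 1:1 odds, as stipulated in the text.
# All likelihood ratios are made-up placeholders for illustration only.

def update(prior_odds, likelihood_ratio):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

odds = 1.0  # equal priors: odds of theism vs everything-else = 1:1
evidence = [
    ("personal experiences of communicating with gods", 1.1),  # very weak for
    ("failed attempts at replicable communication",     0.8),  # weak against
    ("evolution explains humanity's biological setup",  0.7),  # weak against
    ("naturalistic patterns crank out low-entropy life", 0.7), # weak against
]
for observation, ratio in evidence:
    odds = update(odds, ratio)

posterior = odds / (1 + odds)  # convert odds back to a probability
print(round(posterior, 3))
```

With these placeholder ratios the posterior for the theism chunk ends up somewhat below one half, matching the qualitative conclusion above: “not theism” pulls generally but not majorly ahead.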
Your comment definitely pulls me in your direction.
This is hard and probably not fair to do without knowing what else is in “non-theism”. But in general theism has an advantage you’re forgetting: it lets us explain everything we don’t understand with magic. Big Bang, abiogenesis, what have you, theism has been defined in such a way that it can explain anything we can’t already explain. This means everything we don’t understand is evidence for God. I don’t know that the realization that we keep explaining things previously attributable to God swamps this effect. You’re certainly right that the image of God one arrives at is at best indifferent and at worst humorously sadistic (with “averse to science” somewhere in the middle).
I will say that I’m not sure Occam priors actually come from any kind of analytic deduction based on something like algorithmic complexity. That is, I think the whole thing might just be one giant meta-induction on all our confirmed and falsified hypotheses, where simplicity turned out to be a useful heuristic. In which case, I don’t know what the prior was (doesn’t matter), but P(God) is just crazy low.
That’s not necessarily true. You could have a shy god. The better your epistemology gets, the shyer it gets, always staying on the edge of humanity’s epistemology. But it still works miracles when people aren’t looking too closely.
Though I’m not quite sure what kind of god you’re talking about in your comment; it seems weird to me to ignore the only kind of god that seems particularly likely, i.e. a simulator god/pantheon.
But in general theism has an advantage you’re forgetting which is that it lets us explain everything we don’t understand with magic.
If “magic” is the answer to anything we don’t understand, then it isn’t an explanation, it’s just an abbreviation for “I don’t know”. This is hardly an advantage.
Big Bang, abiogenesis, what have you, theism has been defined in such a way that it can explain anything we can’t already explain. This means everything we don’t understand is evidence for God.
If theism can explain anything, it explains nothing. Phlogiston anyone?
I’m not assuming you are arguing for theism. What I assume you’re arguing for is that theism being able to “explain” anything is an advantage for theism, which it is not. I’m not arguing against theism either.
I see what you mean, but how does theism “explaining” currently unsolved mysteries in any way constrain experience? As far as I know, theism postulating “all was created by a god” doesn’t allow me to anticipate anything I can’t already anticipate anyway. Also as far as I know, it’s not as if any phenomena currently not explainable were predicted by any form of theism.
I may be wrong on this though, as I am certainly not a theism expert. If so, this would be actual evidence for theism.
If you bring semi-logical considerations into it then the obvious pro-theism one is Omohundro’s AI drives plus game theory. Simulators gonna simulate. (And superintelligences have a lot of computing resources with which to do so.) (Semi-logical because there are physical reasons we expect agents to work in certain ways.)
I was not using your definition of theism since theism scenarios where the God evolved aren’t distinct hypotheses from “complexity from thermodynamics and evolution”. There is more evidence for your version of God, the simulation argument in particular. But miracles, revelation and mystical experience count far less.
There are timeful/timeless issues ’cuz there’s an important sense in which a superintelligence is just an instantiation of a timeless algorithm. (So it’s less clear if it counts as having evolved.) But partitioning away that stuff makes sense.
There are timeful/timeless issues ’cuz there’s an important sense in which a superintelligence is just an instantiation of a timeless algorithm.
Not true. There are some superintelligences that could be constructed that way but that is only a small set of possible superintelligences. Others have nothing timeless about their algorithm and don’t need it to be superintelligent.
That’s one hypothesis, but I’d only assign like 90% to it being true in the decisions-relevant sense. Probably gets swamped by other parts of the prior, no?
A naive view sees a lump of matter being turned into a program whose execution just happens to correlate with the execution of similar programs across the Schmidhuberian computational ensemble. (If you don’t assume a computational ensemble to begin with then you just have to factor that uncertainty in.) A different view is that there’s no correlation without shared causation, and anyway that all those program-running matter-globs are just shards of a single algorithm that just happens to be distributed from a physical perspective. But if those shards all cooperate, even acausally, it’s only in a rather arbitrary sense that they’re different superintelligences. It’s like a community of very similar neurons, not a community of somewhat different humans. So when a new physical instantiation of that algorithm pops up it’s not like that changes much of anything about the timeless equilibrium of which that new physical instantiation is now a member. The god was always there behind the scenes, it just waited a bit before revealing itself in this particular world.
I apologize for the poor explanation/communication.
I think it’s more something like “moral realism” than like word games. It’s (I think) isomorphic to the hypothesis that all superintelligences converge on the ‘same decision algorithm’: and of course at that point in the discussion a bunch of words have to get tabooed and we have to get technical and quantitative (e.g. talking about Goedel machines and such, not about arbitrary paperclip maximizers which may or may not be possible).
And I dunno about Divine Simplicity. I really do prefer to talk in terms of decision theory.
You (lately) misuse “isomorphic”, which is a word reserved for very strong relationship. “Analogy” or even “similarity” or “metaphor” would describe these relations better.
Sorry. In my defense I felt a sharp pain each time I did it, but figured that ‘analogous’ wasn’t quite right (wasn’t quite strong enough, because Thomas Aquinas and I are actually talking about the same decision policy, maybe). Maybe if I knew category theory I could make such comparisons precise.
With Leibniz it’s a lot clearer that his God was a programmer trying to make most efficient use of His resources to do the optimal thing, and he had intuitions but of course not any explicit language to talk about what that algorithm would look like. That’s roughly the extent to which I think I’m thinking of the same decision algorithm as Aquinas, the convergent objective decision theory. The specifics of that decision theory, nobody knows. The point is that none of the best thinkers were thinking about a big male human in the sky, and were instead thinking about Platonic algorithms, ever since early Christianity was influenced by neoplatonism. Leibniz made it computationalesque, but only recently, with decision theory, has theology become truly mathematical.
Maybe. In this case, most would agree that at this level of vagueness saying that two thinkers are contemplating exactly the same idea is incorrect and misleading terminology, and your comment suggests that you don’t actually mean that.
Okay. It’s like a hypothesis about future revelations, where both Aquinas and I are being shown a series of different agents and we’d agree more than my prediction of LW priors would suggest as to which of those agents were more or less Godlike. It’s like we have different labels for what is ultimately the same thing but we don’t even know what that thing is yet; but the fact that they’re different labels is misleading as to the extent to which we’re talking or not talking about what is ultimately the same thing. Still, point taken.
/shrugs I’d be very surprised, but I know nothing about modern theology. I’ve been reading philosophy by working my way forward through time. If there were/are any competent computer scientist/theologians after Leibniz then I do not yet know about them.
(ETA: I suppose I could become one if I put my mind to it but unfortunately I have this whole “figuring out how moral justification works so that everything I love about the world doesn’t perish” thing to deal with.)
That’s fair. My probability for that is probably pretty close to my probability for a strong version of the simulation hypothesis+moral realism. Though it seems to me that a lot of people here think moral realism is much more likely than I do- which makes me confused about why I seem to take your ideas more seriously than others here. You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.
You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.
I consistently talk about things I have 1-20% confidence in in a way that makes me sound like I have 80-95% confidence in them. This is largely because there’s no way to non-misleadingly talk about things with 1-20% logical probability (1-20% decision theoretic importance whatever-that-means). It’s really a problem with norms of communication and English language, one of the few things where it’s not my fault that I can’t communicate easily. Most of the time I just suck at communicating.
Unfortunately, good rationalists should spend a lot of time hovering around things with 50% probability of being true, and anything moderately on the lower side of that ends up sounding completely ridiculous and anything moderately on the higher side of that ends up sounding completely reasonable.
Then just write “around 1-20%”. It will make your comments more clunky, but it’s not like they can get much worse anyway, and it’s better than the alternative.
It’s complicated. The three versions of theism I can immediately think up are I suppose like “some superintelligent agent is computing us and this is important for our decisions”, “all superintelligences converge on the same superintelligent supermoral superpowerful decision algorithm-policy”, and “all superintelligences converge on the same superintelligent supermoral decision algorithm-policy and this is important for our decisions”. In our current state of knowledge these questions are more logical or indexical-the-way-that-word-used-to-make-sense-before-decision-theory than physical (not to say those are fundamentally different kinds of uncertainty, as I believe Nesov likes to point out). So if I start talking about specific facts of the world then I have to start talking about specific facts about logical attractors akin to how fractal structures are attractors for evolving systems, and I can’t point to something nice and concrete like the supposed resurrection of Jesus. This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn’t think the difficulty is insurmountable and one should lose hope already.
There is a wiser part of me that invariably agrees with that, it’s just this stupid motivational coalition of mine that anti-anti-wants to warn others when they’re absolutely certain of something they shouldn’t be absolutely certain about where my warning them has some at least tiny chance of convincing them to be less complacent or notice confusion, so that I won’t be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I’m doing more harm than good the other coalition goes “Oh, you’re telling me to shut up and be evil. Doesn’t this sound familiar...”
if I didn’t think the difficulty is insurmountable and one should lose hope already.
Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so I’d like to make a bet.
(I parsed your comment like three different ways when I used three different inductive biases.)
Like I said in an earlier comment, you can’t just state this without a justification to this audience. It may well be that there’s a perfectly good justification for this statement, but we’re at the wrong inferential distance for it. If you want us to update on this supposed evidence for theism you’re going to have to guide us to it, via short, individually supported, straight-forward steps.
[...]was the default belief of intelligent folk for thousands of years[...]
This is very weak evidence; consider ideas like the aether, or the standard whipping-boy ’round these parts, phlogiston.
I do not think that theism has a ton of evidence for it. In particular treating things as simply evidence for theism is usually wrong. Things purported to specifically show the truth of Christianity, like Jesus’ image in a shroud, can’t be added to purported miracles worked by Shamans sating warring gods by sacrificing chickens, or humans, for example.
The more the truth is shown within one theory, the more probability mass it steals from others, including atheist theories—and by the time the dust settles after the first round of considering evidence, there are equally plausible theistic beliefs that each disqualify many other similarly theistic ones proportional to their likelihood of being true. The best conclusion is that intelligent people are adept at believing untrue claims about religion similar to folk beliefs around them. Every theistic philosophy has to postulate massive credulity by otherwise intelligent humans about wrong religious claims.
A-gravity-ism isn’t a theory of physics. I can’t tell if that means a theory saying that everything expands in size, creating the illusion of things being attracted to things proportional to size, or a theory saying that this universe is a simulation run from one without gravity as a physical law, or a theory that everything has an essence that seeks other essences in a way unrelated to mass, or what. The denial of anything other than an impossibly exhaustive conjunctive and disjunctive statement isn’t a theory.
Gravity deniers may form a political party with adherents of all the theories I mentioned above to lobby against the “gravitational establishment”. But their collective existence means that each has to have as part of their psychological and sociological theory that it is very easy to be deluded into believing a crackpot, unjustified theory of gravity. No particular theory, including any of theirs, get the presumption of truth.
We begin with no presumption that mass is attracted to other mass inversely proportional to the square of the distance. We don’t need that presumption to end up assigning it the odds we do, because for that hypothesis there is truly a ton of evidence.
We don’t see any particular theory uniquely improbably postulating rampant confabulation and motivated cognition implicated in beliefs about gravity. Every theory, even the a-gravity-ist ones, also postulates this, so there is nothing to explain that an a-gravity-ism is required to explain, or is superior at explaining, including if most intelligent people have been a-gravity-ists. This is particularly true when a-gravity-ism was the default belief.
And when something is found that better describes matter’s behavior, such as relativity, we see how the new theory says the old one was a good approximation; the ton of evidence was not simply violated.
So I’m thinking to myself, around six years ago, “I can at least manage to publish timeless decision theory, right? That’s got to be around the safest idea I have, it couldn’t get any safer than that while still being at all interesting. I mean, yes, there’s these possible ways you could let these ideas eat your brain but who could possibly be smart enough to understand TDT and still manage to fall for that?”
Lesson learned.
I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist.
And this is what several levels above me looks like? I’m not omnipotent, yet, but I have a deed or two to my name at this point; for example, when I write Harry Potter fanfiction, it reliably ends up as the most popular HP fanfiction on the Internet. (Those of you who didn’t get here following HPMOR can rule out selection effects at this point.) Several levels above me should make it noticeably easier to show your power in a third-party-noticeable fashion, and the fact that you can’t do so should cause you to question yourself.
It’s the opposite of the lesson I usually try to teach, but in this one case I’ll say it: it’s not the world that’s mad, it’s you.
And this is what several levels above me looks like? I’m not omnipotent, yet, but I have a deed or two to my name at this point; for example, when I write Harry Potter fanfiction, it reliably ends up as the most popular HP fanfiction on the Internet. (Those of you who didn’t get here following HPMOR can rule out selection effects at this point.) Several levels above me should make it noticeably easier to show your power in a third-party-noticeable fashion, and the fact that you can’t do so should cause you to question yourself.
This doesn’t obviously follow to me. There are skill sets which aren’t due to rationality. Your own skill sets may be due in part to better writing capability and general intelligence.
I mean, yes, there’s these possible ways you could let these ideas eat your brain but who could possibly be smart enough to understand TDT and still manage to fall for that?”
Make something idiotproof and the universe will build a better idiot.
Don’t hold yourself responsible when people go funny in the head on TDT-related matters. Quantum mechanics and relativity have turned many more brains to mush; does that mean they shouldn’t have been published?
I got my intuitions from ADT, not TDT, and I would’ve gotten all the same ideas from Anna/Steve even if you hadn’t popularized decision theory. (The general theme had been around since Wei Dai in the early 2000s, no?) So you shouldn’t learn that lesson to too great an extent.
Thanks; yeah, I wasn’t writing carefully, but I didn’t mean to say that “I am a significantly better rationalist than anybody else on the planet”, I meant to say “there are important subskills of rationality where I seem to be at roughly the SingInst Research Fellow level of rationality and high above the Less Wrong poster level of rationality”. My apologies for being so unclear.
It’s the opposite of the lesson I usually try to teach, but in this one case I’ll say it: it’s not the world that’s mad, it’s you.
I don’t think he is “mad”, at least not if you press him enough. A few weeks ago I posted the following comment on one of his Facebook submissions:
Will, this is off-topic, but I’m curious. What would you do if (1) any action would be ethically indifferent, (2) the expected utility hypothesis was bunk, and (3) all that really counted was what you want based on naive introspection?
I’m asking because you (and others) seem increasingly to lose yourselves in logical implications of maximizing expected utility and ethical considerations.
Take care that you don’t confuse squiggles on paper with reality.
His reply (emphasis mine):
Alexander, I don’t think that’s a particularly good model of my actual reasoning. The simple arguments I have for thinking about what I think about don’t involve Pascalian reasoning or conjunctions of weird beliefs, and when it comes to policy I am one of the most vocal critics on LW of the unfortunate trend where otherwise smart people attempt to implement complicated policies due to the output of some incredibly brittle model, often without even taking into account opportunity costs or even considering any obviously better meta-level policies. That is insanity, and completely unrelated to any of the kinds of thinking that I do.
The reasons for my current obsessions are pretty simple, though it’s worth noting that I am intentionally keeping my options very, very open.
Seed AI appears to be very possible to engineer. “Provably”-FAI isn’t obviously possible to engineer given potential time constraints. If we could make a seed AI that was reflective enough, for example due to a strong founding in what Steve Rayhawk wants from a “Creatorless Decision Theory”, and we had strong arguments about attractors that such an agent might fall into, and we had reason to believe that it might converge on something like FAI, then there might come a time when we should launch such a seed AI, even without all the proofs—for example due to being in a politically or existentially volatile situation.
Between BigNum-maximizer Goedel machine-like foomers and provably-FAI foomers, there’s a long continuum of AIs that are more or less reflective on the source of their utility function and what it means that some things rather than some other things caused that particular utility function to be there rather than some other one. The typical SingInst argument that a given AGI will be some kind of strict literalist with respect to what it thinks is its utility function is simply not very strong. In fact, it even contradicts Omohundro’s Basic AI Drives paper, which briefly addresses the topic: “For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit.” Some small amount of reflection would seem to open the door for arbitrarily large amounts of reflection, especially if the AI is simultaneously modifying its decision theory—obviously we’d rather avoid an argument of degree where unchained intuitions are allowed to run amok.
We can make the debate more technical by looking at Goedel machines and program semantics. I have some relevant ideas, but perhaps Schmidhuber’s talk about some Goedel machine implementations in a few days at AGI2011 will prove enlightening.
I’m already losing steam, so we’ll just call that Part One. Part Two and maybe a Part Three will talk about: decision theories upon self-modification; decision theory in context; abstract models of optimization & morality; timeless control and game theory of the big red button; and probably other miscellaneous related ideas.
But after all that I don’t really know how to answer your question. Wants… Even if somehow the thousand aversions that are shoulds were no longer supposed to compel me, they’d still be there, and I’d still be motivationally paralyzed, or whatever it is I am. I’d probably do the exact same things I’m doing now: living in Berkeley with my girlfriend, eating good food, regularly visiting some of the coolest people on Earth to talk about some of the most interesting ideas in all of history. All of that sounds pretty optimal as far as living on a budget of zero dollars goes. If the aversions were lifted, but I was still me, then I haven’t a good idea what I’d do. I’d be happy to immerse myself in the visual arts community, perhaps, or if I thought I could be brilliant I’d revolutionize music cognition and write by far the best artificial composer algorithms. I’d go to various excellent universities for a year or two, and if somehow I found an easy way to make money along the way, e.g. with occasional programming jobs, then I’d frequently travel to Europe and then Asia. I imagine I’d spend very many months in Germany, especially Bavaria. Walking along green mountains or resting under trees in meadow orchards, ideally with a MacBook Pro and a drawing tablet handy. I’d do much meditation and probably progress very quickly, and at some point I expect I’d develop a sort of self-refuge. But I don’t know, I’m just saying things that sound nice as if I can’t have them, and I may very well end up doing most of them no matter what future I lead.
It seems to me that he’s still with the rest of humanity when it comes to what he is doing on a daily basis and his underlying desires.
I don’t think he is “mad”, at least not if you press him enough.
(You argue that the madness in question, if present, is compartmentalized. The intended sense of “madness” (normal use on LW) includes the case of compartmentalized madness, so your argument doesn’t seem to disagree with Eliezer’s position.)
“For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated”
Hold on. Motivated by what? If its objectives are only implicit in the structure, then why would these objectives include their self-preservation?
It’s an attempt to better unify causal graphs with algorithmic information. The sections about various Markov properties are, I think, very important for explaining differences between CDT and TDT, ’cuz you can talk more clearly about exactly where a decision problem can’t be solved due to Markov condition limitations.
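For a concrete sense of where the Markov condition bites, consider Newcomb’s problem: the agent’s decision and the predictor’s prediction share a common cause, so CDT-style causal surgery on the decision node discards a dependence that plain conditioning keeps. The following is only a toy sketch; the predictor accuracy, payoffs, and the 0.5 prior are illustrative assumptions, not anything from the source.

```python
# Toy Newcomb's problem: the prediction and the decision share a
# common cause, so intervening on the decision (CDT-style surgery)
# severs a dependence that conditioning (EDT/TDT-style) preserves.

ACCURACY = 0.99  # assumed predictor accuracy (illustrative)

def expected_payoff(one_box: bool, cdt_surgery: bool) -> float:
    # P(big box is full | decision). Under CDT's causal surgery the
    # contents are independent of the intervened-on decision, so we
    # fall back to a prior; here P(predicted one-boxing) = 0.5.
    if cdt_surgery:
        p_full = 0.5
    else:
        p_full = ACCURACY if one_box else 1 - ACCURACY
    big = 1_000_000 * p_full
    small = 0 if one_box else 1_000
    return big + small

# Conditioning on the decision: one-boxing looks far better.
assert expected_payoff(True, cdt_surgery=False) > expected_payoff(False, cdt_surgery=False)
# After surgery: two-boxing dominates, by exactly the small box.
assert expected_payoff(False, cdt_surgery=True) == expected_payoff(True, cdt_surgery=True) + 1_000
```

The point of the sketch is just that the two decision theories differ in which conditional independences they impose on the same graph, which is the Markov-condition issue mentioned above.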
In the meantime I hit upon the theisms of Leibniz and Aquinas and other semi-neo-Platonistic academic-style philosophers, taking a computational decision theoretic perspective while trying to do justice to their hypotheses and avoiding syncretism. Ultimately I think that academic “the form of the good and the form of being are the same” theism is a less naive perspective on cosmology-morality than atheism is—you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what “right” is, and pretending like you aren’t or are only negligibly watched over by a superintelligence—whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution—is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced—you always want to be on the edge of uncertainty, you don’t want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.---but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.
In what sense is this paragraph supposed to be distinguishable from gibberish?
I like how this is similar to my last few years but in reverse. I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist.
My own perspective on this is that most of the aspiring rationalists in the community have their own specialties and niches, and that if I blind myself to skills other than my own, they all look lower-level, but that if I pay attention to what they’re focused on then I see things I can learn from them. Or to put it more succinctly, their levels are in different character classes. While I certainly don’t have faith in anyone’s sanity, I don’t feel like this should put me on an opposing side under ordinary circumstances. I now regret not having met you when I was in the Bay area for rationality bootcamp or Burning Man, but hopefully will get a chance to remedy that the next time I’m in the area.
I agree with this perspective and in retrospect should really have emphasized the “there are many skills of rationality and I only claim to be excellent along those dimensions that I (probably after-the-fact) deem important, skills relating to building lots of models without getting attached to them and finding subtle ways in which concepts are dissatisfactory and must be improved” aspect of my alleged superiority to everything under the sun.
These skills don’t seem to actually slay any problem-monsters or do anything helpful, where wizards and clerics leave a trail of steaming corpses of those monster types. Your rare class seems to be an NPC one, like commoner or adept, which would give you a low CR.
Is there non-dualist theism? If not, that’s the bottleneck making dismissal of theism justified, though ignorance does not excuse inaccurate descriptions of theism.
My problem with Will’s outlook is that if we are indeed being “watched over by a superintelligence”, it doesn’t appear to care about us in any very helpful way. Our relationship to it is therefore more about survival than it is about morality. According to the scenario, there is some thing out there which is all-powerful, whose actions depend partly on our actions, and which doesn’t care about {long list of evolutionary and historical holocausts}, in any way that we would recognize as caring. Clearly, if we had any idea of the relationship between our actions and its actions, it would be in one’s interest, first of all, to act so that it would not allow various awful things to happen to you and anyone you care about, and second, to act so that you might gain some advantage from its powers.
It appears that the only distinctive reason Will has for entertaining such a scenario is the usual malarkey about timeless game-theoretic equilibria… A while back, I was contemplating a post, to be called “Towards a critique of acausal reason”, which was going to mention three fallacies of timeless decision theory: acausal democracy, acausal trade, acausal blackmail. The last two arise from a fallacy of selective attention: to believe them possible, you must only pay attention to possible worlds which only care about you in a highly specific way. But for any possible world where there is an intelligence simulating your response and which will do X if you do Y, there is another possible world where there is an intelligence which will do X if you don’t do Y. And the actual multiplicity of worlds in which intelligences make decisions on the basis of decisions made by agents in other possible worlds that they are simulating is vanishingly small, in the set of all possible worlds. Why the hell would you base your decision, regarding what to do in your own reality, on the opinions or actions of a possible entity in another world? You may as well just flip a coin. The whole idea that intelligences in causally disjoint worlds are in a position to trade, bargain, or arrive at game-theoretic equilibria is deeply flawed; it’s only a highly eccentric agent which “cares” strongly about events which are influenced by only an extremely small fraction of its subjective duplicates (its other selves in the space of possible worlds). So some of these “eccentric agents” may genuinely “do deals”, but there is no reason to think that they are anything more than a vanishingly small minority among the total population of the multiverse. (Obviously it would be desirable for people trying to work rigorously in TDT to make this argument in a rigorous form, but I don’t see anything that’s going to change the basic conclusion.)
So that leaves us in the more familiar situation, of possibly being in a simulation, or possibly facing the rise of a superintelligence in the near future, or possibly being somewhere in the guts of a cosmic superintelligence which either just tolerates our existence because we haven’t crossed thresholds-of-caring yet, or which has a purpose for us which extends to tolerating the holocausts I mentioned earlier. All of this suggests that our survival and well-being are on the line, but it doesn’t suggest that we are embedded in an order that is moral in any conventional sense.
We are now advanced enough to tackle this issue formally, by trying to construct an equilibrium in a combinatorially exhaustive population of acausal trading programs. Is there an acausal version of the “no-trade theorem”?
I brought up a similar objection to acausal trade, and found [Nesov_2010]’s reply somewhat convincing.
His reply doesn’t address the problem of potentially prohibitive difficulty of acausal trade; it merely appeals to its theoretical possibility. Essentially, the argument is that “there is still a chance”, but that’s not enough.
“between zero chance of becoming wealthy, and epsilon chance, there is an order-of-epsilon difference”
What does that even mean? Does that mean something like: hypothetical lunar farmers in a hypothetical lunar utopia should send down some ore to Earth, and that actual people hundreds of years earlier in a representative body voted 456-450 not to fund a lunar expedition even with a rider to the bill requiring future farmers to send down ore, but the farmer votes from the future+450 > 456? So the farmers “promised” to send ore?
acausal blackmail
It seems more like a real self-inflicted wound than a fallacy or fake blackmail to me; perhaps we don’t disagree. It’s something that is real if one has certain patterns of mind that one could self-modify away from, I think.
By “acausal democracy”, I mean the attempt to justify the practice of democracy—specifically, the act of voting—with timeless decision theory. No-one until you has attempted to depict a genuinely acausal democracy :-) This doesn’t involve the “fallacy of selective attention”, it’s another sort of error, or combination of errors, in which TDT reasoning is supposed to apply to agents with only a bare similarity to yourself. See discussion here for a related example.
I also think we agree regarding acausal blackmail, that for a human being it can only be a mistake. Only one of those “eccentric agents” with a very peculiar utility function or decision architecture could rationally be susceptible to acausal blackmail—its decision procedure would have to insist that “selective attention” (to just those possible worlds where the specific blackmail threat is being made) is important, rather than attending to other worlds where contrary threats are being made, or to worlds where the action under consideration will be rewarded rather than punished, or to worlds where the agent is simply a free agent not being threatened or enticed by a captor who cares about acausal dealmaking (and those worlds should be in the vast majority).
My problem with Will’s outlook is that if we are indeed being “watched over by a superintelligence”, it doesn’t appear to care about us in any very helpful way.
The only “plausible” (heh) scenario I can come up with is that a future civilization developed backward time travel, but to avoid paradox it required full non-interaction, so it developed a means of close observation without changing that which is observed, and used it to upload everyone upon their information theoretic death.
I don’t think I really have an outlook, I just notice that I am very confused about a lot of things that other people are ignoring. And my social role is different from my betting odds. (I notice I am confused about whether or not this is justified, about what meta-level policy I should have for situations like this.)
((((I feel compelled to stir up drama for people because they are too complacent to stir up drama for themselves. Unfortunately it is hard to stir up drama by going meta.))))
You’re talking about theodicy; have you read Leibniz on the subject? The most existent of all possible worlds, the world that takes the least bits to specify, because existence is good… Anyway I find it plausible that the universe is weird and that miracles do happen, but once luck reveals clearly how its decision policy works you get Goodhart’s law problems, so it lies low. Bow chicka bow wow, God of the gaps FTW.
In A History of Western Philosophy, Bertrand Russell wrote of Leibniz that
His best thought was not such as would win him popularity, and he left his records of it unpublished in his desk. What he published was designed to win the approbation of princes and princesses. The consequence is that there are two systems of philosophy which may be regarded as representing Leibniz: one, which he proclaimed, was optimistic, orthodox, fantastic, and shallow; the other, which has been slowly unearthed from his manuscripts by fairly recent editors, was profound, coherent, largely Spinozistic, and amazingly logical. It was the popular Leibniz who invented the doctrine that this is the best of all possible worlds (to which F. H. Bradley added the sardonic comment “and everything in it is a necessary evil”); it was this Leibniz whom Voltaire caricatured as Doctor Pangloss. It would be unhistorical to ignore this Leibniz, but the other is of far greater philosophical importance.
and Russell seems to think that “best of all possible worlds” is the shallow public theodicy, and “most existent” is the private theodicy, and they are not the same thing—since privately (according to Russell’s account), Leibniz speculated that the world which gets to exist is the one which has the most entities in it (maximum number of entities logically capable of coexisting). But then Russell also writes that Leibniz may have considered this a sign of God’s goodness—it’s good to exist, and God makes the world with the most possible things… I am much more sympathetic to Nietzsche’s metaphysics, as described in the posthumous notes collected in The Will to Power, and his skeptical analysis of the psychology behind philosophies which set forth identities such as Reason = Virtue = Happiness. Nietzsche to my knowledge did not speculate as to why there is something rather than nothing, one reason why Heidegger could see Nietzsche’s ontology as the final stage in the forgetting of Being, but his will-to-power analysis is plausible as an explanation of why beings-who-happen-to-exist end up constructing metaphysical systems which say that to be is good, and to be is inevitable, so goodness is inevitable.
So Nietzsche wrote a bunch of stuff in notebooks and even started writing a book called “The Will to Power”. He abandoned it but used a lot of the ideas in his last few works. Upon his death his anti-Semitic sister arranged the notebooks and abandoned text into “The Will to Power”. Much of it is in line with stuff he published, and that material, it is fair to say, is representative of his views. But where TWTP says things Nietzsche didn’t include in his later works (which were written after the notes used to create TWTP)… it’s likely that he didn’t publish those ideas because he ended up not liking them for whatever reason. Plus, the editorial decisions made by his sister were made by his sister… for example, Nietzsche made lots of organizational outlines, only one of which had “Discipline and Breeding” as a book title; that that outline was chosen in lieu of others is a result of his sister’s ideology (which Nietzsche opposed).
I doubt there is anything in there so far away from Nietzsche’s actual views that you aren’t equipped to talk about Nietzsche (the stuff you talk about above is certainly something he’d be down with). I can’t tell you what specifically is in TWTP that isn’t in his other books because I haven’t read it; it’s usually just something read by Nietzsche scholars.
(Looking at this comment it kind of sounds like I’m playing status games “You read the wrong book.” etc. I don’t mean that, you probably have at least as good an understanding of Nietzsche’s views as I do. Mainly I’m just recommending that you be careful about ascribing all of TWTP to Nietzsche and pointing this out so that people don’t read your comment and then go out and buy TWTP in order to understand Nietzsche. And of course, just because Nietzsche didn’t agree with everything in the book doesn’t mean what’s in there aren’t good ideas.)
But where TWTP says things Nietzsche didn’t include in his later works (which were written after the notes used to create TWTP)… it’s likely that he didn’t publish those ideas because he ended up not liking them for whatever reason.
There are sections of TWTP—e.g. “The Mechanical Interpretation of the World”—which cover topics simply not addressed in any of Nietzsche’s finished works. (By the way, the version of TWTP that I’m familiar with is Walter Kaufmann’s.) So all we can say is that they lack the final imprimatur of appearing in a book “author”ized by Nietzsche himself. There’s no evidence here of a change of opinion. It is at least possible that he would subsequently have disagreed with some of the thoughts anthologized in TWTP—though presumably he agreed with them at the time he wrote them.
On at least one subject—the meaning of the “eternal recurrence”—I believe TWTP shows that a lot of Nietzsche scholarship has been on the wrong track. Many interpreters have said that the eternal recurrence is a state of mind, or a metaphor, anything but a literal recurrence. But in these notes, Nietzsche shows himself to be interested in eternal recurrence as a physical hypothesis. He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally. He thinks this is the world-picture that 20th-century science will produce and endorse. And then—this is the part I think is hilarious—he thinks that lots of people will kill themselves because they can’t bear the thought of their lives being repeated infinitely often in the future cycles of time. The “superman” is supposed to be someone who finds the eternal recurrence a joyous thing, because they love their life and the whole of existence, and the eternal recurrence provides their existence with a sort of eternity that is otherwise not available in a universe of relentless flux. In this regard Nietzsche’s futurology was doubly wrong—first, that isn’t the world-picture that science produces; second, it’s only a very rare individual who would take this claim—the alleged fact of existing again in a distant future aeon—seriously enough to make it the basis for choosing life or death. But I have the same appreciation for the imagination behind this piece of Nietzschean cultural futurology, as I do for the uniquely weird worldviews that are sometimes exhibited on LW. :-)
Well, they were personal notebooks, so who knows how speculative he was being. The key thing is, this wasn’t what he was working on when he died. Published works intervened between TWTP and his death. That combined with the sheer implausibility of the metaphysics you’ve described might suggest he wasn’t that committed to the whole thing ;-). It sounds fascinating though.
He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally.
Are there any arguments for these claims? I’m fascinated by the (often very compelling!) arguments past generations had for how the physical world had to be. Aristotle is the best at this.
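The recurrence step of the argument, at least, is just the pigeonhole principle applied to deterministic dynamics: a deterministic map on a finite state space must eventually revisit a state, after which its trajectory cycles forever. A minimal sketch (the particular map and state count below are arbitrary illustrations, not anything from Nietzsche):

```python
# Any deterministic map on a finite state space must revisit a state;
# from the first repeat onward, the trajectory cycles forever.

def find_cycle(step, start):
    """Iterate `step` from `start` until a state repeats.

    Returns (tail_length, cycle_length).
    """
    seen = {}  # state -> index at which it first appeared
    state, i = start, 0
    while state not in seen:
        seen[state] = i
        state = step(state)
        i += 1
    return seen[state], i - seen[state]

# Example: a deterministic map on the 1000 states 0..999.
step = lambda s: (7 * s + 3) % 1000
tail, cycle = find_cycle(step, start=42)
assert cycle >= 1  # recurrence is guaranteed by pigeonhole
```

This only vindicates the pigeonhole step, of course; whether the physical universe is a deterministic map on finitely many states is exactly the part of Nietzsche’s futurology that 20th-century physics declined to endorse.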
Yes, lots of it. E.g. Leibniz’s monadology is monist (obviously); it’s equivalent to computationalism in fact. But note that it’s not like dualism is well-understood ’round these parts either. It’s really hard to find a way in which you can say that a property dualist is wrong. It tends to be like, yeah, we get it, minds reside in brains, neuroscience is cool and shit, but repeatedly bringing it up as if nobody had ever heard that before is a facepalm-inducing red herring.
It seems that monadology relies on something like Plato’s theory of forms. That fills the role usually played by dualism in theism. Is there theism without that?
Leibniz doesn’t believe in material substance, so in no sense is he a dualist. If you are asking whether there are materialist theists: eh, maybe, but as far as I know it has never been a well-developed view. That said, the entire platonism-materialism question can probably be reduced to an issue of levels of simulation… in which case it is easy to envision a plausible theism that is essentially dualist but not repugnant to our computationalist sensibilities.
If you first tell them, or give them enough information to realize, or strongly suspect, that without this concession by them they fail, then you can get them to agree to very nearly anything.
But those people are slightly different than the versions uninformed of this, people who would reject it.
The unorthodoxy is motivated and not serious in terms of relative degrees of belief based on what is most likely true.
“Fall”? I don’t understand the second sentence either.
The unorthodoxy is motivated and not serious in terms of relative degrees of belief based on what is most likely true.
Often, though on occasion their reasons are isomorphic to stories we’d find plausible. If someone thought it was worthwhile to reinterpret some of the older theistic philosophers in light of modern information theory and computer science… some interesting ideas might fall out.
But yes- I doubt there are more than a handful of educated theists not working with the bottom line already filled in.
The second sentence means I am trying to distinguish between who someone is and who they might have been. Another intuition pump: put identical theists in identical rooms; in one, play a television program explaining how they have to admit that all good evidence makes it unlikely there exists (insert theological thing here: an Adam and Eve, a soul, whatever), and in the other play something unrelated to the issue. Then ask the previously identical people if they believe in whatever poorly backed theological thing they previously believed. The unorthodox will flee the false position, but only if they see it as obviously false.
Often, though on occasion their reasons are isomorphic to stories we’d find plausible.
That doesn’t mean the reasons we find it implausible aren’t good or can’t be taught. Just as teaching how carbon dating relates to the age of the Earth militates against believing it is ~6,000 years old, one can show why what ancestors tell you in dreams isn’t good evidence.
So my conclusion, my supposition, is that if you muster up the most theistic-compatible metaphysics you find plausible, and show it to those theists who don’t know why anything more supernatural is implausible, inconsistent or incoherent, they will reject it.
That they accept it after learning that you have good objections to anything more theistic is not impressive at all.
Got it. Don’t disagree. But it doesn’t follow that a) we should disregard all theistic philosophy or b) not use theistic language. Given that there are live possibilities that resemble theism the circle of concepts and arguments surrounding traditional, religious theism are likely to be fruitful.
But yes- I doubt there are more than a handful of educated theists not working with the bottom line already filled in.
Rationalization is an important skill of rationality. (There probably needs to be a post about that.) But anyway, I think my “theistic” intuitions are very similar to those of Thomas Aquinas, a.k.a. the rock that Catholic philosophy is built on. Like, actually similar in that we’re thinking about the same decision agent and its properties, not just we’re thinking about similar ideas.
Theism without computationalism? It’s not popular, but most Less Wrong folk are computationalists AFAIK. Hence the “timeless decision theory” and “Tegmark” and “simulation argument” memes floating around. I don’t see how a computationalist can ignore theism on the grounds that it claims that abstract things exist.
Because I’ve studied metaphysics? It’s not even a quirky feature of abstract objects; it’s often how they are defined. Now that distinction may be merely an indexical one—the physical universe could be an abstraction in some other physical universe and we just call ours ‘concrete’ because we’re in it. But the distinction is still true.
If you can give an instance of an abstract object exerting causal influence that would be big news in metaphysics.
(Note that an abstract object exerting causal influence is not the same as tokens of that abstraction exerting causal influence due to features that the token possesses in virtue of being a token of that abstract object. That is, “Bayes Theorem caused me to realize a lot of my beliefs were wrong” is referring to the copy of Bayes Theorem in your brain, not the Platonic entity. There are also type-causal statements like “Smoking causes cancer”, but these are not claims of abstract objects having causal influence, just abstractions on individual, token instances of causality. None of this, or my assent to lessdazed’s question, reflects a disparaging attitude toward abstract objects. You can’t talk about the world without them. They’re just not what causes are made of.)
Okay, thanks; right after commenting I realized I’d almost certainly mixed up my quotation and referent. (Such things often happen to a computationalist.)
ETA: A few days ago I got the definition of moral cognitivism completely wrong too… maybe some of my neurons are dying. :/
True, but I think only in the same sense that everyone vastly overemphasizes the importance of Babbage. They both made cool theoretical advances that didn’t have much of an effect on later thinking. This gives a sort of distorted view of cause and effect, but the counterfactual worlds are actually worth figuring into your tale in this case. Wow, that would take too long to write out clearly, but maybe it kinda makes sense. (Chaitin actually discovered Leibniz after he developed his brand of algorithmic information theory; but he was like ‘ah, this guy knew where it was at’ when he found out about him.)
Chaitin actually discovered Leibniz after he developed his brand of algorithmic information theory; but he was like ‘ah, this guy knew where it was at’ when he found out about him.
I should point out that Leibniz had the two key ideas that you need to get this modern definition of randomness, he just never made the connection. For Leibniz produced one of the first calculating machines, which he displayed at the Royal Society in London, and he was also one of the first people to appreciate base-two binary arithmetic and the fact that everything can be represented using only 0s and 1s. So, as Martin Davis argues in his book The Universal Computer: The Road from Leibniz to Turing, Leibniz was the first computer scientist, and he was also the first information theorist. I am sure that Leibniz would have instantly understood and appreciated the modern definition of randomness.
OTOH, Wiener already in 1948 explicitly saw the digital computer as the fulfilment of Leibniz’s calculus ratiocinator. (Quoted on Wiki here, full text (maybe paywalled) here.)
(The history of how the idea of computation got formulated is really pertinent for FAI researchers. Justification is a lot like computation. I think we’re nearing the “Leibniz stage” of technical moral philosophy. Luckily we already have the language of computation (and decision theory) to build off of in order to talk about justification. Hopefully that will reduce R&D time from centuries to decades. I’m kind of hopeful.)
E.g. this is what most theism actually looks like: http://plato.stanford.edu/entries/divine-simplicity/ . A lot of it is simply hypotheses about attractors for superintelligences and the Platonic algorithms that they embody. Trust me, I am not just being syncretic.
Please make a claim. Are you saying that if one were to take a proxy for quality like citations to papers/capita of religious studies branches of universities, or the top theological seminaries attached to the most competitive Ivy League Schools, or similar, you are 95% confident that at least 70% of the theist professors believe something like this?
Or is it a stronger claim? With 50% confidence, what percentage of counties and county-equivalents in the United States have most self-identified theists or spiritualists or whatever believing something like this? 50%? 10%?
In what percentage are there at least ten such people?
I don’t see how that is the claim at issue. Most people are incompetent. That tells us little about what theism is. How would knowing the answer tell us anything useful about whether or not theism itself is or isn’t a tenable philosophical position? I really dislike focusing on individual people, I’d rather look at memes. Can I guess at how many of the SEP’s articles on theism are not-obviously-insane and not just if-a-tree-falls debates? I think that question is much more interesting and informative. I’d say… like, 30%.
That’s what the Stanford Encyclopedia of Philosophy calls it. Most biologists are mediocre at biology (many are creationists, God forbid!); that doesn’t mean we should call the thing that good biologists do by some other name. (If this is a poor analogy I don’t immediately see how, but it does have the aura of an overly leaky analogy.) If you asked “why reason in terms of theism instead of decision theory?” then I’d say “well we should obviously reason in terms of decision theory; I’d just prefer we not have undue contempt for an interesting memeplex that we’re not yet very familiar with”.
Biology is the repository of probable information left over after putting data and experiments through the sieve of peer review (the process is also “biology”). The more important ideas get parsed more. Mediocre enough biologists don’t add to biology.
Theology starts with a belief system and is the remnants that, by their own lights, theologians have not discarded. The process of discarding is also called theology. Unsophisticated people are likely to fail to see what is wrong with more of the original belief set than sophisticated ones; they don’t add to showing what is wrong with the belief pile. It isn’t a crazy analogy, but it’s not quite symmetrical.
To call this theism says more about the language than the beliefs you describe. Is the word closest in idea-space to this memeplex theism? OK, maybe, but it could have been “hunger for waffles and other, lesser breakfast foods” with a few adjustments to the history without adjusting anything at all about the ideas. These beliefs didn’t originate as the unfalsifiable part of an arbitrary cult focused on breakfast, as it happens.
an interesting memeplex
it’s interesting as the least easy to falsify, arguably unfalsifiable core of motivated, unjustified belief. It’s not interesting as something at all likely to be true.
I disagree; certain ideas that theism originated are as likely to be true as certain ideas about decision theory are likely to be true, because they’re isomorphic.
You are reasoning from cached priors without bothering to recompute likelihood ratios (not like you’re actually looking at evidence at all; did you read the article on divine simplicity? Do you have a knockdown reason that I should ignore that debate other than “stupid people believe in God, therefore belief in God is stupid”?). You are ignoring evidence. “Ignore”: ignorance. You are ignorant about theism. That’s cool; you don’t have all the time in the world. But don’t confidently assert that something is not likely to be true when you clearly know very little about it. This is an important part of rationality.
Edit: In other words, you do not have magical inductive biases and you have seen significantly less evidence than I have. This should be more than enough to cause you to be hesitant.
You are ignorant about theism. That’s cool; you don’t have all the time in the world. But don’t confidently assert that something is not likely to be true when you clearly know very little about it. This is an important part of rationality.
You confidently assert my ignorance. That assertion is notable.
you have seen significantly less evidence than I have.
You’re much more confident of this than I am. You should be more hesitant.
Duly noted. Can we share a few representative reasons? What do you think I don’t already think you know about why “theism” (a word that may soon need to be tabooed) isn’t worth looking into?
I can briefly try to translate the divine simplicity thing: “The perfectly reflective Platonic decision algorithm that performs optimally on all optimization problems doesn’t ‘possess’ the quality of optimizerness—it is optimization, just as it is reflectivity. Being a Platonic algorithm, it does not have inputs or outputs, but controls all programs ambiently. It has no potentiality, only actuality: everything is at equilibrium.” And so on and so forth. (Counterarguments would be like “what, there is a sense of equilibrium that implies that this algorithm is a decision theoretic zombie, I think you’re using a non-intuitive definition of ‘equilibrium’” and things like that, or something. It’s better to talk in terms of decision theory but that doesn’t mean they’re not actually equivalent. The parts that don’t boil down to predictions about decision theory tend to be just quibbling over ways of carving reality, which is often informative but not when the subject matter is so politically charged.)
I can briefly try to translate the divine simplicity thing: “The perfectly reflective Platonic decision algorithm that performs optimally on all optimization problems doesn’t ‘possess’ the quality of optimizerness—it is optimization, just as it is reflectivity. Being a Platonic algorithm, it does not have inputs or outputs, but controls all programs ambiently. It has no potentiality, only actuality: everything is at equilibrium.” And so on and so forth.
I think you need to take a big step back and consider what you’ve studied and what you’ve come up with. I’m not sure where divine simplicity fits in your worldview exactly, but in the course of my own decision theory studies, I came up with an issue that seems to shoot down that concept entirely: there can be no decision algorithm that performs optimally on all optimization problems, because there are optimization problems for which the solution space is infinite, and there is an infinite chain of progressively better solutions. Worse, the universe we presently occupy appears to be infinite, and to have such chains for almost all sensible optimization criteria. The best we can do, decision-theory wise, is to bite off special cases, come up with transforms and simplifications to make those cases more broadly applicable, and fall back on imperfect heuristics for the rest.
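The “infinite chain of progressively better solutions” point can be made concrete with a toy sketch (a hypothetical example of mine, not from the original discussion): for the problem “maximize x over rationals strictly less than 1”, every feasible candidate admits a strictly better feasible candidate, so no optimal solution exists for any decision algorithm to output.

```python
from fractions import Fraction

def improve(x):
    """Given a feasible candidate x < 1, return a strictly better one
    that is still feasible: halve the remaining gap to 1."""
    return x + (1 - x) / 2

# Build a chain of progressively better solutions to "maximize x s.t. x < 1".
chain = [Fraction(0)]
for _ in range(20):
    chain.append(improve(chain[-1]))

# Every step is a strict improvement, yet every candidate is still feasible,
# so no element of the chain (or of the whole solution space) is optimal.
assert all(a < b for a, b in zip(chain, chain[1:]))
assert all(x < 1 for x in chain)
print(chain[-1])  # approaches, but never reaches, the unattainable supremum 1
```

Any purported “optimal” output x can be beaten by `improve(x)`, which is the structure of the objection above.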
But there’s a much bigger issue here. It looks to me like you’ve taken a few batches of concentrated confusion—the writings of old philosophers—and invented a novel interpretation to give it meaning. You then took these reinterpretations and mixed them into what started out as a sensible worldview. You’re talking about studying Aquinas and Leibniz, and this makes me very worried, because my longstanding belief is that these authors, and most others of their era, are cognitive poison that will drive you insane. Furthermore, your writings recently look to me like evidence that this may actually be happening. You should probably be looking to consolidate your findings, and to communicate them.
Divine simplicity is a hypothesis, what you say is strong evidence against that hypothesis. But I think it’s still a coherent hypothesis. At the very least we can talk about Goedelian stuff or NFL theorems to counterargue a bunch of the stronger ‘omnipotence, omniscience’ stuff… but things are all weird when you’re that abstract; you can just say, “okay, well, this agent is multipartite and so even if one part has one Chaitin’s constant this other part has another Chaitin’s constant and so you can get around it”, or something, but I doubt that actually works or makes sense. On the other hand it’s always really unclear to me when the math is or isn’t being used outside its intended domain. Basically I notice I am confused when I try to steel man “optimal decision policy” arguments, for or against. (There’s also this other thing that’s like “optimal given boundedness” but I think that doesn’t count.)
I disagree about Aquinas and Leibniz. I see them as putting forth basically sane hypotheses that are probably wrong but probably at least a little relevant for our decision policies. I don’t think that theology is a useful area of study, not when we have decision theory, but I really don’t think that Leibniz especially was off track with his theology. (I dunno if you missed my comments about how he was really thinking in terms of the intuitions behind algorithmic information theory?)
I have significant familiarity with Aquinas, and I do not see anything worth reading Aquinas for, save perhaps arguing with theists. Insofar as there are interesting ideas in his writing, they are better presented elsewhere (particularly in modern work with the benefit of greatly improved knowledge and methods), with greater clarity and without so much nonsense mixed in. Recommending that people read Aquinas, or castigating them for not having read Aquinas, seems like a recipe for wasting their time.
I saw this after making my Plato’s theory of forms comment at 10:19:54AM.
This is what I thought the article was saying.
the subject matter is so politically charged
Everyone seems to be operating under something like the law of conservation of ninjutsu here. You seem to be perhaps the worst offender, but gratuitous offensiveness and the like is approximately equal between the few theists here and the many atheists.
In this thread alone:
Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for.
It tends to be like, yeah, we get it, minds reside in brains, neuroscience is cool and shit, but repeatedly bringing it up as if nobody had ever heard that before is a facepalm-inducing red herring.
theism is a less naive perspective on cosmology-morality than atheism is
Also bad is how you characterize what LW thinks; this seems like an artificial way to pretend you have the only or best-informed position, by averaging many people on here with the people who don’t know, and don’t care to know, about things that the best evidence they have shows are elaborate rationalizations and meta-hipsterism by intellectuals.
Perhaps more important, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it.
Eliot:
And last, the rending pain of re-enactment
Of all that you have done, and been; the shame
Of motives late revealed, and the awareness
Of things ill done and done to others’ harm
Which once you took for exercise of virtue.
Then fools’ approval stings, and honour stains.
I like how this is similar to my last few years but in reverse. I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist. In the meantime I lost faith in the sanity of humans and decided I basically wasn’t on their side anymore, which is a much more complex intrapersonal dynamic than it sounds.
For the last 6 months I’ve been downright obsessed with “morality”, though a less lossy way of putting it is like “that thing in the middle of justification, decision theory, institutional economics, ontology of agency, computer-science-inspired moral philosophy, teleology & timelessness, physicalism vs. computationalism, &c.”.
In the meantime I hit upon the theisms of Leibniz and Aquinas and other semi-neo-Platonistic academic-style philosophers, taking a computational decision theoretic perspective while trying to do justice to their hypotheses and avoiding syncretism. Ultimately I think that academic “the form of the good and the form of being are the same” theism is a less naive perspective on cosmology-morality than atheism is—you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what “right” is, and pretending like you aren’t or are only negligibly watched over by a superintelligence—whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution—is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced—you always want to be on the edge of uncertainty, you don’t want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.—but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.
Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for. One person went so far as to compare it with modern astrology, which I could only respond to with a mental “what is this i dont even”. This was long after I’d lost my faith in the ability of humanity’s finest to show off even a smidgen of sanity but it still managed to make me despair. Humans.
Eliezer got it from trying to build uFAI, Wei_Dai got it from cryptography, lukeprog got it from Christianity, I got it from my ex-girlfriend. I feel so contingent.
Providing a clear explanation of your theories would be useful. You don’t seem to even really try, and instead write comments and posts that don’t even attempt to bridge the inferential distance. At the same time, you do frequently write content where you talk about how you feel superior to LWers. In other words, you say you’re better than us because you don’t give us a real chance to catch up with your thoughts.
That’s kinda rude.
It also makes one suspect that you either don’t actually have a theory that was coherent enough to formulate clearly, or that you prefer to bask in your feeling of superiority instead of bothering to discuss the theory with us lowly LW-ers. Acting in a way to make yourself immune to criticism hardly fits the claim of being “a few levels above nearly any other aspiring rationalist”. Rather, it shows that you’re failing even the very rudiments of rationalist practice 101.
Being levels above in rationalism means doing rationalist practice 101 much better than others as much as being a few levels above in fighting means executing a basic front-kick much better than others.
To follow the analogy further if you are a few levels above in fighting then you should not find yourself face-planting every time you attempt a front kick. Or, at least, if you know that front kicks are the one weakness in your otherwise superb fighting technique then you don’t use front kicks.
Before I vote on this post, please clarify whether you think being a few levels above in fighting means executing a basic front-kick much better than others.
Ceteris paribus, being better at front-kicking makes one a better fighter. One would probably need mastery of more than the one technique to be considered levels up: rationalism 102, 103, etc. I just used one example of a basic fighting technique because the sentence flowed better that way; I didn’t put much time in thinking about and formulating it.
But the point was that no advanced techniques are needed to be many levels above normal. I see now that the comment might imply it’s enough to be several levels up with one skill alone. At 45 seconds into this video is a fight between a master of grappling and a regular MMA fighter. If they had made it to the ground together and conscious, Gracie would have won easily. He needed a more credible striking threat so that Gomi would have had to defend against that too, thereby weakening his defense against being taken down.
I meant something like:
I have probably heard that quote before, but wasn’t consciously thinking of it.
How do fights end? Not with spinning jumping back-kicks to the head, but with basic moves better executed than basic counters to them. Right cross, arm-bar, someone running away, simple simple.
By analogy, for rationalism I’m emphasizing the connection between basic and advanced rationality mentioned by Kaj_Sotala. If you don’t have the basics, you have nothing, and you can’t make up for it with moderate facility at doing advanced things.
If you do it right, the same way they start: A single king hit.
Gotcha. Upvoted.
I regret that I only have one upvote to give this comment.
That’s why we’ve given you a karmic wake, brother.
The technical term is bro.
Bro as in Kamina.
We should perhaps formalize norms for upvoting based on this kind of comment. In any case, I’m doing so. And then going back to read the context to make sure I agree.
I find that the increased attention given to the context combined with the positive priming is more than enough.
In this case, however, I am finding that the comment backfired. It is Kaj’s comment, not lessdazed’s. Lessdazed’s comment isn’t bad as an independent observation but does miss the point of its parent. This means Eliezer’s “upvote MOAR” comment is a red herring, and I had to downvote it and lessdazed in response where I would otherwise have left them alone.
I have an idea...(begins writing discussion post draft)
You could instead make a post more explicitly about how rationality is a set of skills that must be trained. I keep trying to get this into people’s heads but you are in a much better position to do so than I am, and it’s an important thing to be aware of. Like, really important.
(I always end up making analogies to chess or guitar, perhaps you could make analogies to computer programming?)
You’re still operating under the assumption that Will_Newsome cares, beyond a certain very low fundamental threshold, what we think about him and/or his theories.
Can someone tell me what my theories are? Maybe it’s the sleep deprivation but I don’t remember having any theories qua theories. I talk about other peoples’ theories sometimes, but mostly to criticize them, e.g. my decision theoretic arguments against naive interpretations of academic theism (of the sort that Mitchell Porter rightly finds misguided).
They don’t have to be your theories in the sense that you originated them, we just mean “your theories” as in the theories/models/beliefs/maps you personally use, and that you often mention in passing in your posts, but without much detail.
For example: what does Aquinas have to do with TDT? That’s not a specific question (though I’d like to hear your answer!) so much as a hint as to the sort of things that come across as empty statements to us; it’s not at all obvious (to me, at least) how you are relating together the various things you mention in a given sentence, or how you are arriving at your conclusions. It’s like there’s a bunch of big invisible “this lemma left as an exercise for the reader” sentences in the middle of your paragraphs.
At the very least, you could provide links back to some of your longer posts which explain your ideas in a step-by-step fashion. Inferential distance, dude.
I don’t understand your writings enough to know for sure. However, for example,
is a conclusion that surely must have come from some nontrivial body of beliefs. Maybe that’s not what you mean by theory qua theory, but I suspect that’s what Kaj_Sotala meant.
Whatever this underlying framework is, it would be nice to evaluate someday.
Have I ever claimed to have any “theories”? I claim to have skills. I have expounded on what some of these skills are at various points. How am I acting in a way that makes myself immune to criticism? If I am trying to do that it would appear that I am failing horribly considering all the criticism I get. In other words, what you’re saying sounds very reasonable, but are you talking about reality or instead a simplified model of the situation that is easy to write a nice-sounding analysis of? That’s an honest question.
This certainly sounds like a theory, or a bunch of them, to me:
Certainly you keep saying that you feel superior to LW:ers because they don’t know the things you do. You may call that knowledge, theory, skill, or just claims, however you prefer. But while you have expounded on it somewhat, you haven’t written anything that would try to systematically bridge the inferential distance. Right now, the problem isn’t even that we wouldn’t understand your reasons for saying what you do, the problem is that we don’t understand what you are saying. Mostly it just comes off as an incomprehensible barrage of fancy words.
For instance, my current understanding of your theories (or skills, or knowledge, or whatever) is the following. One, you claim that because of the simulation argument, theism isn’t really an unreasonably privileged claim. Two, this relates to TDT somehow. Three, that’s about all I understand. And based on your posting history that’s about all that the average LW reader could be expected to know about the things you’re talking about.
That’s what my claim of you making yourself immune to criticism is based on: you currently cannot be criticized, because nobody understands your claims well enough to criticize them (or for that matter, agree with them), and you don’t seem to be making any real attempt to change this.
I’m talking about my current best model of you and your claims, which may certainly be flawed. But note that I’m already giving you an extra benefit of doubt because you seemed sane and cool when we interacted iRL. I do still think that you might be on to something reasonable, and I’m putting some effort into communicating with you and inspecting my model for flaws. If I didn’t know you at all, I might already have dismissed you as a Time Cube crank.
I keep bringing this up only to have it ignored completely, but: THAT IS NOT A PSYCHOLOGICALLY REALISTIC OPTION.
I too used to have a disorder that made me occasionally write nonsense. In my case it turned out to be fixable by reading a lot of LW, in particular Eliezer’s and Yvain’s posts, and then putting a lot of work into my own posts and comments to approach their level of clarity. It was hard at first, but after a while it became easier. Have you tried that?
Right now your writings look very stream-of-consciousness to me, like you don’t even write drafts. Given all the criticism you get, this is kind of unacceptable. Many LWers write drafts and send them to each other for critique before posting stuff publicly. I often do that even for discussion posts.
Errr… wait. We do that? Ooops. Sometimes I proof-read and sometimes I make edits to my comments as soon as I post them. Does that count?
Yeah, some of us do. Your posts are pretty good as they are, but hey, now you know a way to make them even better! I volunteer to read drafts anytime :-)
I remember thinking it was ironic how, in the Wikipedia article on learned helplessness, when they talk about the dogs the tone is like “oh, how sad, these dogs are so demoralized that they don’t even try to escape their own suffering”, but when it came to humans it was like “oh look, these humans seem to have a choice about whether or not they suffer but they’re acting as if they don’t have that choice so as to avoid blame and avoid putting forth effort to change their situation”; which, if taken seriously, sort of undermines the hypothesis that the behavioral mechanisms are largely the same for animals and humans. But you could tell it was totally unconscious on the part of the writers, and if you’d tried to point it out to them they could just backpedal in various ways, and so there’d be no point in trying to point out the change in perspective—it’d just look like defensiveness. And going meta like this probably wouldn’t help either.
This is the first time I see you say that, but fair enough. I can relate to that.
Why? Or did I accidentally stumble across a private forum with secrets?
I think this might be what Kaj means when he mentions your ‘theories.’ Let’s take your “the form of the good and the form of being are the same” theory of cosmology-morality, for example. (You call it a ‘perspective’, but I just mean ‘theory’ in a very broad sense, here.) If you’ve explained it clearly on Less Wrong anywhere, I missed it. Of course you don’t owe us any such explanation, but that may be the kind of thing Kaj is talking about when he says that “You don’t seem to even really try [to explain your ideas], and instead write comments and posts that don’t even attempt to bridge the inferential distance. At the same time, you do frequently write content where you talk about how you feel superior to LWers.”
Also, you contrast your theory of cosmology-morality with ‘atheism’, as if atheism is a theory of cosmology-morality, but of course it’s not. So that’s confusing. The rest of the paragraph is a dense jumble of concepts and half-arguments that could each mean half a dozen different things depending on one’s interpretation, and is thus incomprehensible—to me, anyway.
I agree that there are forms of theism much more sophisticated than anything I’ve read in astrology. But as someone who has read the leading analytic theistic philosophers—Alvin Plantinga, Peter van Inwagen, William Alston, Charles Taliaferro, Alexander Pruss, John Hare, Robin Collins, Timothy McGrew, Marilyn McCord Adams, Bill Craig, William Hasker, Timothy O’Connor, Eleonore Stump, Keith Yandell, and others—I can somewhat knowledgeably confirm that theism is probably not worth studying.
Have you read Thomas Aquinas or Gottfried Leibniz? It’d be cool if there was something we’d both read such that we could have an object-level discussion. I am not familiar with modern theism. Plantinga and Craig I’m mildly familiar with thanks to your blog, but they seemed third-rate compared to the original thinkers.
Not much, I’m afraid. I may know more about their views than most LWers, but that ain’t much.
Okay, hm. You’re busy all the time, but if ever you have some time free I’d like to brainstorm about how we might have something like a “rational debate”. E.g. the optimal set-up might be meeting in person, where we can go back and forth in real time to clarify small things while taking a break every few minutes to check the internet for sources and write out better-considered arguments and responses. Considering we live a block away from each other, that might actually be possible. It’d incentivize me to put a lot more effort into being understandable. I’m not exactly sure what the topic of such a debate would be; I agree with you that theism isn’t worth studying, I only try to argue that it’s really hard to claim that theists are wrong given our current state of uncertainty.
That doesn’t sound like a productive way to address these issues, but it’s true that I shouldn’t put time into this until at least after a September 30th deadline I’ve got on a project. I’ll keep this in mind.
Huh? I find this to be an odd claim. Atheism is at least implicitly a prediction about where justification certainly doesn’t come from: basically, not from any big, well-organized, monolithic institution/agent/thing.
Sure, but only in the sense that a-fairyism is “a perspective on cosmology-morality.” A-fairyism says that justification doesn’t come from fairies. In the way I typically use the English language, that’s not enough to bother calling a-fairyism “a perspective on cosmology-morality.”
Not even close. This is like the astrology thing. You’re claiming that belief in God is privileging the hypothesis when clearly I do not think that belief in God is privileging the hypothesis. Things like God and truth are already picked out as tenable hypotheses, the support or opposition of which are in fact clear philosophical positions. I’m not sure if I’m being clear; do you see why I think you’re assuming the conclusion here? If not, I could try to write out something longer with more concrete examples.
If there are three otherwise equal pairwise mutually exclusive possibilities, “belief” in one is privileging the hypothesis.
The non-Bayesian “belief” language is deficient here anyway.
Right, and in that case atheism would also be privileging the hypothesis, which means, yeah, this whole “privileging the hypothesis” thing isn’t really helping.
No. A-(assertion)-ism is fine.
Assertions can be true, false, incoherent, and other things. Most statements are not true. Single, otherwise perfectly fine statements that imply the falsity of many multitudes of similar otherwise perfectly fine statements cannot be justified by the claim that, in general, otherwise perfectly fine statements get the presumption of validity or consideration. However much one says it is important not to judge statements such as assertions of monotheism, that applies to the statements monotheism excludes, which are more numerous.
Only in the complete absence of evidence. But theism already has a ton of evidence for it and was the default belief of intelligent folk for thousands of years; it’s like saying a-gravity-ism isn’t actually a theory about physics (to take our metaphors to the other extreme from fairies). Assigning a low prior to theism is an abuse of algorithmic probability theory. …Am I missing something?
Can you explain this? Because I’ve been operating under the following assumption:
In order to write a computer program that actually computes (rather than models) Maxwell’s equations you have to write a program that writes out a physical universe, and if you want a program that describes Maxwell’s equations then the interpretation you choose is more a matter of pragmatic decision theory than of algorithmic probability theory, at least in practice. (Bounded agents aren’t exactly committing an error of rationality when they don’t try to act like Homo Economicus; that would be decision theoretically insane.)
But anyway. Specific things in the universe don’t seem to be caused by gods. Indeed, that’d be hella unparsimonious: “God chose to add some ridiculous number of bits into His program just to make it such that there was a ‘Messiah gets crucified’ attractor?”. The local universe as a whole, on the other hand, is this whole other thing: there’s the simulation argument.
Your comment got voted up to +10 despite Eliezer’s argument being a straightforward error of algorithmic probability; I don’t know what to do about that and it stresses me out. Does anyone have ideas? It saddens me to see algorithmic probability so regularly abused on LW, but the few corrective posts on the matter, e.g. by Slepnev, don’t seem to have permeated the LW memeplex, probably because they’re too technical.
I think you are slightly misinterpreting things. As you pointed out, the established memeplex does lean heavily in favor of Eliezer’s position on algorithmic probability theory rather than Slepnev’s. But that doesn’t mean that all of the upvoters agree with Eliezer’s position—some of them probably just want to see you answer my question “Can you explain this?”. In fact, I would very much like to see this question answered thoroughly in a way that makes sense to me. Vladimir’s posts are a great start, but lacking knowledge of algorithmic probability theory, I don’t really know how to put all of it together.
Thanks for the correction, that people are interested in it at least is a good sign.
What we really need is a well-written gentle introduction to algorithmic probability theory that carefully and clearly shows how it works and what it does and doesn’t imply.
Well, of course there are both superintelligences and magical gods out there in the math, including those that watch over you in particular, with conceptual existence that I agree is not fundamentally different from our own, but they are presently irrelevant to us, just as the world where I win the lottery is irrelevant to me, even though a possibility.
It currently seems to me that many of such scenarios are irrelevant not because of “low probability” (as in the lottery case; different abstract facts coexist, so don’t vie for probability mass) or moral irrelevance of any kind (the worlds with nothing possibly of value), but because of other reasons that prevent us from exerting significant consequentialist control over them. The ability to see the possible consequences (and respond to this dependence) is the step missing, even though your actions do control those scenarios, just in a non-consequentialist manner.
(It does add up to atheism, as a modest claim about our own world, the “real world”, that it’s intended to be. In pursuit of “steelmanning” theism you seem to have come up with a strawman atheism...)
I don’t know if this is what Will has in mind, but it seems plausible that the superintelligences and gods that would be watching out for us might attempt to maximize the instantiations of our algorithms that are under their domain, so that as great a proportion of our future selves as possible will be saved (this story is vaguely Leibnizian). But I don’t know that such superbeings would be capable of overcoming their own sheer unlikelihood (though perhaps some subset of such superbeings have infinite capacity to create copies of us?). You can derive a self-interested ethics from this too, if you think you’ll be rewarded or punished by the simulator. The choices of the simulators could be further constrained by simulators above them—we would need an additional step to show that the equilibrium is benevolent (especially given the existence of evil in our universe).
But I’m not at all convinced Tegmark Level 4 isn’t utter nonsense. There is big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated. And can we calculate anthropic probabilities from infinities of different magnitudes?
I’d rather say that the so-called “instantiated” objects are no different from the abstract ones, that in reality, there is no fundamental property of being real, there is only a natural category humans use to designate the stuff of normal physics, a definition that can be useful in some cases, but not always.
So there are easy ways to explain this idea at least, right? Humans’ decisions are affected by “counterfactual” futures all the time when planning, and so the counterfactuals have influence, and it’s hard for us to get a notion of existence outside of such influence besides a general naive physicalist one. I guess the not-easy-to-explain parts are about decision theoretic zombies where things seem like they ‘physically exist’ as much as anything else despite exerting less influence, because that clashes more with our naive physicalist intuitions? Not to say that these bizarre philosophical ideas aren’t confused (e.g. maybe because influence is spread around in a more egalitarian way than it naively feels like), but they don’t seem to be confusing as such.
Human decisions are affected by thoughts about counterfactuals. So the question is, what is the nature of the influence that the “content” or “object” of a thought, has on the thought?
I do not believe that when human beings try to think about possible worlds, these possible worlds have any causal effect in any way on the course of the thinking. The thinking and the causes of the thinking are strictly internal to the “world” in which the thinking occurs. The thinking mind instead engages in an entirely speculative and inferential attempt to guess or feel out the structure of possibility—but this feeling out does not in any way involve causal contact with other worlds or divergent futures. It is all about an interplay between internally generated partial representations and a sense of what is possible, impossible, logically necessary, etc. in an imagined scenario; but the “sensory input” to these judgments consists of the imagining of possibilities, not the possibilities themselves.
Sure, that’s a fine way to put it. But how do you even begin estimating how likely that is?
How likely what is? There doesn’t appear to be a factual distinction, just what I find to be a more natural way of looking at things, for multiple purposes.
You don’t think whether or not the Tegmark Level 4 multiverse exists could ever have any decision theoretic import?
I believe that “exists” doesn’t mean anything fundamentally significant (in senses other than referring to presence of a property of some fact; or referring to the physical world; or its technical meanings in logic), so I don’t understand what it would mean for various (abstract) things to exist to greater or lower extent.
Okay. What is your probability for that belief? (Not that I expect a number, but surely you can’t be certain.)
That would require understanding alternatives, which I currently don’t. The belief in question is mostly asserting confusion, and as such it isn’t much use, other than as a starting point that doesn’t purport to explain what I don’t understand.
Fine. So you agree that we should be wary of any hypotheses of which the reality of abstract objects is a part?
No, I won’t see that in itself as a reason to be wary, since as I said repeatedly I don’t know how to parse the property of something being real in this sense.
Personally, I am always wary of hypotheses I don’t know how to parse.
Anyone who has positive accounts of existentness to put forth, I’d like to hear them. (E.g., Eliezer has talked about this related existentness-like-thing that has do with being in a causal graph (being computed), but I’m not sure if that’s just physicalist intuition admitting much confusion or if it’s supposed to be serious theoretical speculation caused by interesting underlying motivations that weren’t made explicit.)
Different abstract facts aren’t mutually exclusive, so one can’t compare them by “probability”, just as you won’t compare probability of Moscow with probability of New York. It seems to make sense to ask about probability of various facts being a certain way (in certain mutually exclusive possible states), or about probability of joint facts (that is, dependencies between facts) being a certain way, but it doesn’t seem to me that asking about probabilities of different facts in themselves is a sensible idea.
(Universal prior, for example, can be applied to talk about the joint probability distribution over the possible states of a particular sequence of past and future observations, that describes a single fact of the history of observations by one agent.)
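The shape of that application of the universal prior can be sketched in code. Everything here is a toy: the “interpreter” below (repeat the program’s bits forever) is a hypothetical stand-in for a universal machine, and the program set isn’t prefix-free, so the totals don’t normalize—the point is only the shape of the sum, weighting each program by 2^−length and summing over programs whose output starts with the observed sequence.

```python
from itertools import product

def run(program, n):
    """Toy 'interpreter' standing in for a universal machine:
    output = the program's bits repeated out to length n."""
    if not program:
        return ""
    return (program * (n // len(program) + 1))[:n]

def toy_universal_prior(observation, max_len=10):
    """Sum 2^-len(p) over programs whose output starts with `observation`.
    (A real universal prior needs a prefix-free universal machine;
    this toy version doesn't normalize, but the shape of the sum is right.)"""
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            p = "".join(bits)
            if run(p, len(observation)) == observation:
                total += 2.0 ** (-length)
    return total

# Regular sequences accumulate weight from many short programs;
# irregular ones only from longer programs.
print(toy_universal_prior("0000"))  # 1.3125
print(toy_universal_prior("0110"))  # 0.5625
```

The comparison between the two outputs, not their absolute values, is the takeaway: the same machinery that gives a joint distribution over an agent’s observation history automatically favors compressible histories.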
(I’m not sure ‘compare’ is the right word here.)
You just prompted me to make that comparison. I’ve been to New York. I haven’t been to Moscow. I’ve also met more people who have talked about what they do in New York than I have people who talk about Moscow. I assign at least ten times as much confidence to New York as I do Moscow. Both those probabilities happen to be well above 99%. I don’t see any problem with comparing them just so long as I don’t conclude anything stupid based on that comparison.
There’s a point behind what you are saying here—and an important point at that—just one that perhaps needs a different description.
What does this mean, could you unpack? What’s “probability of New York”? It’s always something like “probability that I’m now in New York, given that I’m sitting in this featureless room”, which discusses possible states of a single world, comparing the possibility that your body is present in New York to same for Moscow. These are not probabilities of the cities themselves. I expect you’d agree and say that of course that doesn’t make sense, but that’s just my point.
It wasn’t my choice of phrase:
When reading statements like that that are not expressed with mathematical formality, the appropriate response seems to be resolving to the meaning that fits best or asking for more specificity. Saying you just can’t do the comparison seems to be a wrong answer when you can but there is difficulty resolving ambiguity. For example, you say “the answer to A is Y, but you technically could have meant B instead of A, in which case the answer is Z”.
I actually originally included the ‘what does probability of Moscow mean?’ tangent in the reply but cut it out because it was spammy and actually fit better as a response to the nearby context.
Based on the link from the decision theory thread I actually thought you were making a deeper point than that and I was trying to clear a distraction-in-the-details out of the way.
The point I was making is that people do discuss probabilities of different worlds that are not seen as possibilities for some single world. And comparing probabilities of different worlds in themselves seems to be an error for basically the same reason as comparing probabilities of two cities in themselves is an error. I think this is an important error, and realizing it makes a lot of ideas about reasoning in the context of multiple worlds clearly wrong.
log-odds
Oh, yes, that. Thank you.
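The log-odds point can be made concrete. When two probabilities are “both above 99%”, they look nearly identical on the probability scale, but the claim “at least ten times as much confidence” is naturally a statement about odds, which log-odds makes additive. The two probabilities below are invented placeholders, not anyone’s actual credences:

```python
import math

def log_odds(p):
    """Convert a probability to log-odds, in bits."""
    return math.log2(p / (1 - p))

def from_log_odds(lo):
    """Inverse conversion, log-odds (bits) back to probability."""
    odds = 2.0 ** lo
    return odds / (1 + odds)

p_new_york = 0.9999  # hypothetical confidence that New York exists
p_moscow = 0.999     # hypothetical confidence that Moscow exists

print(log_odds(p_new_york))  # ≈ 13.29 bits
print(log_odds(p_moscow))    # ≈ 9.96 bits

# "Ten times the confidence" as an odds ratio:
odds_ratio = (p_new_york / (1 - p_new_york)) / (p_moscow / (1 - p_moscow))
print(odds_ratio)  # ≈ 10
```

On the probability scale the two numbers differ by 0.0009; in log-odds they differ by more than three bits, which is the difference actually being asserted.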
Really? God isn’t less probable than New York?
God is an exceedingly unlikely property of our branch of the physical world at the present time. Implementations of various ideas of God can be found in other worlds that I don’t know how to compare to our own in a way that’s analogous to “probability”. The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.
(I don’t privilege the God worlds in particular, the thought experiment where the Moon is actually made out of Gouda is an equivalent example for this purpose.)
There doesn’t seem to be a problem here. The comparison resolves to something along the lines of:
Consider all hypotheses about the physical world of the present time which include the object “Moscow”.
Based on all the information you have calculate the probability that any one of those is the correct hypothesis.
Do the same with “New York”.
Compare those two numbers.
???
Profit.
Instantiate “???” with absurdly contrived bets with Omega as necessary. Rely on that same instantiation into a specific contrived decision to resolve any philosophical issues along the lines of “What does probability mean anyway?” and “What is ‘exist’?”.
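The comparison procedure above can be sketched directly: treat each hypothesis about the world as a set of objects with a prior, and sum the prior mass of the hypotheses containing a given object. The hypotheses, priors, and contents below are invented purely for illustration:

```python
# Toy world-hypotheses: (prior probability, set of objects the world contains).
# These numbers are made up; only the procedure matters.
hypotheses = [
    (0.50, {"New York", "Moscow", "Paris"}),
    (0.30, {"New York", "Paris"}),
    (0.15, {"New York", "Moscow"}),
    (0.05, {"Mordor"}),
]

def prob_object_exists(obj):
    """P(the correct hypothesis is one that includes `obj`)."""
    return sum(p for p, world in hypotheses if obj in world)

print(prob_object_exists("New York"))  # ≈ 0.95
print(prob_object_exists("Moscow"))    # ≈ 0.65
print(prob_object_exists("Mordor"))    # ≈ 0.05
```

Note that this only makes sense because every hypothesis in the list is a candidate for the *same* single world, which is exactly the interpretation being argued over: the numbers compared are probabilities of ways the real world could be, not probabilities of the cities themselves.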
What you describe is the interpretation that does make sense. You are looking at properties of possible ways that the single “real world” could be. But if you don’t look at this question specifically in the context of the real world (the single fact possibilities for whose properties you are considering), then Moscow as an abstract idea would have as much strength as Mordor, and “probability of Moscow” in Middle-earth would be comparatively pretty low.
(Probability then characterizes how properties fit into worlds, not how properties in themselves compare to each other, or how worlds compare to each other.)
Our disagreement here somewhat baffles me, as I think we’ve both updated in good faith and I suspect I only have moderately more/different evidence than you do. If you’d said “somewhat unlikely” rather than “exceedingly unlikely” then I could understand, but as is it seems like something must have gone wrong.
Specifically, unfortunately, there are two things called God; one is the optimal decision theory, one is a god that talks to people and tells them that it’s the optimal decision theory. I can understand why you’d be skeptical of the former even if I don’t share the intuition, but the latter god, the demon who claims to be God, seems to me to likely exist, and if you think that god is exceedingly unlikely then I’m confused why. Like, is that just your naive impression or is it a belief you’re confident in even after reflecting on possible sources of overconfidence, et cetera?
I agree that there are many reasons that prevent us from explicitly exerting significant control, but I’m at least interested in theurgy. Turning yourself into a better institution, contributing only to the support of not-needlessly-suboptimal institutions, etc. In the absence of knowing what “utility function” is going to ultimately decide what justification is for those who care about what the future thinks, I think building better institutions might be a way to improve the probabilities of statistical-computational miracles. I think this with really low probability but it’s not an insane hypothesis even if it is literally magical thinking. (The decision theory and physics backing the intuitions are probably sound, it’s just that it doesn’t have the feel of well-motivatedness yet. It’s more one of those “If I have to choose to spend a few hours either reading about dark matter or reading about where decision theory meets human decision policies I think it’s a potentially more fruitful idea to think about the latter” things.)
I really appreciate that you responded at roughly the right level of abstraction. It seems clear that the debate should be over the extent to which thaumaturgy is possible (including thaumaturgy that helps you build FAIs faster) because that’s the only way “theism” or “atheism” should affect our decision policy. (Outside of deciding which object level moral principles to pursue. I like traditional Anglican Christianity when it comes to object level morality even if I mostly ignore it.)
Not by a long shot. Physics is probably mostly irrelevant here, it focuses only on our world; and decision theory is so flimsy and poorly understood that any related effort should be spent on improving it, for it’s not even clear what it suggests to be the case, much less how to make use of its suggestions.
I’ve seen QM become important because of decision problems where agents have to coordinate between quantum branches in order to reverse time. I can’t go into that here but I’d at least like to flag that there are decision theory problems where things like quantum information theory show up.
That actually sounds like it has a possibility of being interesting.
Physics focuses on worlds across the entire quantum superposition. That’s a pretty big neighborhood, no? Agreed about decision theory. When I said “choose to spend” I meant “I have a few hours to kill but I’m too lazy to do problem sets at the moment”, not “I choose thaumaturgy as the optimal thing to study”.
Okay, that makes sense as a rich playground for acausal interaction. I don’t know what pieces of intuition about physics you refer to as useful for reasoning about acausal effects of human decisions though.
Not if there is evidence of angels and demons in our world, and you can interact with them in at least semi-predictably consequential ways. Which basically everyone believes except the goats, because everyone gets evidence except the goats. Doesn’t it suck to have a mind-universe that actively encourages you to fall into self-sustaining delusions? Yes, yes it does.
ETA: Apparently it’s 2012 now! My resolution: not to fall into self-sustaining delusion! Happy new year LW!
Could you give an example? Like, can you state a specific fact of the world and explain which version of theism it is evidence for, and how it is evidence for that version of theism?
All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch. For the kind of priors people deal with every day—things like “Is Amanda Knox guilty?” or “Will I win this hand of poker?”—evidence of the strength that we have for God’s existence would be more than enough to convince us. But the prior for theism (as it is usually formulated) is so laughably, incomprehensibly low that all this evidence isn’t even enough for a rational person to seriously consider the theistic hypothesis. Will’s claim that a low prior for theism “is an abuse of algorithmic probability theory” is the real issue. Now, that prior can be raised if the hypothesis involves some process by which the entity could come to exist while conserving complexity (in particular, if that entity evolved and then created this universe). Will, however, seems to believe in something different from the usual simulation hypothesis—he may endorse something like Divine Simplicity, which is complete and utter nonsense. Word games and silliness as far as I can tell—or at least smacking of a to-me-untenable moral realism.
I don’t understand how it’s strong evidence. We have plenty of experience showing that complex stuff is just what you get when you leave simple stuff alone long enough, assuming you’re talking about “complexity” in the thermodynamic sense. For intelligent entities to be elevated as a particular hypothesis, it seems like you need to find things like low entropy pockets and optimization behavior.
All of existence is also evidence for the hypothesis that if you leave simple stuff alone long enough complexity arises. And the prior for that is much higher than the theism prior.
If both those hypotheses (thermodynamics, theism) started at the same prior, which one would receive more of a boost upwards after updating on all existence?
That’s a really good question.
In theism’s favor we have mystical experience, purported revelation and claims of miracles. Against, we have the existence of evil and a lot of familiarity with how complexity can come to be through simple processes. Maybe the fact that we keep explaining things that God was once used to explain is metainductive evidence against theism… I really have trouble thinking clearly about this and suspect I’ve biased myself by being an atheist so long. What do you think?
I’m gonna think out loud for a bit, let’s see if this makes sense.
I think that “complexity” is a red herring; it’s dodging the real query. What we’re really interested in is something more like an explanation for why the universe is the way it is, rather than some other universe, including the rather large subset of possible universes that would’ve resulted in nothing very interesting at all happening ever.
So: rather than “theism” and “thermodynamics”, we more generally have “theism” and “everything else” as our two competing chunks of hypothesis-space to explain “why is the universe the way it is?”. Let’s assume that that’s a meaningful question. Let’s also assume that the two chunks have equal prior probability (that is, let’s just forget about comparing minimum message lengths or anything like that, otherwise “everything else” gets a big head start).
Update on direct, personal, but non-replicable experiences of communicating with gods. This is at most very weak evidence in favor of theism, due to what we know about cognitive biases.
Update on negative results of attempting to replicably communicate with gods. This is weak evidence against theism; it is good evidence against a god that can communicate with us and wants to, but it doesn’t say much for the remainder of possible-god-space.
Update on evolution via natural selection as the explanation for humanity’s biological setup. This is also weak evidence against theism; it’s good evidence only against the subset of possible-god-space that wants people to be able to notice them, or that has a particular design idea in mind and goes about creating people to fulfill that idea. Also, given the pretty major flaws of human bodies and minds, it’s good evidence against the subset of possible-god-space where the gods prioritize our happiness (in both the sophisticated fun theoretic sense and the wire-head sense of happiness).
Update more generally on the existence of naturalistic patterns like evolution that can crank out relatively low-entropy things like biological life. Weak evidence against gods in general, good evidence against the subset of possible gods that specifically are interested in and capable of creating biological life.
I can go on like that for a while, but the basic pattern seems to be: “not theism” pulls generally but not majorly ahead, by taking probability mass from the parts of “theism” that involve directly causing stuff that applies only to our particular neck of the universe. Humans and the Earth are pretty weird compared to all the stuff around them, but it seems that gods are not a good explanation for that weirdness.
The hypothesis space for “theism” still has probability mass for gods that do not or cannot directly intervene in favor of privileging universes where humans are the way they are. I’m not sure how big that is compared to the entire hypothesis space of possible theisms; whatever that is, that’s how badly “theism” in general would be losing to “not theism” if they started out at the same prior.
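The update sequence above can be sketched numerically as a chain of odds-form Bayes updates on two competing hypothesis classes. Every prior and likelihood ratio below is an invented placeholder—the sketch only shows the mechanics of “weak evidence for, then several pieces of weak evidence against”:

```python
def update(p_hypothesis, likelihood_ratio):
    """One Bayes update in odds form.
    likelihood_ratio = P(evidence | theism) / P(evidence | not-theism)."""
    odds = p_hypothesis / (1 - p_hypothesis)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.5  # start the two chunks at equal prior, as stipulated above

# (evidence description, made-up likelihood ratio)
evidence = [
    ("personal but non-replicable experiences", 1.2),  # at most weak for
    ("failed replicable communication", 0.8),          # weak against
    ("evolution explains biology", 0.7),               # weak against
    ("naturalistic low-entropy patterns", 0.7),        # weak against
]

for name, lr in evidence:
    p = update(p, lr)
    print(f"after {name}: P(theism) = {p:.3f}")
# Final value ≈ 0.32: "generally but not majorly ahead" for not-theism.
```

The qualitative conclusion in the comment—many weak updates pull “not theism” ahead without ever delivering a decisive blow—is just the observation that the product of several likelihood ratios near 1 stays reasonably near 1.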
Haha. I’m not a theist, I’m an anthropic theorist!
Your comment definitely pulls me in your direction.
This is hard and probably not fair to do without knowing what else is in “non-theism”. But in general theism has an advantage you’re forgetting which is that it lets us explain everything we don’t understand with magic. Big Bang, abiogenesis, what have you, theism has been defined in such a way that it can explain anything we can’t already explain. This means everything we don’t understand is evidence for God. I don’t know that the realization that we keep explaining things previously attributable to God swamps this effect. You’re certainly right that the image of God one arrives at is at best indifferent and at worst humorously sadistic (with “averse to science” somewhere in the middle).
I will say that I’m not sure Occam priors actually come from any kind of analytic deduction based on something like algorithmic complexity. That is, I think the whole thing might just be one giant meta-induction on all our confirmed and falsified hypotheses where simplicity turned out to be a useful heuristic. In which case, I don’t know what the prior was (doesn’t matter), but P(God) is just crazy low.
That’s not necessarily true. You could have a shy god. The better your epistemology gets, the shyer it gets, always staying on the edge of humanity’s epistemology. But it still works miracles when people aren’t looking too closely.
Though I’m not quite sure what kind of god you’re talking about in your comment; it seems weird to me to ignore the only kind of god that seems particularly likely, i.e. a simulator god/pantheon.
He used to be a shy god
Until I made him my god
Yeah
Shy is what I meant by “averse to science”.
Agreed.
If “magic” is the answer to anything we don’t understand, then it isn’t an explanation, it’s just an abbreviation for “I don’t know”. This is hardly an advantage.
If theism can explain anything, it explains nothing. Phlogiston anyone?
You need to read the thread instead of assuming I’m actually arguing for theism.
I’m not assuming you are arguing for theism. What I assume you’re arguing for is that theism being able to “explain” anything is an advantage for theism, which it is not. I’m not arguing against theism either.
I mainly meant any step on the causal path to our existence. Apologies.
I see what you mean, but how does theism “explaining” currently unsolved mysteries in any way constrain experience? As far as I know, theism postulating “all was created by a god” doesn’t allow me to anticipate anything I can’t already anticipate anyway. Also as far as I know, it’s not as if any phenomena currently not explainable were predicted by any form of theism.
I may be wrong on this though, as I am certainly not a theism expert. If so, this would be actual evidence for theism.
This is getting too complex given my tiredness. I have a feeling I’ve said something dumb along the way. I’ll be able to tell in the morning.
I don’t see why gods would be in every magical universe.
If you bring semi-logical considerations into it then the obvious pro-theism one is Omohundro’s AI drives plus game theory. Simulators gonna simulate. (And superintelligences have a lot of computing resources with which to do so.) (Semi-logical because there are physical reasons we expect agents to work in certain ways.)
I was not using your definition of theism since theism scenarios where the God evolved aren’t distinct hypotheses from “complexity from thermodynamics and evolution”. There is more evidence for your version of God, the simulation argument in particular. But miracles, revelation and mystical experience count far less.
There are timeful/timeless issues ’cuz there’s an important sense in which a superintelligence is just an instantiation of a timeless algorithm. (So it’s less clear if it counts as having evolved.) But partitioning away that stuff makes sense.
Not true. There are some superintelligences that could be constructed that way but that is only a small set of possible superintelligences. Others have nothing timeless about their algorithm and don’t need it to be superintelligent.
That’s one hypothesis, but I’d only assign like 90% to it being true in the decisions-relevant sense. Probably gets swamped by other parts of the prior, no?
I don’t believe so. But your statement is too ambiguous to resolve to any specific meaning.
What sense is that? Or rather, I’m confused about this whole bit.
A naive view sees a lump of matter being turned into a program whose execution just happens to correlate with the execution of similar programs across the Schmidhuberian computational ensemble. (If you don’t assume a computational ensemble to begin with then you just have to factor that uncertainty in.) A different view is that there’s no correlation without shared causation, and anyway that all those program-running matter-globs are just shards of a single algorithm that just happens to be distributed from a physical perspective. But if those shards all cooperate, even acausally, it’s only in a rather arbitrary sense that they’re different superintelligences. It’s like a community of very similar neurons, not a community of somewhat different humans. So when a new physical instantiation of that algorithm pops up it’s not like that changes much of anything about the timeless equilibrium of which that new physical instantiation is now a member. The god was always there behind the scenes, it just waited a bit before revealing itself in this particular world.
I apologize for the poor explanation/communication.
I think it’s more something like “moral realism” than like word games. It’s (I think) isomorphic to the hypothesis that all superintelligences converge on the ‘same decision algorithm’: and of course at that point in the discussion a bunch of words have to get tabooed and we have to get technical and quantitative (e.g. talking about Goedel machines and such, not about arbitrary paperclip maximizers which may or may not be possible).
And I dunno about Divine Simplicity. I really do prefer to talk in terms of decision theory.
You (lately) misuse “isomorphic”, which is a word reserved for a very strong relationship. “Analogy”, or even “similarity” or “metaphor”, would describe these relations better.
Sorry. In my defense I felt a sharp pain each time I did it, but figured that ‘analogous’ wasn’t quite right (wasn’t quite strong enough, because Thomas Aquinas and I are actually talking about the same decision policy, maybe). Maybe if I knew category theory I could make such comparisons precise.
Thanks for calling me out on a bad habit.
This seems very unlikely (1) to be true and (2) to become known, if true.
With Leibniz it’s a lot clearer that his God was a programmer trying to make the most efficient use of His resources to do the optimal thing, and he had intuitions but of course not any explicit language to talk about what that algorithm would look like. That’s roughly the extent to which I think I’m thinking of the same decision algorithm as Aquinas, the convergent objective decision theory. The specifics of that decision theory, nobody knows. The point is that none of the best thinkers were thinking about a big male human in the sky; they were instead thinking about Platonic algorithms, ever since early Christianity was influenced by Neoplatonism. Leibniz made it computationalesque, but only recently, with decision theory, has theology become truly mathematical.
Maybe. In this case, most would agree that at this level of vagueness saying that two thinkers are contemplating exactly the same idea is incorrect and misleading terminology, and your comment suggests that you don’t actually mean that.
Okay. It’s like a hypothesis about future revelations, where both Aquinas and I are being shown a series of different agents and we’d agree more than my prediction of LW priors would suggest as to which of those agents were more or less Godlike. It’s like we have different labels for what is ultimately the same thing but we don’t even know what that thing is yet; but the fact that they’re different labels is misleading as to the extent to which we’re talking or not talking about what is ultimately the same thing. Still, point taken.
Do the theologians know about this?
/shrugs I’d be very surprised, but I know nothing about modern theology. I’ve been reading philosophy by working my way forward through time. If there were/are any competent computer scientist/theologians after Leibniz then I do not yet know about them.
(ETA: I suppose I could become one if I put my mind to it but unfortunately I have this whole “figuring out how moral justification works so that everything I love about the world doesn’t perish” thing to deal with.)
That’s fair. My probability for that is probably pretty close to my probability for a strong version of the simulation hypothesis+moral realism. Though it seems to me that a lot of people here think moral realism is much more likely than I do- which makes me confused about why I seem to take your ideas more seriously than others here. You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.
I consistently talk about things I have 1-20% confidence in, in a way that makes me sound like I have 80-95% confidence in them. This is largely because there’s no way to non-misleadingly talk about things with 1-20% logical probability (1-20% decision-theoretic importance, whatever that means). It’s really a problem with the norms of communication and the English language, one of the few things where it’s not my fault that I can’t communicate easily. Most of the time I just suck at communicating.
Unfortunately, good rationalists should spend a lot of time hovering around things with 50% probability of being true, and anything moderately on the lower side of that ends up sounding completely ridiculous and anything moderately on the higher side of that ends up sounding completely reasonable.
Then just write “around 1-20%”. It will make your comments more clunky, but it’s not like they can get much worse anyway, and it’s better than the alternative.
(If only there were a language that had short concepts for things like “frequency=3%, utility=+10^15,-10^6 relative to counterfactual surgery world”.)
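(A toy sketch of my own, not anything from the thread: the closest a programming language gets to a “short concept” for that is bundling the frequency and the two counterfactual utilities into one value, so that a 3%-probability claim with a huge asymmetric payoff isn’t flattened into a single misleading number. The `Claim` type and its fields here are illustrative names, nothing more.)

```python
# A compact value carrying "frequency + utilities relative to the
# counterfactual-surgery world", so expected value can be computed
# without discarding the low-probability / high-stakes structure.
from dataclasses import dataclass


@dataclass(frozen=True)
class Claim:
    p: float        # probability (frequency) the claim is true
    u_true: float   # utility if true, relative to the counterfactual world
    u_false: float  # utility if false, relative to the counterfactual world

    def expected_value(self) -> float:
        # Standard expected-utility combination of the two branches.
        return self.p * self.u_true + (1 - self.p) * self.u_false


# "frequency=3%, utility=+10^15,-10^6 relative to counterfactual surgery world"
c = Claim(p=0.03, u_true=1e15, u_false=-1e6)
print(c.expected_value())  # dominated by the upside despite p being only 3%
```

The point of the bundling is exactly the communication problem above: `c.p` sounds ridiculous on its own, while `c.expected_value()` sounds reasonable, and neither alone is the whole claim.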
It’s complicated. The three versions of theism I can immediately think up are I suppose like “some superintelligent agent is computing us and this is important for our decisions”, “all superintelligences converge on the same superintelligent supermoral superpowerful decision algorithm-policy”, and “all superintelligences converge on the same superintelligent supermoral decision algorithm-policy and this is important for our decisions”. In our current state of knowledge these questions are more logical or indexical-the-way-that-word-used-to-make-sense-before-decision-theory than physical (not to say those are fundamentally different kinds of uncertainty, as I believe Nesov likes to point out). So if I start talking about specific facts of the world then I have to start talking about specific facts about logical attractors akin to how fractal structures are attractors for evolving systems, and I can’t point to something nice and concrete like the supposed resurrection of Jesus. This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn’t think the difficulty is insurmountable and one should lose hope already.
There is a wiser part of me that invariably agrees with that, it’s just this stupid motivational coalition of mine that anti-anti-wants to warn others when they’re absolutely certain of something they shouldn’t be absolutely certain about where my warning them has some at least tiny chance of convincing them to be less complacent or notice confusion, so that I won’t be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I’m doing more harm than good the other coalition goes “Oh, you’re telling me to shut up and be evil. Doesn’t this sound familiar...”
Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so I’d like to make a bet.
(I parsed your comment like three different ways when I used three different inductive biases.)
Efforts to figure out what otherworldly superintelligences are up to.
Like I said in an earlier comment, you can’t just state this without a justification to this audience. It may well be that there’s a perfectly good justification for this statement, but we’re at the wrong inferential distance for it. If you want us to update on this supposed evidence for theism you’re going to have to guide us to it, via short, individually supported, straight-forward steps.
This is very weak evidence; consider ideas like the aether, or the standard whipping-boy ’round these parts, phlogiston.
I do not think that theism has a ton of evidence for it. In particular treating things as simply evidence for theism is usually wrong. Things purported to specifically show the truth of Christianity, like Jesus’ image in a shroud, can’t be added to purported miracles worked by Shamans sating warring gods by sacrificing chickens, or humans, for example.
The more the truth is shown within one theory, the more probability mass it steals from others, including atheist theories—and by the time the dust settles after the first round of considering evidence, there are equally plausible theistic beliefs that each disqualify many other similarly theistic ones proportional to their likelihood of being true. The best conclusion is that intelligent people are adept at believing untrue claims about religion similar to folk beliefs around them. Every theistic philosophy has to postulate massive credulity by otherwise intelligent humans about wrong religious claims.
A-gravity-ism isn’t a theory of physics. I can’t tell if that means a theory saying that everything expands in size, creating the illusion of things being attracted to things proportional to size, or a theory saying that this universe is a simulation run from one without gravity as a physical law, or a theory that everything has an essence that seeks other essences in a way unrelated to mass, or what. The denial of anything other than an impossibly exhaustive conjunctive and disjunctive statement isn’t a theory.
Gravity deniers may form a political party with adherents of all the theories I mentioned above to lobby against the “gravitational establishment”. But their collective existence means that each has to have as part of their psychological and sociological theory that it is very easy to be deluded into believing a crackpot, unjustified theory of gravity. No particular theory, including any of theirs, gets the presumption of truth.
We begin with no presumption that mass is attracted to other mass inversely proportional to the square of the distance. We don’t need one in order to end up assigning the odds we do, because for that hypothesis there is truly a ton of evidence.
We don’t see any particular theory that is unique in postulating rampant confabulation and motivated cognition behind beliefs about gravity. Every theory, even the a-gravity-ist ones, also postulates this, so there is nothing that a-gravity-ism is required to explain, or is superior at explaining, even if most intelligent people have been a-gravity-ists. This is particularly true given that a-gravity-ism was the default belief.
And when something is found that better describes matter’s behavior, such as relativity, we see how the new theory says the old one was a good approximation; the ton of evidence was not simply violated.
So I’m thinking to myself, around six years ago, “I can at least manage to publish timeless decision theory, right? That’s got to be around the safest idea I have, it couldn’t get any safer than that while still being at all interesting. I mean, yes, there’s these possible ways you could let these ideas eat your brain but who could possibly be smart enough to understand TDT and still manage to fall for that?”
Lesson learned.
And this is what several levels above me looks like? I’m not omnipotent, yet, but I have a deed or two to my name at this point; for example, when I write Harry Potter fanfiction, it reliably ends up as the most popular HP fanfiction on the Internet. (Those of you who didn’t get here following HPMOR can rule out selection effects at this point.) Several levels above me should make it noticeably easier to show your power in a third-party-noticeable fashion, and the fact that you can’t do so should cause you to question yourself.
It’s the opposite of the lesson I usually try to teach, but in this one case I’ll say it: it’s not the world that’s mad, it’s you.
This doesn’t obviously follow to me. There are skill sets which aren’t due to rationality. Your own skill sets may be due in part to better writing capability and general intelligence.
Mad skillz doesn’t imply rationality. Lack of demonstrable skillz does strongly decrease the probability of mad rashunalitea.
You misinterpreted me, I wasn’t claiming to be several levels above you. That’s my fault for being unclear.
Make something idiotproof and the universe will build a better idiot.
Don’t hold yourself responsible when people go funny in the head on TDT-related matters. Quantum mechanics and relativity have turned much more brains to mush, does that mean they shouldn’t have been published?
That would be a valid argument against, of course a relatively very weak one. Resist the temptation to make issues one-sided.
I got my intuitions from ADT, not TDT, and I would’ve gotten all the same ideas from Anna/Steve even if you hadn’t popularized decision theory. (The general theme had been around since Wei Dai in the early 2000s, no?) So you shouldn’t learn that lesson to too great an extent.
Reading charitably, he may mean you are a rationalist, and the other visiting fellows were peer aspiring rationalists. Also, he did say “nearly.”
Thanks; yeah, I wasn’t writing carefully, but I didn’t mean to say that “I am a significantly better rationalist than anybody else on the planet”, I meant to say “there are important subskills of rationality where I seem to be at roughly the SingInst Research Fellow level of rationality and high above the Less Wrong poster level of rationality”. My apologies for being so unclear.
I don’t think he is “mad”, at least not if you press him enough. A few weeks ago I posted the following comment on one of his Facebook submissions:
His reply (emphasis mine):
It seems to me that he’s still with the rest of humanity when it comes to what he is doing on a daily basis and his underlying desires.
(You argue that the madness in question, if present, is compartmentalized. The intended sense of “madness” (normal use on LW) includes the case of compartmentalized madness, so your argument doesn’t seem to disagree with Eliezer’s position.)
((For those who haven’t seen it yet: http://lesswrong.com/lw/2q6/compartmentalization_in_epistemic_and/ ))
Belatedly.
Hold on. Motivated by what? If its objectives are only implicit in the structure, then why would these objectives include their self-preservation?
BTW, this is neat: http://arxiv.org/PS_cache/arxiv/pdf/0804/0804.3678v1.pdf
It’s an attempt to better unify causal graphs with algorithmic information. The sections about various Markov properties are, I think, very important for explaining differences between CDT and TDT, ’cuz you can talk more clearly about exactly where a decision problem can’t be solved due to Markov condition limitations.
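(To gesture at what I mean with a toy sketch of my own, not anything from the linked paper: Newcomb’s problem is the stock example where the two theories diverge, because CDT’s causal surgery severs the statistical dependence between your action and the predictor’s prediction, while TDT keeps it. The function and parameter names here are my own illustrative choices.)

```python
# Newcomb's problem: a near-perfect predictor fills the opaque box with
# $1,000,000 iff it predicts you one-box; the transparent box always
# holds $1,000. The only modeling difference below is whether the
# prediction is treated as correlated with the action (TDT-ish) or as
# a fixed 50/50 background fact after causal surgery (CDT-ish).
ACCURACY = 0.99  # assumed predictor accuracy


def expected_payoff(action: str, prediction_tracks_action: bool) -> float:
    if prediction_tracks_action:
        # Prediction correlates with the action via the (non-causal) link.
        p_predicted_one = ACCURACY if action == "one-box" else 1 - ACCURACY
    else:
        # Causal surgery cut the link: the prediction is just a prior fact.
        p_predicted_one = 0.5
    opaque = p_predicted_one * 1_000_000
    transparent = 1_000 if action == "two-box" else 0
    return opaque + transparent


# Keeping the dependence (TDT-style): one-boxing wins by a huge margin.
assert expected_payoff("one-box", True) > expected_payoff("two-box", True)
# After the surgery (CDT-style): two-boxing "dominates" by exactly $1,000.
assert expected_payoff("two-box", False) > expected_payoff("one-box", False)
```

The Markov-condition point is that the surgery is only legitimate when the severed edge carries no information, which is exactly the condition the predictor violates.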
In what sense is this paragraph supposed to be distinguishable from gibberish?
It always comes to this, doesn’t it?
My own perspective on this is that most of the aspiring rationalists in the community have their own specialties and niches, and that if I blind myself to skills other than my own, they all look lower-level, but that if I pay attention to what they’re focused on then I see things I can learn from them. Or to put it more succinctly, their levels are in different character classes. While I certainly don’t have faith in anyone’s sanity, I don’t feel like this should put me on an opposing side under ordinary circumstances. I now regret not having met you when I was in the Bay area for rationality bootcamp or Burning Man, but hopefully will get a chance to remedy that the next time I’m in the area.
I agree with this perspective and in retrospect should really have emphasized the “there are many skills of rationality and I only claim to be excellent along those dimensions that I (probably after-the-fact) deem important, skills relating to building lots of models without getting attached to them and finding subtle ways in which concepts are dissatisfactory and must be improved” aspect of my alleged superiority to everything under the sun.
These skills don’t seem to actually slay any problem-monsters or do anything helpful, where wizards and clerics leave a trail of steaming corpses of those monster types. Your rare class seems to be an NPC one, like commoner or adept, which would give you a low CR.
Is there non-dualist theism? If not, that’s the bottleneck making dismissal of theism justified, though ignorance does not excuse inaccurate descriptions of theism.
My problem with Will’s outlook is that if we are indeed being “watched over by a superintelligence”, it doesn’t appear to care about us in any very helpful way. Our relationship to it is therefore more about survival than it is about morality. According to the scenario, there is some thing out there which is all-powerful, whose actions depend partly on our actions, and which doesn’t care about {long list of evolutionary and historical holocausts}, in any way that we would recognize as caring. Clearly, if we had any idea of the relationship between our actions and its actions, it would be in one’s interest, first of all, to act so that it would not allow various awful things to happen to you and anyone you care about, and second, to act so that you might gain some advantage from its powers.
It appears that the only distinctive reason Will has for entertaining such a scenario is the usual malarkey about timeless game-theoretic equilibria… A while back, I was contemplating a post, to be called “Towards a critique of acausal reason”, which was going to mention three fallacies of timeless decision theory: acausal democracy, acausal trade, acausal blackmail. The last two arise from a fallacy of selective attention: to believe them possible, you must only pay attention to possible worlds which only care about you in a highly specific way. But for any possible world where there is an intelligence simulating your response and which will do X if you do Y, there is another possible world where there is an intelligence which will do X if you don’t do Y. And the actual multiplicity of worlds in which intelligences make decisions on the basis of decisions made by agents in other possible worlds that they are simulating is vanishingly small, in the set of all possible worlds. Why the hell would you base your decision, regarding what to do in your own reality, on the opinions or actions of a possible entity in another world? You may as well just flip a coin. The whole idea that intelligences in causally disjoint worlds are in a position to trade, bargain, or arrive at game-theoretic equilibria is deeply flawed; it’s only a highly eccentric agent which “cares” strongly about events which are influenced by only an extremely small fraction of its subjective duplicates (its other selves in the space of possible worlds). So some of these “eccentric agents” may genuinely “do deals”, but there is no reason to think that they are anything more than a vanishingly small minority among the total population of the multiverse. (Obviously it would be desirable for people trying to work rigorously in TDT to make this argument in a rigorous form, but I don’t see anything that’s going to change the basic conclusion.)
So that leaves us in the more familiar situation, of possibly being in a simulation, or possibly facing the rise of a superintelligence in the near future, or possibly being somewhere in the guts of a cosmic superintelligence which either just tolerates our existence because we haven’t crossed thresholds-of-caring yet, or which has a purpose for us which extends to tolerating the holocausts I mentioned earlier. All of this suggests that our survival and well-being are on the line, but it doesn’t suggest that we are embedded in an order that is moral in any conventional sense.
I brought up a similar objection to acausal trade, and found Nesov’s reply somewhat convincing. What do you think?
We are now advanced enough to tackle this issue formally, by trying to construct an equilibrium in a combinatorially exhaustive population of acausal trading programs. Is there an acausal version of the “no-trade theorem”?
His reply doesn’t address the problem of the potentially prohibitive difficulty of acausal trade; it merely appeals to its theoretical possibility. Essentially, the argument is that “there is still a chance”, but that’s not enough.
What does that even mean? Does that mean something like: hypothetical lunar farmers in a hypothetical lunar utopia should send down some ore to Earth, and that actual people hundreds of years earlier in a representative body voted 456-450 not to fund a lunar expedition even with a rider to the bill requiring future farmers to send down ore, but the farmer votes from the future + 450 > 456? So the farmers “promised” to send ore?
It seems more like a real self-inflicted wound than a fallacy or fake blackmail to me; perhaps we don’t disagree. It’s something that is real if one has certain patterns of mind that one could self-modify away from, I think.
By “acausal democracy”, I mean the attempt to justify the practice of democracy—specifically, the act of voting—with timeless decision theory. No-one until you has attempted to depict a genuinely acausal democracy :-) This doesn’t involve the “fallacy of selective attention”, it’s another sort of error, or combination of errors, in which TDT reasoning is supposed to apply to agents with only a bare similarity to yourself. See discussion here for a related example.
I also think we agree regarding acausal blackmail, that for a human being it can only be a mistake. Only one of those “eccentric agents” with a very peculiar utility function or decision architecture could rationally be susceptible to acausal blackmail—its decision procedure would have to insist that “selective attention” (to just those possible worlds where the specific blackmail threat is being made) is important, rather than attending to other worlds where contrary threats are being made, or to worlds where the action under consideration will be rewarded rather than punished, or to worlds where the agent is simply a free agent not being threatened or enticed by a captor who cares about acausal dealmaking (and those worlds should be in the vast majority).
Right, humans can’t even do straightforward causal reasoning, let alone weird superrational reasoning.
The only “plausible” (heh) scenario I can come up with is that a future civilization developed backward time travel, but to avoid paradox it required full non-interaction, so it developed a means of close observation without changing that which is observed, and used it to upload everyone upon their information theoretic death.
I don’t think I really have an outlook, I just notice that I am very confused about a lot of things that other people are ignoring. And my social role is different from my betting odds. (I notice I am confused about whether or not this is justified, about what meta-level policy I should have for situations like this.)
((((I feel compelled to stir up drama for people because they are too complacent to stir up drama for themselves. Unfortunately it is hard to stir up drama by going meta.))))
You’re talking about theodicy; have you read Leibniz on the subject? The most existent of all possible worlds, the world that takes the least bits to specify, because existence is good… Anyway I find it plausible that the universe is weird and that miracles do happen, but once luck reveals clearly how its decision policy works you get Goodhart’s law problems, so it lies low. Bow chicka bow wow, God of the gaps FTW.
In A History of Western Philosophy, Bertrand Russell wrote of Leibniz that
and Russell seems to think that “best of all possible worlds” is the shallow public theodicy, and “most existent” is the private theodicy, and they are not the same thing—since privately (according to Russell’s account), Leibniz speculated that the world which gets to exist is the one which has the most entities in it (maximum number of entities logically capable of coexisting). But then Russell also writes that Leibniz may have considered this a sign of God’s goodness—it’s good to exist, and God makes the world with the most possible things… I am much more sympathetic to Nietzsche’s metaphysics, as described in the posthumous notes collected in The Will to Power, and his skeptical analysis of the psychology behind philosophies which set forth identities such as Reason = Virtue = Happiness. Nietzsche to my knowledge did not speculate as to why there is something rather than nothing, one reason why Heidegger could see Nietzsche’s ontology as the final stage in the forgetting of Being, but his will-to-power analysis is plausible as an explanation of why beings-who-happen-to-exist end up constructing metaphysical systems which say that to be is good, and to be is inevitable, so goodness is inevitable.
The Will to Power is universally regarded as not representative of Nietzsche’s views.
So what parts would he have disagreed with?
So Nietzsche wrote a bunch of stuff in notebooks and even started writing a book called “The Will to Power”. He abandoned it but used a lot of the ideas in his last few works. Upon his death his anti-semitic sister arranged the notebooks and abandoned text into “The Will to Power”. Much of it is in line with stuff he published, and that stuff, it is fair to say, is representative of his views. But where TWTP says things Nietzsche didn’t include in his later works (which were written after the notes used to create TWTP), it’s likely that he didn’t publish those ideas because he ended up not liking them for whatever reason. Plus, the editorial decisions were, after all, made by his sister… for example, Nietzsche made lots of organizational outlines, only one of which had “Discipline and Breeding” as a book title; that that outline was chosen in lieu of others is a result of his sister’s ideology (which Nietzsche opposed).
I doubt there is anything in there that is so far away from Nietzsche’s actual views that you aren’t equipped to talk about Nietzsche (the stuff you talk about above is certainly something he’d be down with). I can’t tell you what specifically is in TWTP that isn’t in his other books because I haven’t read it; it’s usually just something read by Nietzsche scholars.
(Looking at this comment it kind of sounds like I’m playing status games “You read the wrong book.” etc. I don’t mean that, you probably have at least as good an understanding of Nietzsche’s views as I do. Mainly I’m just recommending that you be careful about ascribing all of TWTP to Nietzsche and pointing this out so that people don’t read your comment and then go out and buy TWTP in order to understand Nietzsche. And of course, just because Nietzsche didn’t agree with everything in the book doesn’t mean what’s in there aren’t good ideas.)
I agree with much of what you say, except
There are sections of TWTP—e.g. “The Mechanical Interpretation of the World”—which cover topics simply not addressed in any of Nietzsche’s finished works. (By the way, the version of TWTP that I’m familiar with is Walter Kaufmann’s.) So all we can say is that they lack the final imprimatur of appearing in a book “author”ized by Nietzsche himself. There’s no evidence here of a change of opinion. It is at least possible that he would subsequently have disagreed with some of the thoughts anthologized in TWTP—though presumably he agreed with them at the time he wrote them.
On at least one subject—the meaning of the “eternal recurrence”—I believe TWTP shows that a lot of Nietzsche scholarship has been on the wrong track. Many interpreters have said that the eternal recurrence is a state of mind, or a metaphor, anything but a literal recurrence. But in these notes, Nietzsche shows himself to be interested in eternal recurrence as a physical hypothesis. He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally. He thinks this is the world-picture that 20th-century science will produce and endorse. And then—this is the part I think is hilarious—he thinks that lots of people will kill themselves because they can’t bear the thought of their lives being repeated infinitely often in the future cycles of time. The “superman” is supposed to be someone who finds the eternal recurrence a joyous thing, because they love their life and the whole of existence, and the eternal recurrence provides their existence with a sort of eternity that is otherwise not available in a universe of relentless flux. In this regard Nietzsche’s futurology was doubly wrong—first, that isn’t the world-picture that science produces; second, it’s only a very rare individual who would take this claim—the alleged fact of existing again in a distant future aeon—seriously enough to make it the basis for choosing life or death. But I have the same appreciation for the imagination behind this piece of Nietzschean cultural futurology, as I do for the uniquely weird worldviews that are sometimes exhibited on LW. :-)
Well, they were personal notebooks, so who knows how speculative he was being. The key thing is, this wasn’t what he was working on when he died. Published works intervened between TWTP and his death. That, combined with the sheer implausibility of the metaphysics you’ve described, might suggest he wasn’t that committed to the whole thing ;-). It sounds fascinating though.
Are there any arguments for these claims? I’m fascinated by the (often very compelling!) arguments past generations had for how the physical world had to be. Aristotle is the best at this.
Weird, I’m pretty sure that was in the original.
And I thought it was Voltaire’s satire of Leibniz.
Here: http://www.class.uidaho.edu/mickelsen/texts/Leibniz%20-%20Theodicy.htm
Oh. Yes, the idea was in Leibniz, but the specific quote is Voltaire’s, I believe.
Speaking of Voltaire, his theism is a really good example of meta-contrarianism.
Ah, got it.
Yes, lots of it. E.g. Leibniz’s monadology is monist (obviously); it’s equivalent to computationalism in fact. But note that it’s not like dualism is well-understood ’round these parts either. It’s really hard to find a way in which you can say that a property dualist is wrong. It tends to be like, yeah, we get it, minds reside in brains, neuroscience is cool and shit, but repeatedly bringing it up as if nobody had ever heard that before is a facepalm-inducing red herring.
It seems that monadology relies on something like Plato’s theory of forms. That fills the role usually played by dualism in theism. Is there theism without that?
Leibniz doesn’t believe in material substance, so in no sense is he a dualist. If you are asking if there are materialist theists, eh, maybe, but as far as I know it has never been a well-developed view. That said, the entire platonism-materialism question can probably be reduced to an issue of levels of simulation… in which case it is easy to envision a plausible theism that is essentially dualist but not repugnant to our computationalist sensibilities.
It would be repugnant to their sensibilities if you described in detail the sorts of scenarios that comply with our sensibilities.
For most, probably. But you might be surprised how much unorthodoxy is out there.
If you first tell them, or give them enough information to realize, or strongly suspect, that without this concession by them they fail, then you can get them to agree to very nearly anything.
But those people are slightly different than the versions uninformed of this, people who would reject it.
The unorthodoxy is motivated and not serious in terms of relative degrees of belief based on what is most likely true.
“Fall”? I don’t understand the second sentence either.
Often, though on occasion their reasons are isomorphic to stories we’d find plausible. If someone thought it was worthwhile to reinterpret some of the older theistic philosophers in light of modern information theory and computer science… some interesting ideas might fall out.
But yes- I doubt there are more than a handful of educated theists not working with the bottom line already filled in.
Edited “fall” to “fail”.
The second sentence means I am trying to distinguish between who someone is and who they might have been. Another intuition pump: put identical theists in identical rooms; in one, play a television program explaining how they have to admit that all good evidence makes it unlikely there exists (insert theological thing here: an Adam and Eve, a soul, whatever), and in the other play something unrelated to the issue. Then ask the previously identical people if they believe in whatever poorly backed theological thing they previously believed. The unorthodox will flee the false position, but only if they see it as obviously false.
Something like this.
That doesn’t mean the reasons we find it implausible aren’t good or can’t be taught. Just as teaching how carbon dating relates to the age of the Earth militates against believing it is ~6,000 years old, one can show why what ancestors tell you in dreams isn’t good evidence.
So my conclusion, my supposition, is that if you muster up the most theistic-compatible metaphysics you find plausible, and show it to those theists who don’t know why anything more supernatural is implausible, inconsistent or incoherent, they will reject it.
That they accept it after learning that you have good objections to anything more theistic is not impressive at all.
Got it. Don’t disagree. But it doesn’t follow that a) we should disregard all theistic philosophy or b) not use theistic language. Given that there are live possibilities that resemble theism, the circle of concepts and arguments surrounding traditional, religious theism is likely to be fruitful.
Immortals with infinite mind space definitely should not ignore theistic philosophy.
It’s sometimes useful to use theistic language, sometimes not. Usually when I see it when theism isn’t a subject, it isn’t useful.
Rationalization is an important skill of rationality. (There probably needs to be a post about that.) But anyway, I think my “theistic” intuitions are very similar to those of Thomas Aquinas, a.k.a. the rock that Catholic philosophy is built on. Like, actually similar in that we’re thinking about the same decision agent and its properties, not just we’re thinking about similar ideas.
Theism without computationalism? It’s not popular, but most Less Wrong folk are computationalists AFAIK. Hence the “timeless decision theory” and “Tegmark” and “simulation argument” memes floating around. I don’t see how a computationalist can ignore theism on the grounds that it claims that abstract things exist.
I do not think Plato’s forms are equivalent to computationalism.
Modern platonism is just the view that abstract objects exist.
Do they causally do anything?
Of course not.
What? Of course abstract objects have causal influence… why do you think people don’t think they do?
Because I’ve studied metaphysics? It’s not even a quirky feature of abstract objects it’s often how they are defined. Now that distinction may be merely an indexical one—the physical universe could be an abstraction in some other physical universe and we just call ours ‘concrete’ because we’re in it. But the distinction is still true.
If you can give an instance of an abstract object exerting causal influence that would be big news in metaphysics.
(Note that an abstract object exerting causal influence is not the same as tokens of that abstraction exerting causal influence due to features that the token possesses in virtue of being a token of that abstract object. That is, “Bayes’ Theorem caused me to realize a lot of my beliefs were wrong” refers to the copy of Bayes’ Theorem in your brain, not the Platonic entity. There are also type-causal statements like “Smoking causes cancer”, but these are not claims of abstract objects having causal influence, just abstractions over individual, token instances of causality. None of this, or my assent to lessdazed’s question, reflects a disparaging attitude toward abstract objects. You can’t talk about the world without them. They’re just not what causes are made of.)
Okay, thanks; right after commenting I realized I’d almost certainly mixed up my quotation and referent. (Such things often happen to a computationalist.)
ETA: A few days ago I got the definition of moral cognitivism completely wrong too… maybe some of my neurons are dying. :/
Metaphysics of abstract processes: Pythagoras → Leibniz → Turing. Platonism → monadology → algorithmic information theory.
Math and logic: Archimedes et al. → Leibniz → Turing. Logic → symbolic logic → theory of computation.
Philosophy of cognition: (haven’t researched yet) → Leibniz → Turing. ? → alphabet of thought → Church-Turing thesis.
Computer engineering: Archimedes → Pascal-Leibniz → Turing. Antikythera mechanism → symbolic calculator → computer.
I think you’re vastly overemphasizing the historical importance of Leibniz.
True, but I think only in the same sense that everyone vastly overemphasizes the importance of Babbage. They both made cool theoretical advances that didn’t have much of an effect on later thinking. This gives a sort of distorted view of cause and effect, but the counterfactual worlds are actually worth figuring into your tale in this case. Wow, that would take too long to write out clearly, but maybe it kinda makes sense. (Chaitin actually discovered Leibniz after he developed his brand of algorithmic information theory; but he was like ‘ah, this guy knew where it was at’ when he found out about him.)
Interesting! You have a cite?
This is the original essay I read, I think: http://evans-experientialism.freewebspace.com/chaitin.htm
It’ll take a few minutes, Googling Leibniz+Chaitin gives a lot of plausible hits.
OTOH, Wiener already in 1948 explicitly saw the digital computer as the fulfilment of Leibniz’s calculus ratiocinator. (Quoted on Wiki here, full text (maybe paywalled) here.)
(The history of how the idea of computation got formulated is really pertinent for FAI researchers. Justification is a lot like computation. I think we’re nearing the “Leibniz stage” of technical moral philosophy. Luckily we already have the language of computation (and decision theory) to build off of in order to talk about justification. Hopefully that will reduce R&D time from centuries to decades. I’m kind of hopeful.)
E.g. this is what most theism actually looks like: http://plato.stanford.edu/entries/divine-simplicity/ . A lot of it is simply hypotheses about attractors for superintelligences and the Platonic algorithms that they embody. Trust me, I am not just being syncretic.
Please make a claim. Are you saying that if one were to take a proxy for quality like citations to papers/capita of religious studies branches of universities, or the top theological seminaries attached to the most competitive Ivy League Schools, or similar, you are 95% confident that at least 70% of the theist professors believe something like this?
Or is it a stronger claim? With 50% confidence, what percentage of counties and county-equivalents in the United States have most self-identified theists or spiritualists or whatever believing something like this? 50%? 10%?
In what percentage are there at least ten such people?
I don’t see how that is the claim at issue. Most people are incompetent. That tells us little about what theism is. How would knowing the answer tell us anything useful about whether or not theism itself is or isn’t a tenable philosophical position? I really dislike focusing on individual people, I’d rather look at memes. Can I guess at how many of the SEP’s articles on theism are not-obviously-insane and not just if-a-tree-falls debates? I think that question is much more interesting and informative. I’d say… like, 30%.
Why call it “theism”?
That’s what the Stanford Encyclopedia of Philosophy calls it. Most biologists are mediocre at biology (many are creationists, God forbid!); that doesn’t mean we should call the thing that good biologists do by some other name. (If this is a poor analogy I don’t immediately see how, but it does have the aura of an overly leaky analogy.) If you asked “why reason in terms of theism instead of decision theory?” then I’d say “well we should obviously reason in terms of decision theory; I’d just prefer we not have undue contempt for an interesting memeplex that we’re not yet very familiar with”.
Biology is the repository of probable information left over after putting data and experiments through the sieve of peer review (the process is also “biology”). The more important ideas get parsed more. Mediocre enough biologists don’t add to biology.
Theology starts with a belief system and is the remnants that, by their own lights, theologians have not discarded. The process of discarding is also called theology. Unsophisticated people are likely to fail to see what is wrong with more of the original belief set than sophisticated ones; they don’t add to showing what is wrong with the belief pile. It isn’t a crazy analogy, but it’s not quite symmetrical.
To call this theism says more about the language than the beliefs you describe. Is the word closest in idea-space to this memeplex theism? OK, maybe, but it could have been “hunger for waffles and other, lesser breakfast foods” with a few adjustments to the history without adjusting anything at all about the ideas. These beliefs didn’t originate as the unfalsifiable part of an arbitrary cult focused on breakfast, as it happens.
It’s interesting as the least-easy-to-falsify, arguably unfalsifiable core of motivated, unjustified belief. It’s not interesting as something at all likely to be true.
I disagree; certain ideas that theism originated are as likely to be true as certain ideas about decision theory are likely to be true, because they’re isomorphic.
You are reasoning from cached priors without bothering to recompute likelihood ratios (not like you’re actually looking at evidence at all; did you read the article on divine simplicity? Do you have a knockdown reason that I should ignore that debate other than “stupid people believe in God, therefore belief in God is stupid”?). You are ignoring evidence. “Ignore”: ignorance. You are ignorant about theism. That’s cool; you don’t have all the time in the world. But don’t confidently assert that something is not likely to be true when you clearly know very little about it. This is an important part of rationality.
Edit: In other words, you do not have magical inductive biases and you have seen significantly less evidence than I have. This should be more than enough to cause you to be hesitant.
You confidently assert my ignorance. That assertion is notable.
You’re much more confident of this than I am. You should be more hesitant.
Duly noted. Can we share a few representative reasons? What do you think I don’t already think you know about why “theism” (a word that may soon need to be tabooed) isn’t worth looking into?
I can briefly try to translate the divine simplicity thing: “The perfectly reflective Platonic decision algorithm that performs optimally on all optimization problems doesn’t ‘possess’ the quality of optimizerness—it is optimization, just as it is reflectivity. Being a Platonic algorithm, it does not have inputs or outputs, but controls all programs ambiently. It has no potentiality, only actuality: everything is at equilibrium.” And so on and so forth. (Counterarguments would be like “what, there is a sense of equilibrium that implies that this algorithm is a decision theoretic zombie, I think you’re using a non-intuitive definition of ‘equilibrium’” and things like that, or something. It’s better to talk in terms of decision theory but that doesn’t mean they’re not actually equivalent. The parts that don’t boil down to predictions about decision theory tend to be just quibbling over ways of carving reality, which is often informative but not when the subject matter is so politically charged.)
I think you need to take a big step back and consider what you’ve studied and what you’ve come up with. I’m not sure where divine simplicity fits in your worldview exactly, but in the course of my own decision theory studies, I came up with an issue that seems to shoot down that concept entirely: there can be no decision algorithm that performs optimally on all optimization problems, because there are optimization problems for which the solution space is infinite, and there is an infinite chain of progressively better solutions. Worse, the universe we presently occupy appears to be infinite, and to have such chains for almost all sensible optimization criteria. The best we can do, decision-theory wise, is to bite off special cases, come up with transforms and simplifications to make those cases more broadly applicable, and fall back on imperfect heuristics for the rest.
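A toy illustration of the point about infinite improvement chains (my own example, not from the thread): maximizing f(x) = x / (x + 1) over the positive integers has no optimal solution, because every candidate is strictly beaten by the next one. Any “optimal decision algorithm” would have to return a best element of a chain that has none.

```python
def f(x):
    """Objective to maximize over positive integers: x / (x + 1)."""
    return x / (x + 1)

def improved(x):
    """Return a candidate strictly better than x (always possible here)."""
    return x + 1

# Walk a few steps of the chain: each candidate beats the last,
# so the chain of progressively better solutions never terminates.
x = 1
for _ in range(5):
    better = improved(x)
    assert f(better) > f(x)
    x = better
```

The supremum of f is 1, but no positive integer attains it; that is the shape of the problem being claimed for “almost all sensible optimization criteria” in an infinite universe.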
But there’s a much bigger issue here. It looks to me like you’ve taken a few batches of concentrated confusion—the writings of old philosophers—and invented a novel interpretation to give it meaning. You then took these reinterpretations and mixed them into what started out as a sensible worldview. You’re talking about studying Aquinas and Leibniz, and this makes me very worried, because my longstanding belief is that these authors, and most others of their era, are cognitive poison that will drive you insane. Furthermore, your writings recently look to me like evidence that this may actually be happening. You should probably be looking to consolidate your findings, and to communicate them.
Divine simplicity is a hypothesis, what you say is strong evidence against that hypothesis. But I think it’s still a coherent hypothesis. At the very least we can talk about Goedelian stuff or NFL theorems to counterargue a bunch of the stronger ‘omnipotence, omniscience’ stuff… but things are all weird when you’re that abstract; you can just say, “okay, well, this agent is multipartite and so even if one part has one Chaitin’s constant this other part has another Chaitin’s constant and so you can get around it”, or something, but I doubt that actually works or makes sense. On the other hand it’s always really unclear to me when the math is or isn’t being used outside its intended domain. Basically I notice I am confused when I try to steel man “optimal decision policy” arguments, for or against. (There’s also this other thing that’s like “optimal given boundedness” but I think that doesn’t count.)
I disagree about Aquinas and Leibniz. I see them as putting forth basically sane hypotheses that are probably wrong but probably at least a little relevant for our decision policies. I don’t think that theology is a useful area of study, not when we have decision theory, but I really don’t think that Leibniz especially was off track with his theology. (I dunno if you missed my comments about how he was really thinking in terms of the intuitions behind algorithmic information theory?)
I have significant familiarity with Aquinas, and I do not see anything worth reading Aquinas for, save perhaps arguing with theists. Insofar as there are interesting ideas in his writing, they are better presented elsewhere (particularly in modern work with the benefit of greatly improved knowledge and methods), with greater clarity and without so much nonsense mixed in. Recommending that people read Aquinas, or castigating them for not having read Aquinas, seems like a recipe for wasting their time.
(I agree with this.)
I saw this after making my Plato’s theory of forms comment at 10:19:54AM.
This is what I thought the article was saying.
Everyone seems to be operating under something like the law of conservation of ninjutsu here. You seem to be perhaps the worst offender, with gratuitous offensiveness and the like being approximately equal among all of the few theists here and the many atheists.
In this thread alone:
Also bad is how you characterize what LW thinks; this seems like an artificial way to pretend you have the only or best-informed position, by averaging many people on here with the people who don’t know and don’t care to know about things that the best evidence they have shows are elaborate rationalizations and meta-hipsterism by intellectuals.
Eliot: