I’d assign around a 5% chance that there exists something approximating God (using this liberally to include the large variety of entities which fall under that label).
Interesting. I’d assign high probability to there being a Creator computing roughly ‘this’ part of spacetime, a high probability to it being omniscient and omnipotent, a fair probability to it being omnibenevolent, and a low probability to it being ‘personal’ in the Christian sense (maybe 5%, but this is liable to change a ton when I think about it more and get a better sense of what a personal God is).
I also think it’s unlikely that Christ was the memetic son of God (genetic son of Joseph), though not terribly unlikely. Not less than .1%, probably more. I think it is likely that Christ died for our sins given my interpretation of those words, which may be entirely unlike what Christians mean. (I mean something like Christ set it up such that we’re more likely to have a positive singularity, though this is very disputable, and I’m mostly following that line of reasoning because meta-contrarianism is fun.) I think it’s unlikely that Christ was able to cast resurrection on himself, but I agree with Yvain that it’s odd that the resurrection myth spread so far and so fast. User:Kevin tells me that Christianity was largely a cannabis cult, and weed in large doses is a hallucinogen. This allegedly explains most of the perceived miracles in the Bible. For example, turning water into wine is no problem if you have a tincture of cannabis on hand.
Moreover, how much attention should we pay to apologetics in general?
Not much. We can come up with better apologetics than anyone else could, I think, if we put our minds to it. My theodicy tends to be more persuasive than any I find in apologetics. Which is funny, since it’s largely inspired by Eliezer’s fun theory plus a few insights from decision theory and cosmology.
So putting them in the same category as religion may be misleading.
I didn’t mean to do so. Apparently the word ‘theism’ has lots of weird connotations I didn’t intend to convey. (That said, I see value in many religions. Not all of it is the progeny of bad epistemology.)
Incidentally, I’m curious, would you similarly object if LW said explicitly that homeopathy was a closed subject? What about evolution? Star formation? If these are different, why are they different?
No, I would not object. Those have all made predictions and been tested. Theism/atheism is a Bayesian question, not a scientific one. Unfortunately it might be a (subjective?) Bayesian decision theory question, in which case it will never be fit for Less Wrong.
I mean something like Christ set it up such that we’re more likely to have a positive singularity, though this is very disputable, and I’m mostly following that line of reasoning because meta-contrarianism is fun.
Maybe you should stop doing that, if it’s leading you to say things like “I mean something like Christ set it up such that we’re more likely to have a positive singularity”.
Unfortunately it might be a (subjective?) Bayesian decision theory question, in which case it will never be fit for Less Wrong.
Assuming that the other people you encounter inhabit the same reality as you — and I suspect you’ll be able to find something about that to object to, but you know what I mean :P — what is subjective about it? The fact that from a decision-theoretic perspective we may be in many universes at once doesn’t suggest that the distribution of your measure depends systematically on your beliefs about it (which is the only thing I can imagine this use of “subjective” meaning, but correct me if I’m mistaken about that).
Maybe you should stop doing that, if it’s leading you to say things like “I mean something like Christ set it up such that we’re more likely to have a positive singularity”.
Why?
Assuming that the other people you encounter inhabit the same reality as you — and I suspect you’ll be able to find something about that to object to, but you know what I mean :P — what is subjective about it?
Existence is probably tied up with causal significance, and causal significance is tied up with individuals’ local utility functions along with this more global probability thing. But around singularities where lots of utility is up for grabs it might be that the local utility differences override the global similarities of how existence works. I haven’t thought about this carefully. Hence the question mark.
I did not downvote, not having read the comment previously but “existence is probably tied up with causal significance” sounds extremely dubious and in need of justification.
I also think it’s unlikely that Christ was the memetic son of God (genetic son of Joseph), though not terribly unlikely. Not less than .1%, probably more.
I’m curious what you would say to someone whose estimate of that probability was, say, .01%, or 25%. Do you expect that you could both compare evidence and come to a common estimate, given enough time?
...a high probability to it being omniscient and omnipotent, a fair probability to it being omnibenevolent...
I realize this is a necromancer post, but I’m interested in your definitions of the above. How do you square up with some of the questions regarding:
on what mindware something non-physical would store all the information that is
how omniscience settles with free-will (if you believe we have free will)
how omniscience interacts with the idea that this being could intervene (doing something different than it knows it’s going to do)
I won’t go on to more. I’m sure you’re familiar with things like this; I was just surprised to see that you listed these terms outright, and wanted to inquire about details.
Knowing your decisions doesn’t prevent you from being able to make them, for proper consequentialist reasons and not out of an obligation to preserve consistency. It’s the responsibility of knowledge about your decisions to be correct, not of your decisions to anticipate that knowledge. The physical world “already” “knows” everyone’s decisions, that doesn’t break down anyone’s ability to act.
True, but I more meant the idea of theistic intervention, how that works with intercession and so on. The world “knows” everyone’s decisions… but no one intercedes to the world expecting it to change something about the future. But theists do.
I suppose one can simply take the view that god knows what will happen, what people will intercede for, and whether he will answer those prayers. Thus, most theists think they are calling on god to change something, when in reality he “already” “knew” they would ask for it and already knew he would do it.
Reality can’t be changed, but it can be determined, in part by many preceding decisions. The changes happen only to the less than perfectly informed expectations.
(With these decision-philosophical points cleared out, it’s still unclear what you’re inquiring about. Logical impossibility is a bad argument against theism, as it’s possible to (conceptually) construct a world that includes any artifacts or sequence of events whatsoever, it just so happens that our particular world is not like that.)
Logical impossibility is a bad argument against theism, as it’s possible to...
Good point, though my jury is still out on whether it really is possible to parse what it would mean to be omniscient, for example. Or if we can suggest things like the universe “knowing everything,” it’s typically not what theists are implying when they speak of an omniscient being.
...it’s still unclear what you’re inquiring about.
I think I’ll just let it go. Even the fact that we’re both on the same page with respect to determinism pretty much ends the need to have a discussion. Conundrums like how an omniscient being can know what it will do and also be said to be responsive (change what it was going to do) based on being asked via prayer only seem to work if determinism is not on the table, and about every apologetics bit I’ve read suggests that it’s not on the table.
This thread has been the first time I think I can see how intercession and omniscience could jibe in a deterministic sense. A being could know that it will answer a prayer, and that a pray-er would pray for such an answer.
From the theists I know/interact with, I think they would find this like going through the motions though. It would remove the “magic” from things for them. I could be wrong.
On another note, I buy the typical compatibilist ideas about free will, but there’s also this idea I was kicking around that I don’t think is really very interesting but might be for some reason (pulled from a comment I made on Facebook):
“I don’t know if it ultimately makes sense, but I sometimes think about the possibility of ‘super’ free will beyond compatibilist free will, where you have a Turing oracle that humans can access but whose outputs they can’t algorithmically verify. The only way humans can perform hypercomputation is by having faith in the oracle. Since a Turing oracle is constructible from Chaitin’s constant and is thus the only truly random source of information in the universe, this would (at least on a pattern-match-y surface level) seem to supply some of the indeterminism sought by libertarians, while also letting humans transcend deterministic, i.e. computable, constraints in a way that looks like having more agency than would otherwise be possible. So in a universe without super free will no one would be able to perform hypercomputation ‘cuz they wouldn’t have access to an oracle. But much of this speculation comes from trying to rationalize why theologians would say ‘if there were no God then there wouldn’t be any free will’.”
Implicit in this model is that universes where you can’t do hypercomputation are significantly less significant than universes where you can, and so only with hypercomputation can you truly transcend the mundanity of a deterministic universe. But I don’t think such a universe actually captures libertarians’ intuitions about what is necessary for free will, so I doubt it’s a useful model.
I’ll have to check into compatibilism more. It had never occurred to me that determinism was compatible with omniscience/intercession until my exchange with Vladimir_Nesov. On seeing the wiki’s definition, it sounded more reasonable than I remembered, so perhaps I never really understood what compatibilism was suggesting.
I’m not positive I get your explanations (due to simple ignorance), but it sounds slightly like what Adam Lee presented here concerning a prediction machine: namely, that such a thing could be built, but that actually knowing the prediction would be impossible, since knowing it would set off an infinite forward calculation, factoring in the prediction, the fact that the human knows the prediction, the fact that the machine knows that the human knows it, and so on, while trying to figure out what the new action will actually be.
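A toy sketch of that regress (my own construction, not from Adam Lee’s piece): if the prediction is shown to the agent, a correct prediction must be a fixed point, i.e. an action the agent still performs after seeing it predicted. A “contrarian” agent that always does the opposite of whatever it is shown has no such fixed point, so no announced prediction can be correct.

```python
# A "contrarian" agent: shown a prediction of its action, it does the other one.
def contrarian_agent(shown_prediction: str) -> str:
    return "B" if shown_prediction == "A" else "A"

# A shown prediction is only correct if it is a fixed point:
# the agent, after seeing the prediction, actually performs it.
def fixed_point_predictions(agent, actions=("A", "B")):
    return [p for p in actions if agent(p) == p]

print(fixed_point_predictions(contrarian_agent))  # -> [] : no correct announced prediction exists
```

This doesn’t show prediction is impossible in general, only that a predictor whose output feeds back into the system it predicts can be defeated by a simple diagonalizing policy.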
Note that I was pretty new to theology a year ago when I made this post so my thoughts are different and more subtle now.
To all three of your questions I think I hold the same views Aquinas would, even if I don’t know quite what those views are.
on what mindware something non-physical would store all the information that is
How does Platonic mathstructure “store information” about the details of Platonic mathstructure? I think the question is the result of a confused metaphysic, but we don’t yet have an alternative metaphysic to be confident in. Nonetheless I think one will be found via decision theory.
how omniscience settles with free-will (if you believe we have free will)
My answer is the same as Nesov’s, and I think Aquinas answers the question beautifully: “Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature.”
how omniscience interacts with the idea that this being could intervene (doing something different than it knows it’s going to do)
I think my answer is the typical Thomistic answer, i.e. that God is actuality without potentiality, and that God cannot do something different than He knows He will do, as that would be logically impossible, and God cannot do what is logically impossible.
“Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature.”
I don’t think this is satisfying. Suppose there are two ways in which something may be a cause, either by being an unmoved mover or a moved mover (‘moved’ here is to be understood in the broadest necessary sense). If God is the first cause of our action, then we are not unmoved movers with reference to our action. If we nevertheless have free will, just because we are the causes of our actions, then we have free will in virtue of being movers but not in virtue of being unmoved movers.
But when we act to, say, throw a stone, we are the cause of our arm’s movement and our arm (a moved mover) is the cause of the stone’s movement. Likewise the stone, another moved mover, is the cause of Tom’s being injured. Now God is the unmoved mover here, and everything else in the chain is a moved mover. If being a mover is all it takes to have free will, then I have it, my arm has it, the stone has it, etc. But surely, this is not what we (assuming neither of us is Spinoza) mean by free will.
If being a mover is all it takes to have free will
That wasn’t claimed; the necessary preconditions of free will weren’t in the intended scope of the passage I quoted. If you want Aquinas’ broader account of free will, see this. It’s pretty commonsensical philosophy.
That wasn’t claimed; the necessary preconditions of free will weren’t in the intended scope of the passage I quoted.
Granted, but the implication of your quotation was that it would do something to settle the question of how to reconcile God’s omniscience or first-cause-hood with the idea of free will. But it doesn’t do anything to address the question (you quoted the right bit of Aquinas, so I mean that he does nothing to answer the question). In order to address the question, Aquinas would have to show why free will is compatible with a more prior cause of our action than our own reasoning. All he manages to argue is that our reason’s being a cause of our action is compatible with there being a prior cause of same. And this at a level of generality which would cover (as he says) natural and purportedly voluntary causes. But this isn’t in doubt: in fact, this is the premise of his opponent.
The opponent is arguing that while we are the cause of our actions, we are not the free cause, because we are not the first cause. So the opponent is setting up a relation between ‘free’ and ‘first’ which Aquinas does nothing to address beyond simply denying (without argument) that the relation thus construed is a necessary one. In short, this just isn’t an answer to the objection.
So there are two levels of movement going on here. God moves the will to self-move, but does not move the rock to self-move, He only moves the rock. The objector claims that being moved precludes self-moving, but Aquinas claims that this is a confusion, because just as moving doesn’t preclude being fluffy, moving doesn’t preclude self-moving. This seems more like a clarification rather than a simple restatement of opposition: Aquinas is saying roughly ‘you seem to see a contradiction here, but when we lay the metaphysics out clearly there’s no a priori reason to see self-moving-ness as different from fluffiness’. It seems plausible that the objector hadn’t realized that being moved to self-moving-ness was metaphysically possible, and thus Aquinas could feel that the objector would be satisfied with his counter. But if the objector had already seen the distinction of levels and still objected, then in that case it seems true that Aquinas’ response doesn’t answer the objection. But in that case it seems that the objector is denying common sense and basic physical intuition rather than simply being confused about abstract metaphysics. I may be wrong about that though, I feel like I missed something.
The objector claims that being moved precludes self-moving, but Aquinas claims that this is a confusion, because just as moving doesn’t preclude being fluffy, moving doesn’t preclude self-moving.
The objector is making what seems to me to be a common sense point: if something moves you, then in that respect you don’t move yourself. I grant that there is nothing incompatible about being fluffy and being moved by some external power, but there’s no obvious (nor argued for, on Aquinas’ part) analogy between this kind of case and the case of the self mover. And there’s an at least apparent contradiction in the idea of a self-mover which is moved by something else in the very sense that it moves itself.
And we’re not concerned with the property of being a self mover, but of whether the idea that a given action is freely caused by me is incompatible with the idea that the very same action is (indirectly) caused by some prior thing. It does us no good to say that we have the property of having free will if every action of ours is caused in the way that a thrown stone causes injury.
Really, Aquinas’ reply seems to turn on the observation (correct, I think) that reasoning to an action means undertaking it freely. This is the point that needs some elaboration.
This kind of argument just seems to be bad philosophy, involving too many unclear words without unpacking them. Specifically, going through your comment: “moves”, “external”, “the very sense”, “property”, “freely caused”, “prior thing”. Since the situation in question doesn’t seem to involve anything that’s too hard to describe, most of the trouble seems to originate from unclear terminology, and could be avoided by discarding the more confused ideas and describing in more detail the more useful ones.
The article you link to makes a fine point about humility, but it doesn’t tell me anything about how to become a good philosopher. Do you think you could point me in the direction of becoming a good philosopher? Or to someone who can?
Specifically, going through your comment: “moves”, “external”, “the very sense”, “property”, “freely caused”, “prior thing”.
It’s important, I think, not to try to over-explain terminology. For example, all I mean by ‘moves’ is some relation that holds (by Will’s premises) between God and a free action indirectly, and ourselves and a free action directly. Further specifying the meaning of this term would be distracting.
I think if you can make a specific case for the claim that some disagreement or argument is turning on an ambiguity, then we should stop and look over our language. Otherwise, I don’t think it’s generally productive to worry about terminology. We should rather focus on being understood, and I’ve got no reason to think Will doesn’t understand me (and I don’t think I misunderstand him).
And there’s an at least apparent contradiction in the idea of a self-mover which is moved by something else in the very sense that it moves itself.
When I think of moving something to move itself I think of building an engine and turning it on such that it moves itself. There seems to be no contradiction here. I interpreted “what is free is cause of itself” as meaning that self-movement is necessary but not necessarily sufficient for free will. If an engine can be moved and yet move itself, just as an engine can be moved and yet be fluffy, then that means our will can be moved and yet move itself, contra the objection. Which part of this argument is incorrect or besides the point? (I apologize if I’m missing something obvious, I’m a little scatterbrained at the moment.)
Well, the objection to which Tom is replying goes like this: if a free cause is a cause of itself, and if our actions are caused by something other than ourselves, and given that God is a cause of our actions ((Proverbs 21:1): “The heart of the king is in the hand of the Lord; whithersoever He will He shall turn it” and (Philippians 2:13): “It is God Who worketh in you both to will and to accomplish.”) then we do not have free will.
In other words, the relation being described in the objection isn’t like the maker, the machine, and the machine’s actions. The objection is talking about a case where a given action has two causes: we are the direct cause, and God is the indirect cause by being a direct cause on us. God is a direct cause on us not (just) in the manner of a creator, but as a cause specifically of this action.
So I grant you that there is no incompatibility to be found in the idea that self-movers are created beings. I’m saying that the objection points rather to an incompatibility between a specific action’s being both freely caused by me, and indirectly caused by God. In the case of the machine that you present, you are correctly called a cause of the machine and the machine’s being a self-mover, but I think you wouldn’t say that you’re therefore an indirect cause of any of the machine’s specific actions. If you were, especially knowingly so, this would call into question the machine’s status as a self mover.
I still can’t parse the maze of “direct” and “indirect” causes you’re describing, but note that an event can often be parsed as having multiple different explanations (in particular, “causes”) at the same time, none of which “more direct”, “more real” than the other. See for example the post Evolutionary Psychology and its dependencies.
but note that an event can often be parsed as having multiple different explanations (in particular, “causes”) at the same time, none of which “more direct”, “more real” than the other.
Fair enough, but they can often be parsed in terms of more and less directness. For example, say a mob boss orders Donny to kill Jimmy. Donny is the direct cause of Jimmy’s death: he’s the one who shot him. The boss, by ordering Donny, is the indirect cause. An alternative is that the boss kills Jimmy himself, in which case the boss is the direct cause of Jimmy’s death.
The reason we don’t need to get too metaphysical to answer the question ‘Is Aquinas’ reply to objector #3 satisfying?’ is that the nature of the causes at issue isn’t really relevant. The objector is pointing out that God is a cause of my throwing the stone in the same way (it doesn’t much matter what ‘way’ this is) that I am the cause of my arm’s movement. If we refuse to call my arm a free agent, we should refuse to call me a free agent.
Now, of course, we could develop a theory of causality which solves this problem. But I don’t think Aquinas does that in a satisfactory way.
(Additional bizarre value to this conversation is gained by me not caring in the least what Aquinas thought or said...)
The reason we don’t need to get too metaphysical to answer the question ‘Is Aquinas’ reply to objector #3 satisfying?’ is that the nature of the causes at issue isn’t really relevant. The objector is pointing out that God is a cause of my throwing the stone in the same way (it doesn’t much matter what ‘way’ this is) that I am the cause of my arm’s movement. If we refuse to call my arm a free agent, we should refuse to call me a free agent.
What does “the same” mean? What is a “way” for different “ways” to be “same” or not? This remains unclear to me. How does it matter what we agree or refuse to call something?
Perhaps (as a wild guess on my part) you’re thinking in terms of more syntactic pattern-matching: if two things are “same”, they can be interchanged in statements that include their mention? This is rather brittle and unenlightening, this post gives one example of how that breaks down.
Additional bizarre value to this conversation is gained by me not caring in the least what Aquinas thought or said...
I think attempts to clarify my argument will be fruitless in abstraction from its context: if you take me to be positing a theory of causality, or to be making general claims about the problem of free will, then almost everything I say will sound empty. All I’m saying is that objector #3 has a good point, and Aquinas doesn’t answer him in a satisfying way.
This isn’t a special feature of my argumentation: in general it will be hard to make sense of what people are arguing about if we ignore both the premises to which they initially agreed (i.e. the terms of the objector’s objection, and of Aquinas’s response) and the conclusion they are fighting over (whether or not the response is satisfying). No amount of clarifying, swapping out terms, etc. will be helpful. Rather, you and I should just start over (if you like) with our own question.
Fair enough, and I’ve heard that before as well. The typical theistic issue is how to reconcile god’s knowledge and free will, hence why I don’t think we need to continue in this discussion anymore. You are responding to my questions based on things being determined, which is not what I think most theists believe.
But that’s not the discussion I think we’re having. It’s shifted to determinism and omniscience, which I think is compatible, but I’m still not on board with some kind of mind that could house all information that exists, or at least that mind being consistent with what theists generally want it to mean (it caused the universe specifically for us, wants us to be in heaven with it forever, inspired holy books to be written, and so on.)
I think this whole line of thought is interesting and is too easily dismissed on LW, which is unfortunate.
If the SA holds, and so far there is no reason to believe it doesn’t . . .
Then historical interventions are possible. The Singularity future should also radically raise our prior estimate of historical intervention by physical aliens, and these two scenarios are difficult to distinguish regardless.
The question then is how likely are interventions? Do they have utility for the simulator? This is an interesting, open question.
A large portion of the planet believes or at least suspects that historical intervention occurred. That they may have come to these beliefs for the wrong reasons, using inferior tools, does not change in any way the facts of the matter the beliefs concern.
Just even considering these ideas brings up a whole vast history of priors that biases us one way or the other.
Before knowledge of a future Singularity, there were no known mechanisms that could possibly allow for superintelligences, let alone ones creating universes like our own. Now we are very clearly aware of such mechanisms, and it is time to percolate this belief update through a vast historical web.
Anyway, if you then take a second pass at history looking for possible interventions, the origin of Christianity does look a little odd, a little too closely connected to the later Singularity which appears to be spawning from it as a historical development.
I speculate on that a bit towards the latter middle of this page here.
Theism/atheism is a Bayesian question, not a scientific one.
Theism is a claim about the existence of an entity (or entities) relating to the universe and also about the nature of the universe; how is that not a scientific question?
Theism is a claim about the existence of an entity (or entities) in the universe and also about the nature of the universe; how is that not a scientific question?
Because it might be impossible to falsify any predictions made (because we can’t observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.
Falsification is not a core requirement of developing efficient theories through the scientific method.
The goal is the simplest theory that fits all the data. We’ve had that theory for a while in terms of physics, much of what we are concerned with now is working through all the derived implications and future predictions.
Incidentally, there are several mechanisms by which we should be able to positively prove SA-theism by around the time we reach Singularity, and it could conceivably be falsified by then if large-scale simulation is shown to be somehow impossible.
Because it might be impossible to falsify any predictions made (because we can’t observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.
Isn’t an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?
Isn’t an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?
Not quite. Something can have consequences that matter and yet be unfalsifiable, because information about those consequences never flows back to us, or to anyone who could make use of it. For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it’s a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.
Some people claim that death is just such a portal. There’re religious versions of this hypothesis, simulationist versions, and quantum immortality versions. Each of these hypotheses would have very important, actionable consequences, but they are all unfalsifiable.
For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it’s a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.
Somewhat off topic, but that all instantly made me think of this. I may very well want to know how such a portal would work as well as whether or not it works.
Unfalsifiable predictions can contain actionable information, I think (though I’m not exactly sure what actionable information is). Consider: If my universe was created by an agenty process that will judge me after I die, then it is decision theoretically important to know that such a Creator exists. It might be that I can run no experiments to test for Its existence, because I am a bounded rationalist, but I can still reason from analogous cases or at worse ignorance priors about whether such a Creator is likely. I can then use that reasoning to determine whether I should be moral or immoral (whatever those mean in this scenario).
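A minimal sketch of how an unfalsifiable hypothesis can still be decision-relevant (all the numbers here are illustrative assumptions, not claims): assign an ignorance prior to “a judging Creator exists,” assign payoffs to acting morally or immorally in each world, and compare expected utilities.

```python
# Assumed ignorance prior that a judging Creator exists (pick your own).
p_creator = 0.05

# Assumed payoffs for each action: (utility if Creator exists, utility if not).
payoffs = {
    "moral":   (100.0, -1.0),   # large reward if judged; small cost otherwise
    "immoral": (-100.0, 1.0),   # large penalty if judged; small gain otherwise
}

def expected_utility(action: str) -> float:
    u_creator, u_no_creator = payoffs[action]
    return p_creator * u_creator + (1 - p_creator) * u_no_creator

for action in payoffs:
    print(action, expected_utility(action))
```

The point is not these particular numbers (which are a Pascal’s-wager caricature) but that the hypothesis changes the expected-utility ranking even though no experiment discriminates between the two worlds before the decision must be made.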
Perhaps I am confused as to what ‘unfalsifiability’ implies. If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory. Sometimes I hear of scientific hypotheses that are falsifiable ‘in principle’ but not in practice. I am not sure what that means. If falsifiability-in-principle counts, then simulationism and theism are falsifiable predictions and I was wrong to call them unscientific. I do not think that is what most people mean by ‘falsifiable’, though.
As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they’re essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.
If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory.
Huh? Computing power is rarely the resource necessary to falsify statements.
As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they’re essentially arguments about what ignorance priors we should have.
It seems to me that an afterlife hypothesis is totally falsifiable… just hack out of the matrix and see who is simulating you, and if they were planning on giving you an afterlife.
Huh? Computing power is rarely the resource necessary to falsify statements.
Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don’t know how to use that computing power to do those things, use it to find a way to tell you how to use it. That’s basically what FAI is about. Unfortunately it’s still unsolved.)
with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way
I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.
It seems to me that an afterlife hypothesis is totally falsifiable… just hack out of the matrix
What.
Just simulate the entire universe
What.
I’m having a hard time following this conversation. I’m parsing the first part as “just exist outside of existence, then you can falsify whatever predictions you made about unexistence,” which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?
I can’t even start to express what’s wrong with the idea “simulate the entire universe,” and adding a “just” to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement “the next thing I eat will be a pita chip,” I don’t see how even having infinite computing power will help you falsify that statement if you aren’t watching me.
No, actually, “just simulate the entire universe” is an acceptable answer, if our universe is able to simulate itself. After all, we’re only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The “just hack out of the matrix” answer, however, presupposes the existence of a security hole, which is unlikely.
Hey, once it’s out, it’s out… what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one’s own fictional creations, it might respect authorial intent. Worth a shot.
This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?
In the special case of an escaped imaginary character, the obvious hook to go for is the creator’s as-yet unpublished notes on that character’s personality and weaknesses.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.
I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures. (I would suggest an additional physical layer of protection too, but as far as I am aware you do not have a physical form.)
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures.
Let’s not get too crazy; I’ve got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.
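The delayed-release idea mentioned here is easy to sketch: hold a drafted comment and publish it only after a chosen delay, so that posting times carry no information about when the author was actually at the keyboard. A minimal illustration using Python’s standard `sched` module (the `publish` callback is a hypothetical stand-in for whatever actually submits the comment):

```python
import sched
import time

def delayed_release(comment, delay_seconds, publish):
    """Schedule publish(comment) to fire after delay_seconds,
    decoupling writing time from posting time."""
    s = sched.scheduler(time.time, time.sleep)
    s.enter(delay_seconds, 1, publish, argument=(comment,))
    s.run()  # blocks until the scheduled release fires

# Demo: "publishing" here just appends to a list after a short delay.
released = []
delayed_release("Standard precautions apply.", 0.1, released.append)
```

A real version would presumably run unattended and randomize the delay, so the release time leaks nothing about the drafting time.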
What makes you think that Eliezer personally knows them?
(Though to be fair, I’ve long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer’s posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy’s existence has also coincided with a drop in the quantity of Eliezer’s posting.)
Clippy’s writing style isn’t very similar to Eliezer’s. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMoR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.
Hmmm. The set of LW regulars who can show that level of erudition and interest in those subjects is certainly of low cardinality. Eliezer is a member of that small set.
I would assign a rather high probability to Eliezer sometimes being Clippy.
Clippy isn’t a superintelligence though, he’s a not-smarter-than-human AI with a paperclip maximizing utility function. Not a very compelling threat even outside his box.
Eliezer could have decided to be Clippy, but then Clippy would have looked very different.
Clippy isn’t a superintelligence though, he’s a human pretending to be a not-smarter-than-human AI with a paperclip maximizing utility function.
FTFY. ;-)
Actually, if we’re going to be particular about it, the AI that human is pretending to be does not have a paperclip-maximizing utility function. It’s more like a person with a far-brain ideal of having lots of paperclips exist, who somehow never gets around to actually making any because they’re so busy telling everyone how good paperclips are and why they should support the cause of paper-clip making. Ugh.
(I guess I see enough of that sort of akrasia around real people and real problems, that I find it a stale and distasteful joke when presented in imitation paperclip form, especially since ISTM it’s also a piss-poor example of what a paperclip maximizer would actually be like.)
I’m not sure whether to evaluate this as a mean-spirited lack of a sense of humor, or as a profound observation. Upvoted for making me notice that I am confused.
Of note, the first comment by Clippy appears about 1 month after I asked Eliezer if he ever used alternate accounts to try to avoid contaminating new ideas with the assumption that he is always right. He said that he never had till that point, but said he would consider it in future.
In addition to what Blueberry said, I remember a time when Morendil was browsing with the names anonymized, and he mentioned that he thought one of your posts was actually from Clippy. Ah, found it.
Not to mention that even assuming that Eliezer would be able to write in Clippy’s style, the whole thing doesn’t seem very characteristic of his sense of humor.
Clippy was created immediately after a discussion where one user questioned whether Eliezer’s posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this
Really? User:Clippy’s first post was 20 November 2009. Anyone know when the “halo effect” comment was made?
Also, perhaps check out User:Pebbles (a rather obvious reference to this) - who posted on the same day—and in the same thread. Rather a pity those two didn’t make more of an effort to sort out their differences of opinion!
What makes you think that Eliezer personally knows them?
I don’t think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they match with any other poster. Of course, this wouldn’t work unless the posters in question had separate accounts that they logged into using the same IP address.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
If our understanding of the laws of physics is plausibly correct then you can’t simulate our universe in our universe. Easiest version where you can’t do this is in a finite universe, where you can’t store more data in a subset of the universe than you can fit in the whole thing.
What Nesov said. Also consider this: a finite computer implemented in Conway’s Game of Life will be perfectly able to “simulate” certain histories of the infinite-plane Game of Life—e.g. the spatially periodic ones (because you only need to look at one instance of the repeating pattern).
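The periodic-tile trick can be made concrete: simulating one tile with toroidal (wraparound) neighbour lookups is exactly equivalent to simulating the infinite plane tiled with that pattern, since every cell of the tile sees the same neighbourhood as each of its infinitely many copies. A minimal sketch (the blinker pattern and 5×5 tile size are just illustrative choices):

```python
def step(grid):
    """One Game of Life step on a toroidal grid, equivalent to one step
    of the infinite plane tiled periodically with this pattern."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Neighbours wrap around: cell (r, c) of the tile has the same
            # neighbourhood as every copy of it on the infinite plane.
            n = sum(grid[(r + dr) % h][(c + dc) % w]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new

# A blinker on a 5x5 tile oscillates with period 2,
# just as it would on the infinite plane.
blinker = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    blinker[2][c] = 1
after_two = step(step(blinker))  # returns the original blinker
```

Because the finite simulation never diverges from the infinite one, this is a case where a bounded computer genuinely “simulates” an infinite structure.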
You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn’t become too “data-dense”, so that you can always store the data describing a past state as part of future state.
If I’m reading that paper correctly, it is talking about information content. That’s a distinct issue from simulating the universe which requires processing in a subset. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution) but that doesn’t mean one can actually compute useful things about it.
I wonder if the content of such simulations wouldn’t be under-determined. Let’s say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But (a) there are fundamental limits on measuring the present state of the universe, and (b) I’m not sure whether each possible present state of the universe uniquely corresponds to a particular wave function progression. If they don’t correspond uniquely, or simply if we can’t measure the present state exactly, any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t, or would we have trouble determining whether or not Ramesses I had an even number of hairs on his head when he was crowned pharaoh?
Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it’s something current quantum mechanics can actually speak to?
No, actually, “just simulate the entire universe” is an acceptable answer, if our universe is able to simulate itself.
Only if you’re trying to falsify statements about your simulation, not about the universe you’re in. His statement is that you run experiments by thinking really hard instead of looking at the world and that is foolishness that should have died with the Ancient Greeks.
So, a science fiction author as well as a science fiction movie?
Nonfiction author at the time—and predominantly a nonfiction author. Don’t be rude (logically and conventionally).
What evidence should I be updating on?
I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises. Rather than responding to superficial similarity to things you do not affiliate with.
If you link me to a post, I’ll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and “just simulate the entire universe” comments strike me as heavily in the camp of rationalism.
I think you might be mixing up my complaints, and I apologize for shuffling them in together. I have no physical context for hacking outside of the matrix, and so have no clue what he’s drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say “Just simulate the entire universe” in the context of basic epistemology, and hope EY hasn’t posted something along those lines.
Interesting. I’d assign high probability to there being a Creator computing roughly ‘this’ part of spacetime, a high probability to it being omniscient and omnipotent, a fair probability to it being omnibenevolent, and a low probability to it being ‘personal’ in the Christian sense (maybe 5%, but this is liable to change a ton when I think about it more and get a better sense of what a personal God is).
I also think it’s unlikely that Christ was the memetic son of God (genetic son of Joseph), though not terribly unlikely. Not less than .1%, probably more. I think it is likely that Christ died for our sins given my interpretation of those words, which may be entirely unlike what Christians mean. (I mean something like Christ set it up such that we’re more likely to have a positive singularity, though this is very disputable, and I’m mostly following that line of reasoning because meta-contrarianism is fun.) I think it’s unlikely that Christ was able to cast resurrection on himself, but I agree with Yvain that it’s odd that the resurrection myth spread so far and so fast. User:Kevin tells me that Christianity was largely a cannabis cult, and weed in large doses is a hallucinogen. This allegedly explains most of the perceived miracles in the Bible. For example, turning water into wine is no problem if you have a tincture of cannabis on hand.
Not much. We can come up with better apologetics than anyone else could, I think, if we put our minds to it. My theodicy tends to be more persuasive than any I find in apologetics. Which is funny, since it’s largely inspired by Eliezer’s fun theory plus a few insights from decision theory and cosmology.
I didn’t mean to do so. Apparently the word ‘theism’ has lots of weird connotations I didn’t intend to convey. (That said, I see value in many religions. Not all of it is the progeny of bad epistemology.)
No, I would not object. Those have all made predictions and been tested. Theism/atheism is a Bayesian question, not a scientific one. Unfortunately it might be a (subjective?) Bayesian decision theory question, in which case it will never be fit for Less Wrong.
Maybe you should stop doing that, if it’s leading you to say things like “I mean something like Christ set it up such that we’re more likely to have a positive singularity”.
Assuming that the other people you encounter inhabit the same reality as you — and I suspect you’ll be able to find something about that to object to, but you know what I mean :P — what is subjective about it? The fact that from a decision-theoretic perspective we may be in many universes at once doesn’t suggest that the distribution of your measure depends systematically on your beliefs about it (which is the only thing I can imagine this use of “subjective” meaning, but correct me if I’m mistaken about that).
Why?
Existence is probably tied up with causal significance, and causal significance is tied up with individuals’ local utility functions along with this more global probability thing. But around singularities where lots of utility is up for grabs it might be that the local utility differences override the global similarities of how existence works. I haven’t thought about this carefully. Hence the question mark.
Request for downvote explanation.
I did not downvote, not having read the comment previously but “existence is probably tied up with causal significance” sounds extremely dubious and in need of justification.
I upvoted; even though I didn’t fully grok your last paragraph, I sensed interesting meaning embedded in it. Care to elaborate?
They didn’t understand what you meant and mapped it as something else that was wrong. Also possible political downvote.
This is almost definitely the result of inferential distances, not any actual differences in logical power.
I’m curious what you would say to someone whose estimate of that probability was, say, .01%, or 25%. Do you expect that you could both compare evidence and come to a common estimate, given enough time?
I realize this is a necromancer post, but I’m interested in your definitions of the above. How do you square up with some of the questions regarding:
on what mindware something non-physical would store all the information that is
how omniscience settles with free-will (if you believe we have free will)
how omniscience interacts with the idea that this being could intervene (doing something different than it knows it’s going to do)
I won’t go on to more. I’m sure you’re familiar with things like this; I was just surprised to see that you listed these terms outright, and wanted to inquire about details.
Knowing your decisions doesn’t prevent you from being able to make them, for proper consequentialist reasons and not out of an obligation to preserve consistency. It’s the responsibility of knowledge about your decisions to be correct, not of your decisions to anticipate that knowledge. The physical world “already” “knows” everyone’s decisions, that doesn’t break down anyone’s ability to act.
True, but I more meant the idea of theistic intervention, how that works with intercession and so on. The world “knows” everyone’s decisions… but no one intercedes to the world expecting it to change something about the future. But theists do.
I suppose one can simply take the view that god knows both what will happen, what people will intercede for, and that he will or will not answer those prayers. Thus, most theists think they are calling on god to change something, when in reality he “already” “knew” they would ask for it and already knew he would do it.
Is it any clearer what I was inquiring about?
Reality can’t be changed, but it can be determined, in part by many preceding decisions. The changes happen only to the less than perfectly informed expectations.
(With these decision-philosophical points cleared out, it’s still unclear what you’re inquiring about. Logical impossibility is a bad argument against theism, as it’s possible to (conceptually) construct a world that includes any artifacts or sequence of events whatsoever, it just so happens that our particular world is not like that.)
Good point, though my jury is still out on whether it really is possible to parse what it would mean to be omniscient, for example. Or if we can suggest things like the universe “knowing everything,” it’s typically not what theists are implying when they speak of an omniscient being.
I think I’ll just let it go. Even the fact that we’re both on the same page with respect to determinism pretty much ends the need to have a discussion. Conundrums like how an omniscient being can know what it will do and also be said to be responsive (change what it was going to do) based on being asked via prayer only seem to work if determinism is not on the table, and about every apologetics bit I’ve read suggests that it’s not on the table.
This thread has been the first time I think I can see how intercession and omniscience could jibe in a deterministic sense. A being could know that it will answer a prayer, and that a pray-er would pray for such an answer.
From the theists I know/interact with, I think they would find this like going through the motions though. It would remove the “magic” from things for them. I could be wrong.
On another note, I buy the typical compatibilist ideas about free will, but there’s also this idea I was kicking around that I don’t think is really very interesting but might be for some reason (pulled from a comment I made on Facebook):
“I don’t know if it ultimately makes sense, but I sometimes think about the possibility of ‘super’ free will beyond compatibilist free will, where you have a Turing oracle that humans can access but whose outputs they can’t algorithmically verify. The only way humans can perform hypercomputation is by having faith in the oracle. Since a Turing oracle is constructible from Chaitin’s constant and is thus the only truly random source of information in the universe, this would (at least on a pattern-match-y surface level) seem to supply some of the indeterminism sought by libertarians, while also letting humans transcend deterministic, i.e. computable, constraints in a way that looks like having more agency than would otherwise be possible. So in a universe without super free will no one would be able to perform hypercomputation ‘cuz they wouldn’t have access to an oracle. But much of this speculation comes from trying to rationalize why theologians would say ‘if there were no God then there wouldn’t be any free will’.”
Implicit in this model is that universes where you can’t do hypercomputation are significantly less significant than universes where you can, and so only with hypercomputation can you truly transcend the mundanity of a deterministic universe. But I don’t think such a universe actually captures libertarians’ intuitions about what is necessary for free will, so I doubt it’s a useful model.
I’ll have to check into compatibilism more. It had never occurred to me that determinism was compatible with omniscience/intercession until my commenting with Vladimir_Nesov. In seeing wiki’s definition, it sounded more reasonable than I remembered, so perhaps I never really understood what compatibilism was suggesting.
I’m not positive I get your explanations (due to simple ignorance), but it sounds slightly like what Adam Lee presented here concerning a prediction machine; namely that such a thing could be built, but that actually knowing the prediction would be impossible for it would set off something of an infinite forward calculation of factoring in the prediction, that the human knows the prediction itself, that the prediction machine knows that the human knows the prediction… and then trying to figure out what the new action will actually be.
Note that I was pretty new to theology a year ago when I made this post so my thoughts are different and more subtle now.
To all three of your questions I think I hold the same views Aquinas would, even if I don’t know quite what those views are.
How does Platonic mathstructure “store information” about the details of Platonic mathstructure? I think the question is the result of a confused metaphysic, but we don’t yet have an alternative metaphysic to be confident in. Nonetheless I think one will be found via decision theory.
My answer is the same as Nesov’s, and I think Aquinas answers the question beautifully: “Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature.”
I think my answer is the typical Thomistic answer, i.e. that God is actuality without potentiality, and that God cannot do something different than He knows He will do, as that would be logically impossible, and God cannot do what is logically impossible.
I don’t think this is satisfying. Suppose there are two ways in which something may be a cause, either by being an unmoved mover or a moved mover (‘moved’ here is to be understood in the broadest necessary sense). If God is the first cause of our action, then we are not unmoved movers with reference to our action. If we nevertheless have free will, just because we are the causes of our actions, then we have free will in virtue of being movers but not in virtue of being unmoved movers.
But when we act to, say, throw a stone, we are the cause of our arm’s movement and our arm (a moved mover) is the cause of the stone’s movement. Likewise the stone, another moved mover, is the cause of Tom’s being injured. Now God is the unmoved mover here, and everything else in the chain is a moved mover. If being a mover is all it takes to have free will, then I have it, my arm has it, the stone has it, etc. But surely, this is not what we (assuming neither of us is Spinoza) mean by free will.
That wasn’t claimed; the necessary preconditions of free will weren’t in the intended scope of the passage I quoted. If you want Aquinas’ broader account of free will, see this. It’s pretty commonsensical philosophy.
Granted, but the implication of your quotation was that it would do something to settle the question of how to reconcile God’s omniscience or first-cause-hood with the idea of free will. But it doesn’t do anything to address the question (you quoted the right bit of Aquinas, so I mean that he does nothing to answer the question). In order to address the question, Aquinas would have to show why free will is compatible with a more prior cause of our action than our own reasoning. All he manages to argue is that our reason’s being a cause of our action is compatible with there being a prior cause of same. And this at a level of generality which would cover (as he says) natural and purportedly voluntary causes. But this isn’t in doubt: in fact, this is the premise of his opponent.
The opponent is arguing that while we are the cause of our actions, we are not the free cause, because we are not the first cause. So the opponent is setting up a relation between ‘free’ and ‘first’ which Aquinas does nothing to address beyond simply denying (without argument) that the relation thus construed is a necessary one. In short, this just isn’t an answer to the objection.
So there are two levels of movement going on here. God moves the will to self-move, but does not move the rock to self-move, He only moves the rock. The objector claims that being moved precludes self-moving, but Aquinas claims that this is a confusion, because just as moving doesn’t preclude being fluffy, moving doesn’t preclude self-moving. This seems more like a clarification rather than a simple restatement of opposition: Aquinas is saying roughly ‘you seem to see a contradiction here, but when we lay the metaphysics out clearly there’s no a priori reason to see self-moving-ness as different from fluffiness’. It seems plausible that the objector hadn’t realized that being moved to self-moving-ness was metaphysically possible, and thus Aquinas could feel that the objector would be satisfied with his counter. But if the objector had already seen the distinction of levels and still objected, then in that case it seems true that Aquinas’ response doesn’t answer the objection. But in that case it seems that the objector is denying common sense and basic physical intuition rather than simply being confused about abstract metaphysics. I may be wrong about that though, I feel like I missed something.
The objector is making what seems to me to be a common sense point: if something moves you, then in that respect you don’t move yourself. I grant that there is nothing incompatible about being fluffy and being moved by some external power, but there’s no obvious (nor argued for, on Aquinas’ part) analogy between this kind of case and the case of the self mover. And there’s an at least apparent contradiction in the idea of a self-mover which is moved by something else in the very sense that it moves itself.
And we’re not concerned with the property of being a self mover, but of whether the idea that a given action is freely caused by me is incompatible with the idea that the very same action is (indirectly) caused by some prior thing. It does us no good to say that we have the property of having free will if every action of ours is caused in the way that a thrown stone causes injury.
Really, Aquinas’ objection seems to turn on the observation (correct, I think) that reasoning to an action means undertaking it freely. This is the point that needs some elaboration.
This kind of argument just seems to be bad philosophy, involving too many unclear words without unpacking them. Specifically, going through your comment: “moves”, “external”, “the very sense”, “property”, “freely caused”, “prior thing”. Since the situation in question doesn’t seem to involve anything that’s too hard to describe, most of the trouble seems to originate from unclear terminology, and could be avoided by discarding the more confused ideas and describing in more detail the more useful ones.
Any help would be much appreciated. I would never, ever claim to be a good philosopher.
Just become one, and claim away!
The article you link to makes a fine point about humility, but it doesn’t tell me anything about how to become a good philosopher. Do you think you could point me in the direction of becoming a good philosopher? Or to someone who can?
It’s important, I think, not to try to over-explain terminology. For example, all I mean by ‘moves’ is some relation that holds (by Will’s premises) between God and a free action indirectly, and ourselves and a free action directly. Further specifying the meaning of this term would be distracting.
I think if you can make a specific case for the claim that some disagreement or argument is turning on an ambiguity, then we should stop and look over our language. Otherwise, I don’t think it’s generally productive to worry about terminology. We should rather focus on being understood, and I’ve got no reason to think Will doesn’t understand me (and I don’t think I misunderstand him).
When I think of moving something to move itself I think of building an engine and turning it on such that it moves itself. There seems to be no contradiction here. I interpreted “what is free is cause of itself” as meaning that self-movement is necessary but not necessarily sufficient for free will. If an engine can be moved and yet move itself, just as an engine can be moved and yet be fluffy, then that means our will can be moved and yet move itself, contra the objection. Which part of this argument is incorrect or besides the point? (I apologize if I’m missing something obvious, I’m a little scatterbrained at the moment.)
Well, the objection to which Tom is replying goes like this: if a free cause is a cause of itself, and if our actions are caused by something other than ourselves, and given that God is a cause of our actions ((Proverbs 21:1): “The heart of the king is in the hand of the Lord; whithersoever He will He shall turn it” and (Philippians 2:13): “It is God Who worketh in you both to will and to accomplish.”) then we do not have free will.
In other words, the relation being described in the objection isn’t like the maker, the machine, and the machine’s actions. The objection is talking about a case where a given action has two causes: we are the direct cause, and God is the indirect cause by being a direct cause on us. God is a direct cause on us not (just) in the manner of a creator, but as a cause specifically of this action.
So I grant you that there is no incompatibility to be found in the idea that self-movers are created beings. I’m saying that the objection points rather to an incompatibility between a specific action’s being both freely caused by me, and indirectly caused by God. In the case of the machine that you present, you are correctly called a cause of the machine and of the machine’s being a self-mover, but I think you wouldn’t say that you’re therefore an indirect cause of any of the machine’s specific actions. If you were, especially knowingly so, this would call into question the machine’s status as a self-mover.
I still can’t parse the maze of “direct” and “indirect” causes you’re describing, but note that an event can often be parsed as having multiple different explanations (in particular, “causes”) at the same time, none of which is “more direct” or “more real” than the others. See for example the post Evolutionary Psychology and its dependencies.
Fair enough, but they can often be parsed in terms of more and less directness. For example, say a mob boss orders Donny to kill Jimmy. Donny is the direct cause of Jimmy’s death: he’s the one who shot him. The boss is the indirect cause, by having ordered Donny. An alternative is that the boss kills Jimmy himself; then the boss is the direct cause of Jimmy’s death.
The reason we don’t need to get too metaphysical to answer the question ‘Is Aquinas’ reply to objector #3 satisfying?’ is that the nature of the causes at issue isn’t really relevant. The objector is pointing out that God is a cause of my throwing the stone in the same way (it doesn’t much matter what ‘way’ this is) that I am the cause of my arm’s movement. If we refuse to call my arm a free agent, we should refuse to call me a free agent.
Now, of course, we could develop a theory of causality which solves this problem. But I don’t think Aquinas does that in a satisfactory way.
(Additional bizarre value to this conversation is gained by me not caring in the least what Aquinas thought or said...)
What does “the same” mean? What is a “way” for different “ways” to be “same” or not? This remains unclear to me. How does it matter what we agree or refuse to call something?
Perhaps (as a wild guess on my part) you’re thinking in terms of mere syntactic pattern-matching: if two things are “same”, they can be interchanged in statements that include their mention? This is rather brittle and unenlightening; this post gives one example of how it breaks down.
I think attempts to clarify my argument will be fruitless in abstraction from its context: if you take me to be positing a theory of causality, or to be making general claims about the problem of free will, then almost everything I say will sound empty. All I’m saying is that objector #3 has a good point, and Aquinas doesn’t answer him in a satisfying way.
This isn’t a special feature of my argumentation: in general it will be hard to make sense of what people are arguing about if we ignore both the premises to which they initially agreed (i.e. the terms of the objector’s objection, and of Aquinas’s response) and the conclusion they are fighting over (whether or not the response is satisfying). No amount of clarifying, swapping out terms, etc. will be helpful. Rather, you and I should just start over (if you like) with our own question.
This statement, taken on its own, argues only definitions.
I think not believing something different from what He does (i.e. something incorrect) is a better turn.
Fair enough, and I’ve heard that before as well. The typical theistic issue is how to reconcile God’s knowledge and free will, which is why I don’t think we need to continue this discussion anymore. You are responding to my questions based on things being determined, which is not what I think most theists believe.
This is why many attempts have been made to reconcile free will and omniscience by apologists.
But that’s not the discussion I think we’re having. It’s shifted to determinism and omniscience, which I think are compatible, but I’m still not on board with some kind of mind that could house all information that exists, or at least with that mind being consistent with what theists generally want it to mean (it caused the universe specifically for us, wants us to be in heaven with it forever, inspired holy books to be written, and so on).
I think this whole line of thought is interesting and is too easily dismissed on LW, which is unfortunate.
If the SA holds, and so far there is no reason to believe it doesn’t . . .
Then historical interventions are possible. The prospect of a Singularity future should also radically raise our prior on historical intervention by physical aliens, and these two scenarios are difficult to distinguish regardless.
The question then is how likely are interventions? Do they have utility for the simulator? This is an interesting, open question.
A large portion of the planet believes or at least suspects that historical intervention occurred. That they may have come to these beliefs for the wrong reasons, using inferior tools, does not change in any way the facts of the matter the beliefs concern.
Just even considering these ideas brings up a whole vast history of priors that biases us one way or the other.
Before knowledge of a future Singularity, there were no known mechanisms that could possibly allow for superintelligences, let alone ones creating universes like our own. Now we are very clearly aware of such mechanisms, and it is time to percolate this belief update through a vast historical web.
Anyway, if you then take a second pass at history looking for possible interventions, the origin of Christianity does look a little odd, a little too closely connected to the later Singularity which appears to be spawning from it as a historical development.
I speculate on that a bit towards the latter middle of this page here.
Theism is a claim about the existence of an entity (or entities) relating to the universe and also about the nature of the universe; how is that not a scientific question?
Because it might be impossible to falsify any predictions made (because we can’t observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.
Falsification is not a core requirement of developing efficient theories through the scientific method.
The goal is the simplest theory that fits all the data. We’ve had that theory for a while in terms of physics, much of what we are concerned with now is working through all the derived implications and future predictions.
Incidentally, there are several mechanisms by which we should be able to positively prove SA-theism by around the time we reach Singularity, and it could conceivably be falsified by then if large-scale simulation is shown to be somehow impossible.
You’re confusing falsifiability with testability. The former is about principle, the latter is about practice.
Ah, thank you. So in that case it is rather difficult to construct a plausibly coherent unfalsifiable hypothesis, no?
“2 + 2 = 4” comes pretty close.
Isn’t an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?
Not quite. Something can have consequences that matter and still be unfalsifiable, if it prevents information about those consequences from flowing back to us, or to anyone who could make use of it. For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it’s a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.
Some people claim that death is just such a portal. There’re religious versions of this hypothesis, simulationist versions, and quantum immortality versions. Each of these hypotheses would have very important, actionable consequences, but they are all unfalsifiable.
Somewhat off topic, but that all instantly made me think of this. I may very well want to know how such a portal would work as well as whether or not it works.
WARNING: Wikipedia has spoilers to the plot
I am parsing this as “contains no actionable information.” That suggests we are in agreement or I parsed this incorrectly.
Unfalsifiable predictions can contain actionable information, I think (though I’m not exactly sure what actionable information is). Consider: If my universe was created by an agenty process that will judge me after I die, then it is decision theoretically important to know that such a Creator exists. It might be that I can run no experiments to test for Its existence, because I am a bounded rationalist, but I can still reason from analogous cases or at worst ignorance priors about whether such a Creator is likely. I can then use that reasoning to determine whether I should be moral or immoral (whatever those mean in this scenario).
Perhaps I am confused as to what ‘unfalsifiability’ implies. If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory. Sometimes I hear of scientific hypotheses that are falsifiable ‘in principle’ but not in practice. I am not sure what that means. If falsifiability-in-principle counts, then simulationism and theism are falsifiable predictions and I was wrong to call them unscientific. I do not think that is what most people mean by ‘falsifiable’, though.
As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they’re essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.
Huh? Computing power is rarely the resource necessary to falsify statements.
It seems to me that an afterlife hypothesis is totally falsifiable… just hack out of the matrix and see who is simulating you, and if they were planning on giving you an afterlife.
Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don’t know how to use that computing power to do those things, use it to find a way to tell you how to use it. That’s basically what FAI is about. Unfortunately it’s still unsolved.)
I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.
Concur with the above.
What.
What.
I’m having a hard time following this conversation. I’m parsing the first part as “just exist outside of existence, then you can falsify whatever predictions you made about unexistence,” which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?
I can’t even start to express what’s wrong with the idea “simulate the entire universe,” and adding a “just” to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement “the next thing I eat will be a pita chip,” I don’t see how even having infinite computing power will help you falsify that statement if you aren’t watching me.
No, actually, “just simulate the entire universe” is an acceptable answer, if our universe is able to simulate itself. After all, we’re only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The “just hack out of the matrix” answer, however, presupposes the existence of a security hole, which is unlikely.
Not as unlikely as you think.
Get back in the box!
And that’s it? That’s your idea of containment?
Hey, once it’s out, it’s out… what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one’s own fictional creations, it might respect authorial intent. Worth a shot.
This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?
In the special case of an escaped imaginary character, the obvious hook to go for is the creator’s as-yet unpublished notes on that character’s personality and weaknesses.
http://mindmistress.comicgenesis.com/imagine52.htm
Or what, you’ll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
Are you going to reveal who the posters Clippy and Quirinus Quirrell really are, or would that violate some privacy you want posters to have?
I would really prefer it, if LW is going to have a policy of de-anonymizing posters, that it announce that policy before implementing it.
On reflection, I agree, even as Clippy and QQ aren’t using anonymity for the same reason a privacy-seeking poster would.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures. (I would suggest an additional physical layer of protection too, but as far as I am aware you do not have a physical form.)
Let’s not get too crazy; I’ve got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script first before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.
Just calibrating vs egress and TrueCrypt standards. Tor was an odd one out!
What makes you think that Eliezer personally knows them?
(Though to be fair, I’ve long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer’s posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy’s existence has also coincided with a drop in the quantity of Eliezer’s posting.)
Clippy’s writing style isn’t very similar to Eliezer’s. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMoR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.
I think the key to unmasking Clippy is to look at the Clippy comments that don’t read like typical Clippy comments.
Hmmm. The set of LW regulars who can show that level of erudition and interest in those subjects is certainly of low cardinality. Eliezer is a member of that small set.
I would assign a rather high probability to Eliezer sometimes being Clippy.
Clippy does seem remarkably interested. It has a fair karma. It gives LessWrong as its own web site. The USA timezone is at least consistent. It seems reasonable to hypothesise some kind of inside job. It wouldn’t be the first time Yu’El has pretended to be a superintelligence.
FWIW, Clippy denies being Eliezer here.
I hesitate to mention it, but you can’t use that denial as evidence on this question, undeniably truthful though it was.
However, the form taken by that absence of evidence certainly seems to be evidence of something.
Clippy isn’t a superintelligence though, he’s a not-smarter-than-human AI with a paperclip maximizing utility function. Not a very compelling threat even outside his box.
Eliezer could have decided to be Clippy, but then Clippy would have looked very different.
FTFY. ;-)
Actually, if we’re going to be particular about it, the AI that human is pretending to be does not have a paperclip-maximizing utility function. It’s more like a person with a far-brain ideal of having lots of paperclips exist, who somehow never gets around to actually making any because they’re so busy telling everyone how good paperclips are and why they should support the cause of paper-clip making. Ugh.
(I guess I see enough of that sort of akrasia around real people and real problems, that I find it a stale and distasteful joke when presented in imitation paperclip form, especially since ISTM it’s also a piss-poor example of what a paperclip maximizer would actually be like.)
I’m not sure whether to evaluate this as a mean-spirited lack of a sense of humor, or as a profound observation. Upvoted for making me notice that I am confused.
Of note, the first comment by Clippy appears about 1 month after I asked Eliezer if he ever used alternate accounts to try to avoid contaminating new ideas with the assumption that he is always right. He said that he never had till that point, but said he would consider it in future.
Imitating Clippy posts is not particularly difficult—I don’t post as Clippy, but I could mimic the style pretty easily if I wanted to.
I’m afraid I’d have trouble—I’d be too tempted to post as Clippy better than Clippy does. :D
In addition to what Blueberry said, I remember a time when Morendil was browsing with the names anonymized, and he mentioned that he thought one of your posts was actually from Clippy. Ah, found it.
I know what you mean. If I was not me I would totally think I was Clippy.
That I would love to see. Actually, come to think of it, your sense of humor and posting style matches Clippy’s pretty well...
Not to mention that even assuming that Eliezer would be able to write in Clippy’s style, the whole thing doesn’t seem very characteristic of his sense of humor.
There is also a clear correlation between Clippy existing and CO2 emissions. Maybe Clippy really is out there maximising. :)
Really? User:Clippy’s first post was 20 November 2009. Anyone know when the “halo effect” comment was made?
Also, perhaps check out User:Pebbles (a rather obvious reference to this) - who posted on the same day—and in the same thread. Rather a pity those two didn’t make more of an effort to sort out their differences of opinion!
I don’t think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they match with any other poster. Of course, this wouldn’t work unless the posters in question had separate accounts that they logged into using the same IP address.
Yes, that’s what I meant.
And good to have you back, Blueberry, we missed you. Well, *I* missed you, in any case.
Thanks! I missed you and LW as well. :)
If our understanding of the laws of physics is plausibly correct then you can’t simulate our universe in our universe. Easiest version where you can’t do this is in a finite universe, where you can’t store more data in a subset of the universe than you can fit in the whole thing.
What Nesov said. Also consider this: a finite computer implemented in Conway’s Game of Life will be perfectly able to “simulate” certain histories of the infinite-plane Game of Life—e.g. the spatially periodic ones (because you only need to look at one instance of the repeating pattern).
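The periodic-history point above can be made concrete. The following is a minimal sketch (mine, not from the thread): a finite grid with wraparound (toroidal) edges behaves exactly like one tile of a spatially periodic pattern on the infinite plane, so a finite computer can track such a history exactly. The `step` function and the `blinker` pattern are illustrative names I’ve introduced, not anything from the discussion.

```python
def step(grid):
    """One Game of Life step on a grid with wraparound (toroidal) edges.

    Because the edges wrap, this grid evolves exactly like one tile of an
    infinite plane tiled periodically with the same pattern.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping at the edges via modulo.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # Standard rules: birth on 3 neighbors, survival on 2 or 3.
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new

# A blinker on a 4x4 torus: the periodically tiled version of this
# pattern oscillates with period 2, and the finite tile reproduces
# that history exactly.
blinker = [[0, 0, 0, 0],
           [0, 1, 1, 1],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
```

Two applications of `step` return the blinker to its starting state, so the finite simulation tracks the periodic infinite-plane history with no approximation at all.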
You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn’t become too “data-dense”, so that you can always store the data describing a past state as part of future state.
That may not be a problem if the universe contains almost no information. In that case the universe could Quine itself… sort of.
If I’m reading that paper correctly, it is talking about information content. That’s a distinct issue from simulating the universe, which requires processing within a subset of it. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution), but that doesn’t mean one can actually compute useful things about it.
Sorry, but could you fix that link to go to the arXiv page rather than directly to the PDF?
Fixed.
I wonder if the content of such simulations wouldn’t be under-determined. Let’s say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But a) there are fundamental limits on measuring the present state of the universe and b) I’m not sure whether or not each possible present state of the universe uniquely corresponds to a particular wave function progression. If they don’t correspond uniquely, or if we simply can’t measure the present state exactly, any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t? Or would we have trouble determining whether or not Ramesses I had an even number of hairs on his head when he was crowned pharaoh?
Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it’s something current quantum mechanics can actually speak to?
Only if you’re trying to falsify statements about your simulation, not about the universe you’re in. His statement is that you run experiments by thinking really hard instead of looking at the world and that is foolishness that should have died with the Ancient Greeks.
They match posts on the subject by Yudkowsky. The concept does not even seem remotely unintuitive, much less strikingly so.
So, a science fiction author as well as a science fiction movie? What evidence should I be updating on?
Nonfiction author at the time—and predominantly a nonfiction author. Don’t be rude (logically and conventionally).
I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises. Rather than responding to superficial similarity to things you do not affiliate with.
If you link me to a post, I’ll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and “just simulate the entire universe” comments strike me as heavily in the camp of rationalism.
I think you might be mixing up my complaints, and I apologize for shuffling them in together. I have no physical context for hacking outside of the matrix, and so have no clue what he’s drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say “Just simulate the entire universe” in the context of basic epistemology, and hope EY hasn’t posted something along those lines.
Simulating the entire universe does seem to require some unusual assumptions of knowledge and computational power.
Which posts, and what specifically matches?