If I am a number in a calculation, I privilege the simulation I am in above all others. I expect residents of all other simulations to privilege their own simulation above all others.
Being made of carbon chains isn’t relevant; being made of matter instead of information or an abstraction is important, and even if there exists a reference point from which my matter is abstract information, I, the abstract information, intrinsically value my flavor of abstraction more than any other reference. (There is an instrumental value to manipulating the upstream contexts, however.)
I can’t understand the lack of local-universe privilege.
Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read “World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR” instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.
Now, suppose that a golden tablet appeared before me explicitly stating that Omega has threatened the world which created our simulation. However, we, the simulation, are able to alter the terms of this threat. If a selected resident (me) of Sim-Earth decides to destroy Sim-Earth, Meta-1 Earth will suffer no consequences other than one instance of an obsolete version of one of their simulations crashing. If I refuse, then Omega will roll a fair d6, and on a result of 3 or higher will destroy Meta-1 Earth, along with all of their simulations including mine.
Which is the consequentialist thing to do? (I dodge the question by not being consequentialist; I am not responsible for Omega’s actions, even if Omega tells me how to influence him. I am responsible for my own actions.)
To prefer a 60% chance of the destruction of more than two existences to the certainty of the extinction of humanity in one of them is an interesting position.
Clearly, however, such a preference either incurs local privilege, or it should be just as logical to prefer the 60% destruction of more than everything over the certain destruction of a different simulation, one that would never have interaction with the one that the agent experiences.
To prefer a 60% chance of the destruction of more than two existences to the certainty of the extinction of humanity in one of them is an interesting position.
Yes, far from inconceivable and perhaps even held coherently by a majority of humans, but certainly different to mine. I have decidedly different preferences; in certain cases it’s less than that. If I found I was in certain kinds of simulations I’d value my own existence either less or not at all.
Clearly, however, such a preference either incurs local privilege
Yes, it would (assuming I understand correctly what you mean by that).
I hadn’t considered the angle that the simulation might be run by an actively hostile entity; in that case, destroying the hostile entity (ending the simulation) is the practical thing to do at the top layer, and also the desired result in the simulation (end of simulation rather than torture).
Just to make sure I understand, let me restate your scenario: there’s a world (“Meta-1 Earth”) which contains a simulation (“Sim-Earth”), and I get to choose whether to destroy Sim-Earth or not. If I refuse, there’s a 50% chance of both Sim-Earth and Meta-1 Earth being destroyed. Right?
So, the consequentialist thing to do is compare the value of Sim-Earth (V1) to the value of Meta-1 Earth (V2), and destroy Sim-Earth iff V2/2 > V1.
You haven’t said much about Meta-1 Earth, but just to pick an easily calculated hypothetical, if Omega further informs me that there are ten other copies of World sim version 7.00.1.5 build 11/11/11 running on machines in Meta-1 Earth (not identical to Sim-Earth, because there’s some randomness built into the sim, but roughly equivalent), I would conclude that destroying Sim-Earth is the right thing to do if everything is as Omega has represented it.
I might not actually do that, in the same way that I might not kill myself to save ten other people, or even give up my morning latte to save ten other people, but that’s a different question.
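For concreteness, here is a minimal sketch of the expected-value comparison described above, using the scenario’s stated d6 odds (4/6 = 2/3) and the hypothetical ten other roughly equivalent sims; the function name and the values assigned to Sim-Earth and Meta-1 Earth are placeholders for illustration, not figures given in the thread.

```python
# Toy comparison of the two options in the Sim-Earth dilemma.
# Assumptions: Omega destroys Meta-1 Earth (and every sim it runs) on a
# d6 roll of 3 or higher, and the ten other running sims are each roughly
# as valuable as Sim-Earth. All values are placeholders.

P_DESTROY = 4 / 6  # probability of rolling 3 or higher on a fair d6

def expected_loss(v_sim: float, v_meta: float, other_sims: int = 10):
    """Return (certain loss if Sim-Earth is destroyed, expected loss if I refuse)."""
    destroy = v_sim                                            # only Sim-Earth crashes
    refuse = P_DESTROY * (v_meta + v_sim * (1 + other_sims))   # everything is lost with prob. 2/3
    return destroy, refuse

destroy, refuse = expected_loss(v_sim=1.0, v_meta=1.0)
print(f"certain loss if Sim-Earth is destroyed: {destroy:.2f}")
print(f"expected loss if the threat is refused: {refuse:.2f}")
# With these placeholder numbers, refusing loses far more value in
# expectation, matching the conclusion drawn above.
```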
Subtle distinctions. We have no knowledge about Meta-1 Earth. We only have the types of highly persuasive but technically circumstantial evidence provided; Omega exists in this scenario and is known by name, but he is silent on the question of whether the inscription on the massive solid gold tablet is truthful. The doomsday button is known to be real.
What would evidence regarding the existence of M1E look like?
(Also: 4/6 chance of a 3 or higher. I don’t think the exact odds are critical.)
Well, if there are grounds for confidence that the button destroys the world, but no grounds for confidence in anything about the Meta-1 Earth stuff, then a sensible decision theory chooses not to press the button.
(Oh, right. I can do basic mathematics, honest! I just can’t read. :-( )
“Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read “World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR” instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.”
I was content to accept that supposition, not so much because I think I would necessarily be convinced of it by experiencing that, as because it seems plausible enough for a thought experiment and I didn’t want to fight the hypothetical.
But now it sounds like you’ve changed the question completely? Or am I deeply confused? In any case, I’ve lost the thread of whatever point you’re making.
Anyway, to answer your question, I’m not sure what would be compelling evidence for or against being in a simulation per se. For example, I can imagine discovering that physical constants encode a complex message under a plausible reading frame, and “I’m in a simulation” is one of the theories which accounts for that, but not the only one. I’m not sure how I would disambiguate “I’m in a simulation” from “there exists an intelligent entity with the power to edit physical constants” from “there exists an intelligent entity with the power to edit the reported results of measurements of physical constants.” Mostly, I would have to accept I was confused and start rethinking everything I used to believe about the universe.
Here’s a better way of looking at the problem: Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?
Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?
How can moral imperatives point towards things which are existence-agnostic?
Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?
We may need to further define “realize.” Supposing that it is possible to run a simulation which is indistinguishable from reality in the first place, it’s certainly possible for something which develops within the simulation to believe it is in a simulation, just like it’s possible for people in reality to do so.
Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?
Within a simulation that is indistinguishable from reality, it is of course not possible for a resident to distinguish the simulation from reality.
How can moral imperatives point towards things which are existence-agnostic?
I have no idea what this question means. Can you give me some examples of proposed moral imperatives that are existence-agnostic?
A moral imperative which references something which may or may not be exemplified; it doesn’t change if that which it references does not exist.
“Maximize the density of the æther.” is such an imperative.
“Include God when maximizing total utility.” is the version I think you are using (with ‘God’ being the creator of the simulation; I think that the use of the religious referent is appropriate because they have the same properties.)
So, if I’m understanding you: when my father was alive, I endorsed “Don’t kill your father.” When he died I continued to endorse it just as I had before. That makes “Don’t kill your father” a moral imperative which points towards something existence-agnostic, on your account… yes?
I have no idea what you’re on about by bringing God into this.
“Maximize the amount of gyration and gimbling of slithy toves” would be a better example.
I’m using God as a shorthand for the people running the simulation. I’m not introducing anything from religion but the name for something with that power.
I don’t think a moral imperative can meaningfully include a meaningless term. I do think a moral imperative can meaningfully include a meaningful term whose referent doesn’t currently exist in the world.
Also, it can be meaningful to make a moral assertion that depends on an epistemically unreachable state. For example, if I believe (for whatever reason) that I’ve been poisoned and that the pill in my hand contains an antidote, but in fact I haven’t been poisoned and the pill is poison, taking the pill is in fact the wrong thing to do, even though I can’t know that.
I prefer to have knowable morality: I must make decisions not with information about the world, but only with my beliefs.
For example, it is wrong to pull the trigger of a gun aimed at an innocent person without knowing if it is loaded. The expected outcome is what matters, not the actual outcome.
I prefer to have knowable morality: I must make decisions not with information about the world, but only with my beliefs.
Well, I certainly agree that we make decisions based on our beliefs (I would also say that our beliefs are, or at least can be, based on information about the world, but I understand you here to be saying that we must make decisions without perfect information about the world, which I agree with).
That said, I think you are eliding morality and decision procedures, which I think elides an important distinction.
For example, if at time T1 the preponderance of the evidence I have indicates the pill is an antidote, and at some later time T2 the preponderance of the evidence indicates that the pill is poison, a sensible decision theory says (at T1) to take the pill and (at T2) not to take the pill.
But to say that taking the pill is morally right at T1 and not-taking the pill is morally right at T2 seems no more justified to me than to say that the pill really is an antidote at T1 and is poison at T2. That just isn’t the case, and a morality or an ontology that says it is the case is simply mistaken. The pill is always poison, and taking the pill is therefore the wrong thing to do, whether I know it or not.
I guess you could say that I prefer that my morality, like my ontology, be consistent rather than knowable.
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Suppose I have two buttons, one red and one green. I know that one of those buttons (call it “G”) creates high positive utility and the other (“B”) creates high negative utility. I don’t know whether G is red and B green, or the other way around.
On your account, if I understand you correctly, to say “pressing G is the right thing to do” is meaningless, because I can’t know which button is G. Pressing G, pressing B, and pressing neither are equally good acts on your account, even though one of them creates high positive utility and the other creates high negative utility. Is that right?
On my account, I would say that the choice between red and green is a question of decision theory, and the choice between G and B is a question of morality. Pressing G is the right thing to do, but I don’t know how to do it.
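A minimal sketch of the decision-theory side of this two-button case, assuming (purely for illustration) symmetric stakes of +U and -U and complete ignorance of which colour is G: under that uncertainty every available act has the same expected utility, which is why the red/green choice is treated here as a decision-theory question while the G/B distinction is treated as the moral one.

```python
# Toy model of the red/green button case. U and the 50/50 prior are
# illustrative assumptions, not part of the original example.

U = 100.0            # magnitude of the utility at stake
P_GREEN_IS_G = 0.5   # zero information about which colour is the good button

def expected_utility(action: str) -> float:
    """Expected utility of an action under uncertainty about the colour/G mapping."""
    if action == "press_red":
        return P_GREEN_IS_G * (-U) + (1 - P_GREEN_IS_G) * U
    if action == "press_green":
        return P_GREEN_IS_G * U + (1 - P_GREEN_IS_G) * (-U)
    return 0.0  # press neither

for action in ("press_red", "press_green", "press_neither"):
    print(action, expected_utility(action))
# All three come out to 0: decision theory cannot separate them here, even
# though pressing G (whichever colour that is) is stipulated to create high
# positive utility.
```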
‘Pressing a button’ is one act, and ‘pressing both buttons’ and ‘pressing neither button’ are two others. If you press a button randomly, it isn’t morally relevant which random choice you made.
What does it mean to choose between G and B, when you have zero relevant information?
(shrug) It means that I do something that either causes G to be pressed, or causes B to be pressed. It means that the future I experience goes one way or another as a consequence of my act.
I have trouble believing that this is unclear; I feel at this point that you’re asking rhetorical questions by way of trying to express your incredulity rather than to genuinely extract new knowledge. Either way, I think we’ve gotten as far as we’re going to get here; we’re just going in circles.
I prefer a moral system in which the moral value of an act relative to a set of values is consistent over time, and I accept that this means it’s possible for there to be a right thing to do even when I don’t happen to have any way of knowing what the right thing to do is… that it’s possible to do something wrong out of ignorance. I understand you reject such a system, and that’s fine; I’m not trying to convince you to adopt it.
I’m not sure there’s anything more for us to say on the subject.
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
Well, it’s not nonsense, but it’s imprecise.
One thing that can mean is that the action had a net positive result globally, but negative results in various local frames. I assume that’s not what you mean here though, you mean had a bad outcome overall.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that’s what you mean here.
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Again, the language is ambiguous:
Moral “should”—yes, I should assist iff my assistance will be successful (assuming that saving the person’s life is a good thing).
Decision-theory “should”—I should assist if the expected value of my assistance is sufficiently high.
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
Assuming that winning the bet is moral, then betting irresponsibly was the morally right thing to do, though I could not have known that, and it was therefore an incorrect decision to make with the data I had.
Is it immoral to refuse an irresponsible bet that would have paid off?
Same reasoning.
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that’s what you mean here.
Again, not quite. It’s possible for someone to accurately determine the expected results of a decision, but the actual results to vary significantly from the expected. Take a typical parimutuel gambling-for-cash scenario; the expected outcome is typically that the house gets a little richer, and all of the gamblers get a little poorer. That outcome literally never happens, according to the rules of the game.
Because my simulation (if I am in one) includes all of my existence. Meanwhile, a simulation run inside this existence contains only mathematical constructs or the equivalent.
Surely you don’t think that your mental model of me deserves to have its desires considered in addition to mine? You use that model of me to estimate what I value, which enters into your utility function. To also include the model’s point of view is double-counting the map.
My “mental model of you” consists of little more than a list of beliefs, which I then have my brain pretend it believes. In your case, it is woefully incomplete; but even the most detailed of those models are little more than characters I play to help predict how people would really respond to them. My brain lacks the knowledge and computing power to model people on the level of neurons or atoms, and if it had such power I would refuse to use it (at least for predictive purposes.)
OTOH, I don’t see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don’t have qualia? Do you think they don’t have souls? Do you think they are exactly the same as you, but don’t care?
I don’t see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don’t have qualia? Do you think they don’t have souls? Do you think they are exactly the same as you, but don’t care?
Ok, entities which exist only in simulation aren’t conscious. (or, if I am in a simulation, there is some characteristic which I lack which makes me irrelevant to the upstream entity.)
That seems to be a pretty clear answer to your questions.
What is this mysterious characteristic? How do you know about it? Is it possible to create a sim that does have it? Why should I care if someone has this characteristic, if they act just as intelligent and conscious as you do, and inspection of the source code reveals that you do so for the same reasons?
You were able to claim that one set of simulated individuals wasn’t conscious, but didn’t say how others were different.
What does it mean to inspect the source code of the universe, or of the simulation from within the simulation?
And the core difference seems to be that I don’t think simulated people can be harmed in the same sense that physical people can be harmed, and you disagree. Is that an apt summary as you see it?
You were able to claim that one set of simulated individuals wasn’t conscious, but didn’t say how others were different.
Eh? A dorf is just a few lines of code. If you built a robot with the same thought process, it wouldn’t be conscious either.
What does it mean to inspect the source code of the universe, or of the simulation from within the simulation?
No, you inspect the source code of the simulation, and check it does things for the same reason as the physical version.
Although there’s no reason, in theory, why a sim couldn’t read its own source code, so I’m not sure I understand your objection.
And the core difference seems to be that I don’t think simulated people can be harmed in the same sense that physical people can be harmed, and you disagree. Is that an apt summary as you see it?
You claimed to know (a priori?) that any level of simulation below your own would be different in such a way that we shouldn’t care about their suffering. You refused to state what this difference was, how it came about, or indeed answer any of my questions, so I don’t know how much we disagree. I don’t know what your position is at all.
A matrix lord copies you. Both copies are in the same layer of simulation you currently occupy. Is the copy a person? Is it you?
A matrix lord copies your friend, Bob. Is the copy still Bob?
A matrix lord copies you. The copy is in another simulation, but one no “deeper” than this one. Is the copy a person? Is it you?
A matrix lord copies you. The copy is one layer “deeper” than this one. Is the copy a person? Is it you?
A matrix lord copies your friend Bob. Is the copy a person? Is it Bob?
A matrix lord copied you, without your knowledge. You are one layer deeper than the original. Are you a person? Are you still “you”?
You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
A matrix lord scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
A matrix lord copies your brain. The copy is one layer deeper than the original. They connect a robot in your original layer to the simulation. Is the result a person? Is it you?
A matrix lord tortures you for a thousand years. Is this wrong, in your estimation?
A matrix lord tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
A matrix lord tortures you for a thousand years, then resets the program to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
A matrix lord tortures you for a thousand years, then resets the program. Is this wrong in your estimation?
How many lines of code are required for a sim to be conscious? “a few” is too few, and “an entire universe” is too few if the universe is too different from our own. I say no amount is adequate.
A perfect copy that appears by magic begins identical with the original, but the descendant of the copy is not identical with the descendant of the original.
It’s possible to have a universal Turing machine without having the code which, when run on that machine, runs the universal Turing machine. If there exists more than one UTM, it is impossible by looking at the output to tell which one is running a given program. Similarly, examining the source of a simulation also requires knowing the physics of the world in which the simulation runs.
For all of your torture suggestions, clarify. Do you mean “a matrix lord edits a simulation of X to be torture. Does this act violate the morality of the simulation?”? It’s currently unclear where the victim is in relation to both the torturer and the judgement.
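A toy illustration of the point above about multiple UTMs: the two “interpreters” below are invented stand-ins, far simpler than universal machines, but they show that the same observed output can be produced by different (machine, program) pairs, so output alone does not tell a resident which machine is running it.

```python
# Two made-up interpreters with different program encodings.

def machine_a(program: str) -> str:
    """Interpreter A treats the program as literal text to emit."""
    return program

def machine_b(program: str) -> str:
    """Interpreter B treats the program as text to reverse before emitting."""
    return program[::-1]

observed = "hello world"

# Each machine has a program whose output is exactly the observed string.
assert machine_a("hello world") == observed
assert machine_b("dlrow olleh") == observed

print(machine_a("hello world"))  # hello world
print(machine_b("dlrow olleh"))  # hello world (same output, different machine and program)
```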
How many lines of code are required for a sim to be conscious?
Funny.
A perfect copy that appears by magic begins identical with the original, but the descendant of the copy is not identical with the descendant of the original.
Even if the copy is a sim?
Similarly, examining the source of a simulation also requires knowing the physics of the world in which the simulation runs.
Um, yes. Hence “in theory”.
For all of your torture suggestions, clarify. Do you mean “a matrix lord edits a simulation of X to be torture. Does this act violate the morality of the simulation?”? It’s currently unclear where the victim is in relation to both the torturer and the judgement.
Torture means inflicting large amounts of pain. The precise method may be assumed to be one that does not interfere with the question (e.g. they haven’t been turned into anti-orgasmium.)
Where questions touch on morality, I stated whose morality was being referred to. Bear in mind that different questions may ask different things. The same applies to “where the victim is in relation to both the torturer and the judgement.”
Should I conclude that ‘a matrix lord’ is altering a simulation, and I, the judge, am in the next cubicle? If so, he is doing amoral math and nobody cares. The simulation might simulate caring, but that can be modified out without terminating the simulation, because it isn’t real.
There is a difference between simulating something and doing it, regardless of the accuracy of the simulation.
Confirm that you don’t think a Turing machine can be or contain consciousness?
Should I conclude that ‘a matrix lord’ is altering a simulation, and I, the judge, am in the next cubicle?
A matrix lord refers to the (one of) simulator(s) who control this layer. In other words, they are one layer higher than you, and basically omnipotent from our perspective.
If so, he is doing amoral math and nobody cares. The simulation might simulate caring, but that can be modified out without terminating the simulation, because it isn’t real.
A superintelligence could modify you so you stop caring. I’m guessing you wouldn’t be OK with them torturing you?
There is a difference between simulating something and doing it, regardless of the accuracy of the simulation.
What difference?
Confirm that you don’t think a Turing machine can be or contain consciousness?
Why … why would I think that? I’m the one defending sim rights, remember?
A matrix lord refers to the (one of) simulator(s) who control this layer. In other words, they are one layer higher than you, and basically omnipotent from our perspective.
In other words, the matrix lord IS the laws of physics. They exist beyond judgement from this layer.
What difference?
Actual things are made out of something besides information; there is a sense in which concrete things exist and abstract things (like simulations) don’t exist.
Why … why would I think that? I’m the one defending sim rights, remember?
Because that position requires that either a set of numbers or every universal Turing machine is conscious and capable of experiencing harm. Plus if a simulation can be conscious you need to describe a difference between a conscious sim and a dorf. Both of them are mathematical constructs, so your original objection is invalid. Dorfs have souls, noted as such in the code; how are the souls of sims qualitatively different?
I don’t claim to have a perfect nonperson predicate; I’m attacking yours as excluding entities that are clearly conscious.
In other words, the matrix lord IS the laws of physics.
Well, they can manipulate them. I’ll specify that they are roughly equivalent to a sim of a human at the same level, if that helps.
They exist beyond judgement from this layer.
Could you just go down the list and answer the questions?
Actual things are made out of something besides information; there is a sense in which concrete things exist and abstract things (like simulations) don’t exist.
Is there? Really? A sim isn’t floating in platonic space, you know.
Because that position requires that either a set of numbers or every universal Turing machine is conscious and capable of experiencing harm.
A set of numbers can’t be conscious. A set of numbers interpreted as computer code and run can be. Or, for that matter, interpreted as genetic code and cloned.
Plus if a simulation can be conscious you need to describe a difference between a conscious sim and a dorf.
As noted above, I don’t claim to have a perfect nonperson predicate. However, since you ask, a sim is doing everything the original (who I believe was probably as conscious as I am based on their actions and brainscans) was: when they see a red ball, virtual neurons light up in the same patterns the real one did; when they talk about experiencing qualia of a red ball, the same thoughts run through their mind, and if I was smart enough to decode them from real neurons I could decode them from virtual ones too.
Both of them are mathematical constructs, so your original objection is invalid.
And both humans and rocks (or insects) are physical constructs. My objection is not that it is a mathematical construct, but that it is one too simple to support the complexity of conscious thought.
Dorfs have souls, noted as such in the code; how are the souls of sims qualitatively different?
Eh? Writing “soul” on something does not a person make. Writing “hot” on something does not a fire make, either.
The laws of physics copies you. Both copies are next to you. Is the copy a person? Is it you?
It is a person, and it is as much me as I am; it has a descendant one tick later which is as much me as the descendant of the other copy. Here we hit the ship problem.
The laws of physics copies your friend, Bob. The copy is next to Bob. Is the copy still Bob?
The descendant of the copy is as much Bob as the descendant of the original.
The laws of physics copies you. The copy exists in a universe with different rules. Is the copy a person? Is it you?
That depends on the rules of the simulation in which the copy exists; assuming it is only the starting condition which differs, the two are indistinguishable.
Duplicate
Duplicate
You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
If ‘person’ is understood to mean ‘sentient’, then I conclude that the robot is a person. If ‘person’ is understood to mean ‘human’, then I conclude that the robot is a robot.
The laws of physics scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
Assuming that the premise is possible; assuming that life-support is also maintained identically (the brain of the robot has identical blood flow through it, which requires that the robot brain have a physically identical structure); the robot is as sentient as I am and its decisions are defined to be identical to mine. It is not as much me as the direct descendant of me is.
The laws of physics causes a simulating machine with a copy of your brain to appear. They connect a robot to the simulation. Is the result a person? Is it you?
Assuming that the robot uses a perfect simulation of a brain instead of a real one, it is as sentient as if it were using a brain. It is not identical with the previous robot nor with me.
The laws of physics tortures you for a thousand years. Is this wrong, in your estimation?
Undesirable. Since the matrix lord does not make decisions in any context I am aware of, it can’t be wrong.
The laws of physics tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
Ditto
The laws of physics tortures you for a thousand years, then the universe returns to a state identical to the state prior to the torture. Is the result the same person who was tortured? Is the result the same person as before the torture?
‘Same’ has lost meaning in this context.
The laws of physics tortures you for a thousand years, then the universe returns to a state identical to the state prior to the torture. Is this wrong in your estimation?
I can’t tell the difference between this case and any contrary case; either way, I observe a universe in which I have not yet been tortured.
It is not always meaningful to refer to ‘human’ when referencing a different level. What is a matrix lord, and how do I tell the difference between a matrix lord and physics?
Well, a matrix lord can talk, assume a human shape, respond to verbal requests etc. as well as modify the current laws of physics (including stuff like conjuring a seat out of thin air, which is more like using physical law that was clearly engineered for their benefit).
However, the questions are meant to be considered in the abstract; please assume you know with certainty that this occurred, for simplicity.
A matrix lord copies you. Both copies are in the same layer of simulation you currently occupy. Is the copy a person? Is it you?
A matrix lord copies your friend, Bob. Is the copy still Bob?
A matrix lord copies you. The copy is in another simulation, but one no “deeper” than this one. Is the copy a person? Is it you?
A matrix lord copies you. The copy is one layer “deeper” than this one. Is the copy a person? Is it you?
A matrix lord copies your friend Bob. Is the copy a person? Is it Bob?
A matrix lord copied you, without your knowledge. You are one layer deeper than the original. Are you a person? Are you still “you”?
You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
A matrix lord scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
A matrix lord copies your brain. The copy is one layer deeper than the original. They connect a robot in your original layer to the simulation. Is the result a person? Is it you?
A matrix lord tortures you for a thousand years. Is this wrong, in your estimation?
A matrix lord tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
A matrix lord tortures you for a thousand years, then resets the program to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
A matrix lord tortures you for a thousand years, then resets the program. Is this wrong in your estimation?
The matrix lord can cause a person to poof into (or out of) existence, but the person so created is not a matrix lord. If the matrix lord is communicating to me (for example, by editing the air density in the room to cause me to hear spoken words, or by editing my brain so that I hear the words, or by editing my brain so that I believe I heard them), the edits used by the lord are different from him.
I don’t see what the distinction is between “Objects have now accelerated toward each other by an amount proportional to the product of their masses divided by the cube of the distance between them” and “There is now a chair here.” Both are equally meaningful as ‘physical law’.
Fair enough. Your evidence that the Matrix Lord exists is probably laws of physics being changed in ways that appear to be the work of intelligence, and conveying information claiming to be from a Matrix Lord.
Or they could have edited your brain to think so; the point is that you are reasonably certain that the events described in the question actually happened.
How about if it’s Omega, and you’re real as far as you can tell:
Omega duplicates you. Is the copy a person? Is it you?
Omega duplicates your friend, Bob. Is the copy still Bob?
Omega simulates you. Is this sim a person? Is it you?
Omega duplicates your friend Bob. Is the copy a person? Is it Bob?
Omega copied you, without your knowledge. You are actually in a simulation. Are you a person? Are you still “you”?
You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
Omega scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
Omega scans your brain. He then simulates it. Then he connects a (real) robot to the simulation. Is the result a person? Is it you?
Omega tortures you for a thousand years. Is this wrong, in your estimation?
Omega tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
Omega tortures you for a thousand years, then “resets” you with nanotech to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
Omega tortures you for a thousand years, then “resets” you with nanotech. Is this wrong in your estimation?
And a new one, to balance out the question that required you to be in a sim:
Omega scans you and simulates you. The simulation tells you that it’s still conscious, experiences qualia etc. and admits this seems to contradict its position on the ethics of simulations. Do you change your mind on anything?
Define “duplicates”, “original”, and “same” well enough to answer the Ship of Theseus problem.
Can I summarize the last question to “Omega writes a computer program which outputs ‘I am conscious, experience qualia, etc. and this contradicts my position on the ethics of simulations’”?
If not, what additional aspects need be included? If so, the simulation is imperfect because I do not believe that such a contradiction would be indicated.
Can I summarize the last question to “Omega writes a computer program which outputs ‘I am conscious, experience qualia, etc. and this contradicts my position on the ethics of simulations’”?
Well, it talks to you first. IDK what you would talk about with a perfect copy of yourself, but it says what you would expect an actual conscious copy to say (because it’s a perfect simulation.)
If so, the simulation is imperfect because I do not believe that such a contradiction would be indicated.
You don’t think finding yourself as a conscious sim would indicate sims are conscious? Because I assumed that’s what you meant by
So, I am reasonably certain that I am (part of?) a number which is being processed by an algorithm.
That breaks all of my moral values, and I have to start again from scratch.
Well, it talks to you first. IDK what you would talk about with a perfect copy of yourself, but it says what you would expect an actual conscious copy to say (because it’s a perfect simulation.)
So, it passes the Turing test, as I adjudicate it? It’s a simulation of me which sits at a computer and engages with me over the internet?
When I tell you that you are the copy of me, and prove it without significantly changing the conditions of the simulation or breaking the laws of physics, I predict that you will change your position. Promptly remove the nearest deck of cards from the pack, and throw it against the ceiling fairly hard. Only all of the black cards will land face up.
You don’t think finding yourself as a conscious sim would indicate sims are conscious?
When I recognize that numbers in general are conscious entities which experience all things simultaneously (proof: consider the set of all universal Turing machines; select a UTM which takes this number as input and simulates a world with some set of arbitrary conditions), I stop caring about conscious entities and reevaluate what is and is not an agent.
If I am a number in a calculation, I privilege the simulation I am in above all others. I expect residents of all other simulations to privilege their own simulation above all others.
Being made of carbon chains isn’t relevant; being made of matter instead of information or an abstraction is important, and even if there exists a reference point from which my matter is abstract information, I, the abstract information, intrinsically value my flavor of abstraction more than any other reference. (There is an instrumental value to manipulating the upstream contexts, however.)
Ah, OK. Sure, I can understand local-context privileging. Thanks for clarifying.
I can’t understand the lack of local-universe privilege.
Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read “World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR” instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.
Now, suppose that a golden tablet appeared before me explicitly stating that Omega has threatened the world which created our simulation. However, we, the simulation, are able to alter the terms of this threat. If a selected resident (me) of Sim-Earth decides to destroy Sim-Earth, Meta-1 Earth will suffer no consequences other than one instance of an obsolete version of one of their simulations crashing. If I refuse, then Omega will roll a fair d6, and on a result of 3 or higher will destroy Meta-1 Earth, along with all of their simulations including mine.
Which is the consequentialist thing to do? (I dodge the question by not being consequentialist; I am not responsible for Omega’s actions, even if Omega tells me how to influence him. I am responsible for my own actions.)
Undefined. Legitimate and plausible consequentialist value systems can be conceived that go either way.
To prefer a 60% chance of the destruction of more than two existences to the certainty of the extinction of humanity in one of them is an interesting position.
Clearly, however, such a preference either incurs local privilege, or it should be just as logical to prefer the 60% destruction of more than everything over the certain destruction of a different simulation, one that would never have interaction with the one that the agent experiences.
Yes, far from inconceivable and perhaps even held coherently by a majority of humans, but certainly different to mine. I have decidedly different preferences; in certain cases it’s less than that. If I found I was in certain kinds of simulations I’d value my own existence either less or not at all.
Yes, it would (assuming I understand correctly what you mean by that).
I hadn’t considered the angle that the simulation might be run by an actively hostile entity; in that case, destroying the hostile entity (ending the simulation) is the practical thing to do at the top layer, and also the desired result in the simulation (end of simulation rather than torture).
Just to make sure I understand, let me restate your scenario: there’s a world (“Meta-1 Earth”) which contains a simulation (“Sim-Earth”), and I get to choose whether to destroy Sim-Earth or not. If I refuse, there’s a 50% chance of both Sim-Earth and Meta-1 Earth being destroyed. Right?
So, the consequentialist thing to do is compare the value of Sim-Earth (V1) to the value of Meta-1 Earth (V2), and destroy Sim-Earth iff V2/2 > V1.
You haven’t said much about Meta-1 Earth, but just to pick an easily calculated hypothetical, if Omega further informs me that there are ten other copies of World sim version 7.00.1.5 build 11/11/11 running on machines in Meta-1 Earth (not identical to Sim-Earth, because there’s some randomness built into the sim, but roughly equivalent), I would conclude that destroying Sim-Earth is the right thing to do if everything is as Omega has represented it.
I might not actually do that, in the same way that I might not kill myself to save ten other people, or even give up my morning latte to save ten other people, but that’s a different question.
Subtle distinctions. We have no knowledge about Meta-1 Earth. We only have the types of highly persuasive but technically circumstantial evidence provided; Omega exists in this scenario and is known by name, but he is silent on the question of whether the inscription on the massive solid gold tablet is truthful. The doomsday button is known to be real.
What would evidence regarding the existence of M1E look like?
(Also: 4/6 chance of a 3 or higher. I don’t think the exact odds are critical.)
Well, if there are grounds for confidence that the button destroys the world, but no grounds for confidence in anything about the Meta-1 Earth stuff, then a sensible decision theory chooses not to press the button.
(Oh, right. I can do basic mathematics, honest! I just can’t read. :-( )
What would evidence for or against being in a simulation look like?
I’m really puzzled by this question.
You started out by saying:
I was content to accept that supposition, not so much because I think I would necessarily be convinced of it by experiencing that, as because it seems plausible enough for a thought experiment and I didn’t want to fight the hypothetical.
But now it sounds like you’ve changed the question completely? Or am I deeply confused? In any case, I’ve lost the thread of whatever point you’re making.
Anyway, to answer your question, I’m not sure what would be compelling evidence for or against being in a simulation per se. For example, I can imagine discovering that physical constants encode a complex message under a plausible reading frame, and “I’m in a simulation” is one of the theories which accounts for that, but not the only one. I’m not sure how I would disambiguate “I’m in a simulation” from “there exists an intelligent entity with the power to edit physical constants” from “there exists an intelligent entity with the power to edit the reported results of measurements of physical constants.” Mostly, I would have to accept I was confused and start rethinking everything I used to believe about the universe.
Here’s a better way of looking at the problem: Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?
Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?
How can moral imperatives point towards things which are existence-agnostic?
We may need to further define “realize.” Supposing that it is possible to run a simulation which is indistinguishable from reality in the first place, it’s certainly possible for something which develops within the simulation to believe it is in a simulation, just like it’s possible for people in reality to do so.
Within a simulation that is indistinguishable from reality, it is of course not possible for a resident to distinguish the simulation from reality.
I have no idea what this question means. Can you give me some examples of proposed moral imperatives that are existence-agnostic?
A moral imperative which references something which may or may not be exemplified; it doesn’t change if that which it references does not exist.
“Maximize the density of the æther.” is such an imperative.
“Include God when maximizing total utility.” is the version I think you are using (with ‘God’ being the creator of the simulation; I think that the use of the religious referent is appropriate because they have the same properties.)
So, if I’m understanding you: when my father was alive, I endorsed “Don’t kill your father.” When he died I continued to endorse it just as I had before. That makes “Don’t kill your father” a moral imperative which points towards something existence-agnostic, on your account… yes?
I have no idea what you’re on about by bringing God into this.
No, because fathers exist.
“Maximize the amount of gyration and gimbling of slithy toves” would be a better example.
I’m using God as a shorthand for the people running the simulation. I’m not introducing anything from religion but the name for something with that power.
OK; thanks for the clarification.
I don’t think a moral imperative can meaningfully include a meaningless term.
I do think a moral imperative can meaningfully include a meaningful term whose referent doesn’t currently exist in the world.
Also, it can be meaningful to make a moral assertion that depends on an epistemically unreachable state. For example, if I believe (for whatever reason) that I’ve been poisoned and that the pill in my hand contains an antidote, but in fact I haven’t been poisoned and the pill is poison, taking the pill is in fact the wrong thing to do, even though I can’t know that.
I prefer to have knowable morality: I must make decisions not with information about the world, but only with my beliefs.
For example, it is wrong to pull the trigger of a gun aimed at an innocent person without knowing if it is loaded. The expected outcome is what matters, not the actual outcome.
Well, I certainly agree that we make decisions based on our beliefs (I would also say that our beliefs are, or at least can be, based on information about the world, but I understand you here to be saying that we must make decisions without perfect information about the world, which I agree with).
That said, I think you are eliding morality and decision procedures, which I think elides an important distinction.
For example, if at time T1 the preponderance of the evidence I have indicates the pill is an antidote, and at some later time T2 the preponderance of the evidence indicates that the pill is poison, a sensible decision theory says (at T1) to take the pill and (at T2) not to take the pill.
But to say that taking the pill is morally right at T1 and not-taking the pill is morally right at T2 seems no more justified to me than to say that the pill really is an antidote at T1 and is poison at T2. That just isn’t the case, and a morality or an ontology that says it is the case is simply mistaken. The pill is always poison, and taking the pill is therefore the wrong thing to do, whether I know it or not.
I guess you could say that I prefer that my morality, like my ontology, be consistent rather than knowable.
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Also, thinking about this some more:
Suppose I have two buttons, one red and one green. I know that one of those buttons (call it “G”) creates high positive utility and the other (“B”) creates high negative utility. I don’t know whether G is red and B green, or the other way around.
On your account, if I understand you correctly, to say “pressing G is the right thing to do” is meaningless, because I can’t know which button is G. Pressing G, pressing B, and pressing neither are equally good acts on your account, even though one of them creates high positive utility and the other creates high negative utility. Is that right?
On my account, I would say that the choice between red and green is a question of decision theory, and the choice between G and B is a question of morality. Pressing G is the right thing to do, but I don’t know how to do it.
‘Pressing a button’ is one act, and ‘pressing both buttons’ and ‘pressing neither button’ are two others. If you press a button randomly, it isn’t morally relevant which random choice you made.
What does it mean to choose between G and B, when you have zero relevant information?
(shrug) It means that I do something that either causes G to be pressed, or causes B to be pressed. It means that the future I experience goes one way or another as a consequence of my act.
I have trouble believing that this is unclear; I feel at this point that you’re asking rhetorical questions by way of trying to express your incredulity rather than to genuinely extract new knowledge. Either way, I think we’ve gotten as far as we’re going to get here; we’re just going in circles.
I prefer a moral system in which the moral value of an act relative to a set of values is consistent over time, and I accept that this means it’s possible for there to be a right thing to do even when I don’t happen to have any way of knowing what the right thing to do is… that it’s possible to do something wrong out of ignorance. I understand you reject such a system, and that’s fine; I’m not trying to convince you to adopt it.
I’m not sure there’s anything more for us to say on the subject.
Well, it’s not nonsense, but it’s imprecise.
One thing that can mean is that the action had a net positive result globally, but negative results in various local frames. I assume that’s not what you mean here though, you mean had a bad outcome overall.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that’s what you mean here.
Again, the language is ambiguous:
Moral “should”—yes, I should assist iff my assistance will be successful (assuming that saving the person’s life is a good thing).
Decision-theory “should”—I should assist if the expected value of my assistance is sufficiently high.
Assuming that winning the bet is moral, then betting irresponsibly was the morally right thing to do, though I could not have known that, and it was therefore an incorrect decision to make with the data I had.
Same reasoning.
All right.
Again, not quite. It’s possible for someone to accurately determine the expected results of a decision, but the actual results to vary significantly from the expected. Take a typical parimutuel gambling-for-cash scenario; the expected outcome is typically that the house gets a little richer, and all of the gamblers get a little poorer. That outcome literally never happens, according to the rules of the game.
I agree, but this seems entirely tangential to the points either of us were making.
Once again: why? Why privilege your simulation? Why not do the same for your planet? Your species? Your country? (Do you implement some of these?)
Because my simulation (if I am in one) includes all of my existence. Meanwhile, a simulation run inside this existence contains only mathematical constructs or the equivalent.
Surely you don’t think that your mental model of me deserves to have its desires considered in addition to mine? You use that model of me to estimate what I value, which enters into your utility function. To also include the model’s point of view is double-counting the map.
My “mental model of you” consists of little more than a list of beliefs, which I then have my brain pretend it believes. In your case, it is woefully incomplete; but even the most detailed of those models are little more than characters I play to help predict how people would really respond to them. My brain lacks the knowledge and computing power to model people on the level of neurons or atoms, and if it had such power I would refuse to use it (at least for predictive purposes.)
OTOH, I don’t see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don’t have qualia? Do you think they don’t have souls? Do you think they are exactly the same as you, but don’t care?
Does Dwarf Fortress qualify as a simulation? If so, is there a moral element to running it?
Does f’(), which is the perfect simulation function f(), modified such that a cake appears in my cupboard every night, qualify?
Dorfs aren’t conscious.
To reiterate:
Ok, entities which exist only in simulation aren’t conscious. (or, if I am in a simulation, there is some characteristic which I lack which makes me irrelevant to the upstream entity.)
That seems to be a pretty clear answer to your questions.
No, it’s really not.
What is this mysterious characteristic? How do you know about it? Is it possible to create a sim that does have it? Why should I care if someone has this characteristic, if they act just as intelligent and conscious as you do, and inspection of the source code reveals that you do so for the same reasons?
You were able to claim that one set of simulated individuals wasn’t conscious, but didn’t say how others were different.
What does it mean to inspect the source code of the universe, or of the simulation from within the simulation?
And the core difference seems to be that I don’t think simulated people can be harmed in the same sense that physical people can be harmed, and you disagree. Is that an apt summary as you see it?
Eh? A dorf is just a few lines of code. If you built a robot with the same thought process, it wouldn’t be conscious either.
No, you inspect the source code of the simulation, and check it does things for the same reason as the physical version.
Although there’s no reason, in theory, why a sim couldn’t read its own source code, so I’m not sure I understand your objection.
You claimed to know (a priori?) that any level of simulation below your own would be different in such a way that we shouldn’t care about their suffering. You refused to state what this difference was, how it came about, or indeed answer any of my questions, so I don’t know how much we disagree. I don’t know what your position is at all.
A matrix lord copies you. Both copies are in the same layer of simulation you currently occupy. Is the copy a person? Is it you?
A matrix lord copies your friend, Bob. Is the copy still Bob?
A matrix lord copies you. The copy is in another simulation, but one no “deeper” than this one. Is the copy a person? Is it you?
A matrix lord copies you. The copy is one layer “deeper” than this one. Is the copy a person? Is it you?
A matrix lord copies your friend Bob. Is the copy a person? Is it Bob?
A matrix lord copied you, without your knowledge. You are one layer deeper than the original. Are you a person? Are you still “you”?
You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
A matrix lord scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
A matrix lord copies your brain. The copy is one layer deeper than the original. They connect a robot in your original layer to the simulation. Is the result a person? Is it you?
A matrix lord tortures you for a thousand years. Is this wrong, in your estimation?
A matrix lord tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
A matrix lord tortures you for a thousand years, then resets the program to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
A matrix lord tortures you for a thousand years, then resets the program. Is this wrong in your estimation?
How many lines of code are required for a sim to be conscious? “A few” is too few, and even “an entire universe” is not enough if the simulated universe is too different from our own. I say no amount is adequate.
A perfect copy that appears by magic begins identical with the original, but the descendant of the copy is not identical with the descendant of the original.
It’s possible to have a universal Turing machine without having the code which, when run on that machine, implements the universal Turing machine. If there exists more than one UTM, it is impossible by looking at the output to tell which one is running a given program. Similarly, examining the source of a simulation also requires knowing the physics of the world in which the simulation runs.
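To make the second point concrete, here is a minimal sketch (the two-instruction “machine” and the sample program are invented purely for this illustration): two structurally different interpreters run the same list of numbers and produce identical output, so the output alone cannot tell you which machine is underneath.

    # Toy illustration: the same "program" (a list of numbers) run on two
    # different machines yields identical output, so the output alone cannot
    # identify the machine. The instruction set is invented for this example.
    PROGRAM = [1, 5, 1, 7, 2, 0]   # push 5, push 7, add, halt

    def run_stack_machine(prog):
        """Interpret the numbers directly as stack-machine instructions."""
        stack, i = [], 0
        while prog[i] != 0:        # 0 = HALT
            if prog[i] == 1:       # 1 = PUSH the next literal
                stack.append(prog[i + 1]); i += 2
            elif prog[i] == 2:     # 2 = ADD the top two values
                b, a = stack.pop(), stack.pop(); stack.append(a + b); i += 1
        return stack[-1]

    def run_via_translation(prog):
        """A structurally different machine: compile the numbers into a Python
        expression string and let Python's own evaluator do the work."""
        exprs, i = [], 0
        while prog[i] != 0:
            if prog[i] == 1:
                exprs.append(str(prog[i + 1])); i += 2
            elif prog[i] == 2:
                b, a = exprs.pop(), exprs.pop(); exprs.append(f"({a} + {b})"); i += 1
        return eval(exprs[-1])

    print(run_stack_machine(PROGRAM))    # 12
    print(run_via_translation(PROGRAM))  # 12, same output, different machine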
For all of your torture suggestions, clarify. Do you mean “a matrix lord edits a simulation of X to be torture. Does this act violate the morality of the simulation?”? It’s currently unclear where the victim is in relation to both the torturer and the judgement.
Funny.
Even if the copy is a sim?
Um, yes. Hence “in theory”.
Torture means inflicting large amounts of pain. The precise method may be assumed to be one that does not interfere with the question (e.g. they haven’t been turned into anti-orgasmium.)
Where questions touch on morality, I stated whose morality was being referred to. Bear in mind that different questions may ask different things. The same applies to “where the victim is in relation to both the torturer and the judgement.”
Should I conclude that ‘a matrix lord’ is altering a simulation, and I, the judge, am in the next cubicle? If so, he is doing amoral math and nobody cares. The simulation might simulate caring, but that can be modified out without terminating the simulation, because it isn’t real.
There is a difference between simulating something and doing it, regardless of the accuracy of the simulation.
Confirm that you don’t think a Turing machine can be or contain consciousness?
“A matrix lord” refers to one of the simulators who control this layer. In other words, they are one layer higher than you, and basically omnipotent from our perspective.
A superintelligence could modify you so you stop caring. I’m guessing you wouldn’t be OK with them torturing you?
What difference?
Why … why would I think that? I’m the one defending sim rights, remember?
In other words, the matrix lord IS the laws of physics. They exist beyond judgement from this layer.
Actual things are made out of something besides information; there is a sense in which concrete things exist and abstract things (like simulations) don’t exist.
Because that position requires that either a set of numbers or every universal Turing machine is conscious and capable of experiencing harm. Plus if a simulation can be conscious you need to describe a difference between a conscious sim and a dorf. Both of them are mathematical constructs, so your original objection is invalid. Dorfs have souls, noted as such in the code; how are the souls of sims qualitatively different?
I don’t claim to have a perfect nonperson predicate, I’m attacking yours as excluding entities that are clearly conscious.
Well, they can manipulate them. I’ll specify that they are roughly equivalent to a sim of a human at the same level, if that helps.
Could you just go down the list and answer the questions?
Is there? Really? A sim isn’t floating in platonic space, you know.
A set of numbers can’t be conscious. A set of numbers interpreted as computer code and run can be. Or, for that matter, interpreted as genetic code and cloned.
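A toy illustration of that distinction (the numbers and the two-instruction machine below are invented for the example): the same list of numbers is inert when read as data, but exhibits behaviour once something interprets it as instructions and runs it.

    # The same numbers, inert vs. interpreted. The tiny "machine" is invented
    # purely for illustration (1 = push the next value, 2 = add, 0 = halt).
    NUMBERS = [72, 105, 33, 1, 3, 1, 4, 2, 0]

    # Interpretation 1: plain data (read the first three values as ASCII text).
    print(bytes(NUMBERS[:3]).decode("ascii"))   # "Hi!"

    # Interpretation 2: run the remaining values as instructions.
    def tiny_machine(code):
        stack, i = [], 0
        while code[i] != 0:
            if code[i] == 1:
                stack.append(code[i + 1]); i += 2
            elif code[i] == 2:
                stack.append(stack.pop() + stack.pop()); i += 1
        return stack[-1]

    print(tiny_machine(NUMBERS[3:]))            # 7: behaviour, not just a value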
As noted above, I don’t claim to have a perfect nonperson predicate. However, since you ask: a sim is doing everything the original (who I believe was probably as conscious as I am, based on their actions and brain scans) was. When they see a red ball, virtual neurons light up in the same patterns the real ones did; when they talk about experiencing the qualia of a red ball, the same thoughts run through their mind, and if I were smart enough to decode them from real neurons I could decode them from virtual ones too.
And both humans and rocks (or insects) are physical constructs. My objection is not that it is a mathematical construct, but that it is one too simple to support the complexity of conscious thought.
Eh? Writing “soul” on something does not a person make. Writing “hot” on something does not a fire make, either.
Now that I have the time and capability:
It is a person, and it is as much me as I am: it has a descendant one tick later which is as much me’ as the descendant of the other copy. Here we hit the ship problem.
The descendant of the copy is as much Bob as the descendant of the original.
That depends on the rules of the simulation in which the copy exists; assuming it is only the starting condition which differs, the two are indistinguishable.
If ‘person’ is understood to mean ‘sentient’, then I conclude that the robot is a person. If ‘person’ is understood to mean ‘human’, then I conclude that the robot is a robot.
Assuming that the premise is possible, and assuming that life support is also maintained identically (the brain of the robot has identical blood flow through it, which requires that the robot brain have a physically identical structure), the robot is as sentient as I am and its decisions are defined to be identical to mine. It is not as much me as the direct descendant of me is.
Assuming that the robot uses a perfect simulation of a brain instead of a real one, it is as sentient as if it were using a brain. It is not identical with the previous robot nor with me.
Undesirable. Since the matrix lord does not make decisions in any context I am aware of, it can’t be wrong.
Ditto
‘Same’ has lost meaning in this context.
I can’t tell the difference between this case and any contrary case; either way, I observe a universe in which I have not yet been tortured.
It is not always meaningful to refer to ‘human’ when referencing a different level. What is a matrix lord, and how do I tell the difference between a matrix lord and physics?
Well, a matrix lord can talk, assume a human shape, respond to verbal requests etc., as well as modify the current laws of physics (including stuff like conjuring a seat out of thin air, which is more like using physical law that was clearly engineered for their benefit).
However, the questions are meant to be considered in the abstract; please assume you know with certainty that this occurred, for simplicity.
The matrix lord can cause a person to poof into (or out of) existence, but the person so created is not a matrix lord. If the matrix lord is communicating with me (for example, by editing the air density in the room to cause me to hear spoken words, or by editing my brain so that I hear the words, or by editing my brain so that I believe I heard them), the edits used by the lord are different from him.
I don’t see what the distinction is between “Objects have now accelerated toward each other by an amount proportional to the product of their masses divided by the cube of the distance between them” and “There is now a chair here.” Both are equally meaningful as ‘physical law’.
Fair enough. Your evidence that the Matrix Lord exists is probably laws of physics being changed in ways that appear to be the work of intelligence, and conveying information claiming to be from a Matrix Lord.
Or they could have edited your brain to think so; the point is that you are reasonably certain that the events described in the question actually happened.
So, I am reasonably certain that I am (part of?) a number which is being processed by an algorithm.
That breaks all of my moral values, and I have to start again from scratch.
Cop-out: I decide whatever the matrix lord chooses for me to decide.
Fair enough.
How about if it’s Omega, and you’re real as far as you can tell:
And a new one, to balance out the question that required you to be in a sim:
Omega scans you and simulates you. The simulation tells you that it’s still conscious, experiences qualia etc., and admits this seems to contradict its position on the ethics of simulations. Do you change your mind on anything?
Define “duplicates”, “original”, and “same” well enough to answer the Ship of Theseus problem.
Can I summarize the last question to “Omega writes a computer program which outputs ‘I am conscious, experience qualia, etc., and this contradicts my position on the ethics of simulations’”?
If not, what additional aspects need be included? If so, the simulation is imperfect because I do not believe that such a contradiction would be indicated.
Well, it talks to you first. IDK what you would talk about with a perfect copy of yourself, but it says what you would expect an actual conscious copy to say (because it’s a perfect simulation.)
You don’t think finding yourself as a conscious sim would indicate sims are conscious? Because I assumed that’s what you meant by
So, it passes the Turing test, as I adjudicate it? It’s a simulation of me which sits at a computer and engages with me over the internet?
When I tell you that you are the copy of me, and prove it without significantly changing the conditions of the simulation or breaking the laws of physics, I predict that you will change your position. Promptly remove the nearest deck of cards from the pack, and throw it against the ceiling fairly hard. Only all of the black cards will land face up.
When I recognize that numbers in general are conscious entities which experience all things simultaneously (proof: consider the set of all universal Turing machines; select a UTM which takes this number as input and simulates a world with some set of arbitrary conditions), I stop caring about conscious entities and reevaluate what is and is not an agent.