I’d like to know people who agree with me that mental models of people can often be people. Consider contacting me if that’s you.
Nox ML
Mental Models Of People Can Be People
I like the distinctions you make between sentient, sapient, and conscious. I would like to bring up some thoughts about how to choose a morality that I think are relevant to your points about the death of cows and about transient beings, points which I disagree with.
I think that when choosing our morality, we should do so under the assumption that we have been given complete omnipotent control over reality and that we should analyze all of our values independently, not taking into consideration any trade-offs, even when some of our values are logically impossible to satisfy simultaneously. Only after doing this do we start talking about what’s actually physically and logically possible and what trade-offs we are willing to make, while always making sure to be clear when something is actually part of our morality vs when something is a trade-off.
The reason for this approach is to avoid accidentally locking trade-offs into our morality that might later turn out not to be necessary. And the nice thing about it is that if we have not accidentally locked any trade-offs into our morality, this approach gives back exactly the morality we started with, so when it doesn’t return the same answer, I find that pretty instructive.
I think this applies to the idea that it’s okay to kill cows, because when I consider a world where I have to decide whether or not cows die, and this decision will not affect anything else in any way, then my intuition is that I slightly prefer that they not die. Therefore my morality is that cows should not die, even though in practice I think I might make similar trade-offs as you when it comes to cows in the world of today.
Something similar applies to transient computational subprocesses. If you had unlimited power and you had to explicitly choose if the things you currently call “transient computational subprocesses” are terminated, and you were certain that this choice would not affect anything else in any way at all (not even the things you think it’s logically impossible for it not to affect), would you still choose to terminate them? Remember that no matter what you choose here, you can still choose to trade things off the same way afterwards, so your answer doesn’t have to change your behavior in any way.
It’s possible that you still give the exact same answers with this approach, but I figure there’s a chance this might be helpful.
The reason I care if something is a person or not is that “caring about people” is part of my values. I feel pretty secure in taking for granted that my readers also share that value, because it’s a pretty common one and if they don’t then there’s nothing to argue about since we just have incompatible utility functions.
What would be different if it were or weren’t, and likewise what would be different if it were just part of our person-hood?
One difference that I would expect in a world where they weren’t people is that there would be some feature you could point to in humans which cannot be found in mental models of people, and for which there is a principled reason to say “clearly, anything missing that feature is not a person”.
I don’t personally think I’m making this mistake, since I do think that saying “the conscious experience is the data” actually does resolve my confusion about the hard problem of consciousness. (Though I am still left with many questions.)
And if we take reductionism as a strongly supported axiom (which I do), then necessarily any explanation of consciousness will have to be describable in terms of data and computation. So it seems to me that if we’re waiting for an explanation of experience that doesn’t boil down to saying “it’s a certain type of data and computation”, then we’ll be waiting forever.
The reason I reject all arguments of the form “mental models are embedded inside another person, therefore they are that person” is that such arguments prove too much. If a conscious AI were simulating you directly inside its main process, I think you would still qualify as a person in your own right, even though the AI’s conscious experience would contain all of your experiences in much the same way that your experience contains all the experiences of your character.
I also added an addendum to the end of the post which explains why I don’t think it’s safe to assume that you feel everything your character does the same way they do.
By pretty much every objective measure, the people who accept the doomsday argument in my thought experiment do better than those who don’t. So I don’t think it takes any additional assumptions to conclude that even selfish people should say yes.
From what I can tell, a lot of your arguments seem to be applicable even outside anthropics. Consider the following experiment. An experimenter rolls a fair 100-sided die. Then they ask someone to guess if they rolled a number >5 or not, giving them some reward if they guess correctly. Then they reroll and ask a different person, and repeat this 100 times. Now suppose I was one of these 100 people. In this situation, I could use reasoning that seems very similar to yours to reject any kind of action based on probability:
I either get the reward or not depending on whether the die landed on a number >5. Giving an answer based on expected value might maximize the total benefit of the 100 people in aggregate, but it doesn’t help me, because I can’t know if the die is showing >5 or not. It is correct to say that if everyone makes decisions based on expected utility, then they will have more reward combined. But I will only have more reward if the die is >5, and this was already determined at the time of my decision, so there is no fact of the matter about what the best decision is.
And granted, it’s true, you can’t be sure what the die is showing in my experiment, or which copy you are in anthropic problems. But the whole point of probability is reasoning when you’re not sure, so that’s not a good reason to reject probabilistic reasoning in either of those situations.
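To make the aggregate point concrete, here is a minimal simulation sketch of the die experiment (Python; the function and strategy names are purely illustrative assumptions, not anything from the original setup), comparing a guesser who acts on the probabilities with one who ignores them:

```python
import random

def run_trials(strategy, n_trials=100_000):
    """Return the fraction of trials in which the guesser earns the reward.

    Each trial: roll a fair 100-sided die, guess whether the roll is > 5,
    and get the reward if the guess is correct.
    """
    rewards = 0
    for _ in range(n_trials):
        roll = random.randint(1, 100)
        if strategy() == (roll > 5):
            rewards += 1
    return rewards / n_trials

always_yes = lambda: True                   # the expected-value answer: 95 of 100 faces are > 5
coin_flip = lambda: random.random() < 0.5   # refuses to act on the probabilities

print("always guess '>5':", run_trials(always_yes))  # ~0.95
print("guess at random:  ", run_trials(coin_flip))   # ~0.50
```

No individual guesser ever knows how their own die landed, but the strategy that uses the probabilities is the one each individual should want to have been following.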
I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.
I also think that truth is good in and of itself. I want to know the truth and I think it’s good in general when people know the truth.
Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too.
Telling the truth to a mental model means telling them that they are a mental model, not that they are a regular human. It means telling them that the world they think they live in is actually a small mental model living in your brain with a minuscule population.
And sure, it might technically be true that within the context of your mental models, they “live” inside the fictional world, so “it’s not a lie”. But not telling them that they are in a mental model is such an incredibly huge thing to omit that I think it’s significantly worse than the majority of lies people tell, even though it can technically qualify as a “lie by omission” if you phrase it right.
so I would expect simulating pain in such a way to be profoundly uncomfortable for the author.
I’ve given my opinion on this in an addendum added to the end of the post, since multiple people brought up similar points.
I can definitely create mental models of people who have a pain-analogue which affects their behavior in ways similar to how pain affects mine, without their pain-analogue causing me pain.
there’s no point in reducing this to a minimal Platonic concept of ‘simulating’ in which simulating excruciating pain causes excruciating pain regardless of physiological effects.
I think this is the crux of where we disagree. I don’t think it matters if pain is “physiological” in the sense of being physiologically like how a regular human feels pain. I only care if there is an experience of pain.
I don’t know of any difference between physiological pain and the pain-analogues I inflicted on my mental models which I would accept as necessary for it to qualify as an experience of pain. But since you clearly do think that there is such a difference, what would you say the difference is?
My best guess about what you mean is that you are referring to the part in the “Ethics” section where I recommend just not creating such mental models in the first place?
To some extent I agree that mortality doesn’t mean a being should never have lived, and indeed I am not against having children. However, after stumbling on the power to create lives that are entirely at my mercy and very high-maintenance to keep alive, I became more deontological in my approach to the ethics of creating lives. I think it’s okay to create lives, but you must put in a best effort to give them the best life that you can. For mental models, that includes keeping them alive for as long as you do, letting them interact with the world, and not lying to them. I think that following this rule leads to better outcomes than not following it.
That’s right. It’s why I included the warning at the top.
So what part of a mathematical universe do you find distasteful?
the idea that “2” exists as an abstract idea apart from any physical model
It’s this one.
Okay, but if actual infinities are allowed, then what defines small in the “made up of small parts”? Like, would tiny ghosts be okay because they’re “small”?
Given that you’re asking this question, I still haven’t been clear enough. I’ll try to explain it one last time. This time I’ll talk about Conway’s Game of Life and AI. The argument will carry over straightforwardly to physics and humans. (I know that Conway’s Game of Life is made up of discrete cells, but I won’t be using that fact in the following argument.)
Suppose there is a Game of Life board which has an initial state which will simulate an AI. Hopefully it is inarguable that the AI’s behavior is entirely determined by the cell states and GoL rules.
Now suppose that as the game board evolves, the AI discovers Peano Arithmetic, derives “2 + 2 = 4”, and observes that this corresponds to what happens when it puts 2 apples in a bag that already contains 2 apples (there are apple-like things in the AI’s simulation). The fact that the AI derives “2 + 2 = 4”, and the fact that it observes a correspondence between this and the apples, has to be entirely determined by the rules of the Game of Life and the initial state.
In case this seems too simple and obvious so far and you’re wondering if you’re missing something, you’re probably not missing anything, this is meant to be simple and obvious.
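To make the determinism claim concrete, here is a minimal sketch of the Game of Life update rule (Python, purely illustrative; the names are my own): the next board state is a pure function of the current state and the rule, with no other inputs.

```python
from collections import Counter

def gol_step(live_cells):
    """One Game of Life step: a pure function of the current set of live cells.

    A cell is alive next step iff it has exactly 3 live neighbours,
    or it is currently alive and has exactly 2 live neighbours.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# Everything the board ever does (including, in the thought experiment, the AI
# deriving "2 + 2 = 4") follows from an initial set of live cells plus this rule.
board = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
for _ in range(4):
    board = gol_step(board)
```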
If the AI notices how deep and intricate math is, how its many branches seem to be greatly interconnected with each other, and postulates that math is unreasonably effective, this also has to be caused entirely by the initial state and rules of the Game of Life. And if the Game of Life board is made up of sets embedded inside some model of set theory, or if it’s not embedded in anything and is just the only thing in all of existence, in either case nothing changes about the AI’s observations or actions and nothing ought to change about its predictions!
And if the existence or non-existence of something changes nothing about what it will observe, then using its existence to “explain” any of its observations is a contradiction in terms. This means that even its observation of the unreasonable effectiveness of math cannot be explained by the existence of a mathematical universe outside of the Game of Life board.
Connecting this back to what I was saying before, the “small parts” here are the cells of the Game of Life. You’ll note that it doesn’t matter if we replace the Game of Life by some other similar game where the board is a continuum. It also doesn’t even matter if the act of translating statements about the AI into statements about the board is uncomputable. All that matters is that the AI’s behavior is entirely determined by the “small parts”.
You might have noticed a loophole in this argument, in that even though the existence of math cannot change anything past the initial board state, if the board was embedded inside a model of set theory, then it would be that model which determined the initial state and rules. However, since the existence of math is compatible with every consistent set of rules and literally every initial board state, knowing this would also give no predictive power to the AI.
At best the AI could try to argue that being embedded inside a mathematical universe explains why the Game of Life rules are consistent. But then it would still be a mystery why the mathematical universe itself follows consistent rules, so in the end the AI would be left with just as many questions as it started with.
My view is compatible with the existence of actual infinities within the physical universe. One potential source of infinity is, as you say, the possibility of infinite subdivision of spacetime. Another is the possibility that spacetime is unboundedly large. I don’t have strong opinions one way or the other on whether these possibilities are true.
The assumption is that everything is made up of small physical parts. I do not assume or believe that it’s easy to predict the large physical systems from those small physical parts. But I do assume that the behavior of the large physical systems is determined solely from their smaller parts.
The tautology is that any explanation about large-scale behavior that invokes the existence of things other than the small physical parts must be wrong, because those other things cannot have any effect on what happens. Note that this does not mean that we need to describe everything in terms of quantum physics. But it does mean that a proper explanation must only invoke abstractions that we in principle would be able to break down into statements about physics, if we had arbitrary time and memory to work out the reduction. (Now I’ve used the word reduction again, because I can’t think of a better word, but hopefully what I mean is clear.)
This rules out many common beliefs, including the platonic existence of math separately from physics, since the platonic existence of math cannot have any effect on why math works in the physical world. It does not rule out using math, since every known instance of math, being encoded in human brains / computers, must in principle be convertible into a statement about the physical world.
I completely agree that reasoning about worlds that do not exist reaches meaningful conclusions, though my view classifies that as a physical fact (since we produce a description of that nonexistent world inside our brains, and this description is itself physical).
it becomes apparent that if our physical world wasn’t real in a similar sense, literally nothing about anything would change as a result.
It seems to me like if every possible world is equally not real, then expecting a pink elephant to appear next to me after I submit this post seems just as justified as any other expectation, because there are possible worlds where it happens, and ones where it doesn’t. But I have high confidence that no pink elephant will appear, and this is not because I care more about worlds where pink elephants don’t appear, but because nothing like that has ever happened before, so my priors that it will happen are low.
For this reason I don’t think I agree that nothing would change if the physical world wasn’t real in a similar sense as hypothetical ones.
I will refer to this other comment of mine to explain this miscommunication.
Reasoning being real and the thing it reasons about being real are different things.
I do agree with this, but I am very confused about what your position is. In your sibling comment you said this:
Possibly the fact that I perceive the argument about reality of physics as both irrelevant and incorrect (the latter being a point I didn’t bring up) caused this mistake in misperceiving something relevant to it as not relevant to anything.
The existence of physics is a premise in my reasoning, which I justify (but cannot prove) by using the observation that humanity has used this knowledge to accomplish incredible things. But you seem to base your reasoning on very different starting premises, and I don’t understand what they are, so it’s hard to get at the heart of the disagreement.
Edit: I understand that using observation of the physical world to justify that it exists is a bit circular. However, I think that premises based on things that everyone has to at least act like they believe are the weakest possible sort of premise one can have. I assume you also must at least act like the physical world is real, otherwise you would not be alive to talk to me.
Okay, let’s forget the stuff about the “I”, you’re right that it’s not relevant here.
For existence in the sense that physics exists, I don’t see how it’s relevant for reasoning, but I do see how it’s relevant to decision making
Okay, I think my view actually has some interesting things to say about this. Since reasoning takes place in a physical brain, reasoning about things that don’t exist can be seen as a form of physical experiment, where your brain builds a description which has properties which we assume the thing that doesn’t exist would have if it existed. I will reuse my example from my previous post to explain what I mean by this:
To be more clear about what I mean by mathematical descriptions “sharing properties” with the thing it describes, we can take as example the real numbers again. The real numbers have a property called the least upper bound property, which says that every nonempty collection of real numbers which is bounded above has a least upper bound. In mathematics, if I assume that a variable x is assigned to a nonempty set of real numbers which is bounded above, I can assume a variable y which points to its least upper bound. That I can do this is a very useful property that my description of the reals shares with the real numbers, but not with the rational numbers or the computable real numbers.
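As a concrete instance of that difference (a standard textbook example, not something from the comment I’m replying to): the set of rationals whose square is less than 2 is nonempty and bounded above, yet it only acquires a least upper bound once we move from the rationals to the reals.

```latex
% A nonempty set of rationals that is bounded above but has no least
% upper bound within Q:
S = \{\, q \in \mathbb{Q} : q^2 < 2 \,\}
% Within R the least upper bound exists: \sup S = \sqrt{2}, which is
% irrational, so inside Q every rational upper bound of S can be beaten
% by a smaller rational upper bound.
```

(The computable reals fail the property too, e.g. via Specker sequences: a computable, bounded, increasing sequence of rationals whose supremum is not a computable real.)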
So my view would say that reasoning is not fundamentally different from running experiments. Experiments seem to me to be in a gray area with respect to this reasoning/decision-making dichotomy, since you have to make decisions to perform experiments.
I don’t say in this post that everything can be deduced from bottom up reasoning.
The fact that I live in a physical world is just a fact that I’ve observed, it’s not a part of my values. If I lived in a different world where the evidence pointed in a different direction, I would reason about the different direction instead. And regardless of my values, if I stopped reasoning about the physical world, I would die, and this seems to me to be an important difference between the physical world and other worlds I could be thinking about.
Of course this is predicated on the concept of “I” being meaningful. But I think that this is better supported by my observations than the idea that every possible world exists and the idea that probability just represents a statement about my values.
Suppose when you are about to die, time freezes, and Omega shows up and tells you this: “I appear once to every human who has ever lived or will live, right when they are about to die. Answer this question with yes or no: are you in the last 95% of humans who will ever live in this universe? If your answer is correct, I will bring you to this amazing afterlife that I’ve prepared. If you guess wrong, you get nothing.” Do you say yes or no?
Let’s look at actual outcomes here. If every human says yes, 95% of them get to the afterlife. If every human says no, 5% of them get to the afterlife. So it seems better to say yes in this case, unless you have access to more information about the world than is specified in this problem. But if you accept that it’s better to say yes here, then you’ve basically accepted the doomsday argument.
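For what it’s worth, that counting doesn’t depend on how many humans there will eventually be. A throwaway sketch (Python; the names are purely illustrative) of the two uniform policies:

```python
def afterlife_fraction(total_humans, everyone_says_yes):
    """Fraction of all humans who reach the afterlife under a uniform policy.

    "Yes" is correct exactly for those in the last 95% of all humans by birth
    rank; "no" is correct exactly for the first 5%.
    """
    first_5_percent = total_humans // 20
    last_95_percent = total_humans - first_5_percent
    correct = last_95_percent if everyone_says_yes else first_5_percent
    return correct / total_humans

# Saying "yes" gets ~95% of people to the afterlife regardless of the total.
for n in (10_000, 10**6, 10**9):
    print(n, afterlife_fraction(n, True), afterlife_fraction(n, False))
```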
However, an important thing to note is that when using the doomsday argument, there will always be 5% of people who are wrong. And those 5% will be the first people who ever lived, whose decisions in many ways have the biggest impact on the world. So in most situations, you should still be acting like there will be a lot more people in the future, because that’s what you want the first 5% of people to have been doing.
More generally, my procedure for resolving this type of confusion is similar to how this post handles the Sleeping Beauty problem. Basically, probability is in the mind, so when a thought experiment messes with the concept of “mind”, probability can become underspecified. But if you convert it to a decision problem by looking at the actual outcomes and rating them based on your preferences, things start making sense again.