Panpsychism is correct about the genuineness and subjectivity of experiences, but you can quantify how much you care about the other differences between the experiences of a human and a zygote however you want.
If we live in naive MWI, an IBP agent would not care for good reasons, because naive MWI is a “library of babel” where essentially every conceivable thing happens no matter what you do.
Doesn’t the frequency of amplitude-patterns change depending on what you do? So an agent can care about that instead of point-states.
In the case of teleportation, I think teleportation-phobic people are mostly making an implicit error of the form “mistakenly modeling situations as though you are a Cartesian Ghost who is observing experiences from outside the universe”, not making a mistake about what their preferences are per se.
Why not both? I can imagine that someone would be persuaded to accept teleportation/uploading if they stopped believing in a physical Cartesian Ghost. But it’s possible that if you remind them that continuity of experience, like a table, is just a description of a physical situation and not a divinely blessed, necessary value, that would be enough to tip the balance toward them valuing carbon or whatever. It’s bad to be wrong about Cartesian Ghosts, but it’s also bad to think that you don’t have a choice about how you value experience.
Analogy: When you’re writing in your personal diary, you’re free to define “table” however you want. But in ordinary English-language discourse, if you call all penguins “tables” you’ll just be wrong. And this fact isn’t changed at all by the fact that “table” lacks a perfectly formal physics-level definition.
You’re also free to define “I” however you want in your values. You’re only wrong if your definitions imply something wrong about physical reality. But defining “I” and “experiences” in such a way that you will not experience anything after teleportation is possible without implying anything physically wrong.
You can be wrong about the physical reality of teleportation. But even after you have figured out that there is no additional physical process going on that kills your soul, except for the change of location, you can still move from “my soul crashes against an asteroid” to “soul-death in my values means a sudden change in location” instead of to “my soul remains alive”.
It’s not like I even expect you specifically to mean “not liking teleportation is necessarily irrational” all that much. It’s just that saying that there should be an actual answer to questions about “I” and “experiences” pushes people toward moral realism.
I’m asking how physicists in the laboratory know that their observations are sharp-valued and classical.
If we were just talking about word definitions and nothing else, then sure, define “self” however you want. You have the universe’s permission to define yourself into dying as often or as rarely as you’d like, if word definitions alone are what concerns you.
But this post hasn’t been talking about word definitions. It’s been talking about substantive predictive questions like “What’s the very next thing I’m going to see? The other side of the teleporter? Or nothing at all?”
There should be an actual answer to this, at least to the same degree there’s an answer to “When I step through this doorway, will I have another experience? And if so, what will that experience be?”
Why? If “I” is an arbitrary definition, then “When I step through this doorway, will I have another experience?” depends on this arbitrary definition and so is also arbitrary.
But I hope the arguments I’ve laid out above make it clear what the right answer has to be: You should anticipate having both experiences.
So you always anticipate all possible experiences, because of the multiverse? And if they are weighted, then wouldn’t discovering that you are made of mini-yous change your anticipation even without changing your brain state?
What’s the evidence for these “sharp-valued classical observations” being real things?
In particular, a many-worlder has to discard unobserved results in the same way as a Copenhagenist—it’s just that they interpret doing so as the unobserved results existing in another branch, rather than being snipped off by collapse.
A many-worlder doesn’t have to discard unobserved results—you may care about other branches.
The wrong part is mostly in https://arxiv.org/pdf/1405.7577.pdf, but: indexical probabilities of being a copy are value-laden—it seems like the derivation first assumes that branching happens globally and then assumes that you are forbidden to count different instantiations of yourself that were created by this global process.
“The” was just me being bad at English. What I mean is:

1. There is probably a way to mathematically model true stochasticity. Properly, not as many-worlds.
2. Math being deterministic shouldn’t be a problem, because the laws of a truly stochastic world are not stochastic themselves.
3. I don’t expect any such model to be simpler than the many-worlds model. And that’s why you shouldn’t believe in true stochasticity.
4. If 1 is wrong and it’s not possible to mathematically model true stochasticity, then it’s even worse, and I would question your assertion that true stochasticity is coherent.
5. If you say that mathematical models turn out complex because deterministic math is an unnatural language for true stochasticity, then how do you compare them without math? The program that outputs an array is also simpler than the one that outputs one sample from that array (a toy version of this comparison is sketched below).
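As a toy illustration of that last point (just a sketch of the program-length comparison; the specific scripts and the hard-coded sample are mine, not anyone’s formalism): the “everything” program only needs to describe the generator, while the “one sample” program also has to carry the particular sample it outputs.

```python
# Toy Kolmogorov-style comparison: describing "all outcomes" vs "one particular outcome".
# Illustrative only; real complexity comparisons are about minimal program length,
# not these specific scripts.

from itertools import product

N = 20  # number of binary branching events (coin flips)

def all_outcomes():
    """'Everything is real': enumerate every length-N bitstring from a tiny description."""
    for bits in product("01", repeat=N):
        yield "".join(bits)

def one_outcome():
    """'One outcome is real': output a single specific bitstring.
    The program's description has to include the ~N bits of that string verbatim."""
    return "01101001110010110101"  # hypothetical sampled outcome, hard-coded

if __name__ == "__main__":
    print(sum(1 for _ in all_outcomes()))  # 2**20 outcomes, short description
    print(one_outcome())                   # 1 outcome, but the sample itself is in the code
```

The only point is that the full-ensemble description never pays for the index of the realized branch, while the single-outcome description does.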
How would you formulate this axiom?
Ugh, I’m bad at math. Let’s say, given the space of outcomes $O$ and a reality predicate $R$, the axiom would be $\exists!\, o \in O : R(o)$.
Carroll’s additional assumptions are not relied on by the MWI.
I don’t know, any model you like? Space of outcomes with a “one outcome is real” axiom. The point is that I can understand the argument for why true stochasticity may be coherent, but I don’t get why it would be better.
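To spell out the kind of structure I mean (a minimal sketch; the notation, including the measure $\mu$, is mine and not taken from any particular formalism):

```latex
% Minimal "one outcome is real" model: the usual outcome space and measure,
% plus a reality predicate R constrained to pick out exactly one outcome.
\[
  \mathcal{M} = (O,\ \mu,\ R), \qquad
  \mu : \Sigma_O \to [0,1], \qquad
  R \subseteq O, \qquad
  \exists!\, o \in O : R(o)
\]
```

Nothing in the axiom is itself stochastic, which is why deterministic math isn’t an obstacle; the cost is the extra structure $R$, which is why I don’t expect this to come out simpler than plain many-worlds.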
I disagree with this part—if Harry does the quantum equivalent of flipping an unbiased coin, then there’s a branch of the universe’s wavefunction in which Harry sees heads and says “gee, isn’t it interesting that I see heads and not tails, I wonder how that works, hmm why did my thread of subjective experience carry me into the heads branch?”, and there’s also a branch of the universe’s wavefunction in which Harry sees tails and says “gee, isn’t it interesting that I see tails and not heads, I wonder how that works, hmm why did my thread of subjective experience carry me into the tails branch?”. I don’t think either of these Harrys is “preferred”.
This is how it works in MWI without additional postulates. But if you postulate the probability that you will find yourself somewhere, then you are postulating a difference between the case where you have found yourself there and the case where you haven’t. Having a number for how much you prefer something is the whole point of indexical probabilities. And as the probability of some future “you” goes to zero, this future “you” goes to not being the continuation of your subjective experience, right? Surely that would make this “you” dispreferred in some sense?
such formalisms are unwieldy
Do you actually need any other reason to not believe in True Randomness?
that’s just passing the buck to the one who interprets the formalism
Any argument is just passing the buck to the one who interprets the language.
If the simplest assumption is that the world is just quantum mechanical
Isn’t it a simpler assumption? Mathematically, “one thing is real” is not simpler than “everything is real”. And I wouldn’t call a “philosophically, but not mathematically, coherent” objection “technical”. Like, are you saying the mathematical model of true stochasticity (with some “one thing is real” formalization) is somehow incomplete or imprecise or wrong because mathematics is deterministic? Because it’s not like the laws of a truly stochastic world are themselves stochastic.
My intuition finds zero problem with the many-worlds interpretation.
Why do you care about the Born measure?
this is in some sense the only thing I know for sure
You don’t. All your specific experiences are imprecise approximations: you can’t be sure what exact color you saw for how many nanoseconds, and you can’t be sure that all of your brain, except a small part implementing only the current thought, hasn’t evaporated a microsecond ago. So you can have imprecise models of a fish brain the same way you have imprecise models of your brain—your awareness of your brain is causally connected to your brain the same way your thoughts can be causally connected to a fish brain. You just can’t be fully a fish.
I would be curious to know what you know about my box trying to solve the meta-problem.
Sounds unethical. At least don’t kill them afterwards.
Any conclusions would raise the usual questions about how much the AI’s reasoning is about real things and how much it is about extrapolating human discourse. The actual implementation of this reasoning in the AI could be interesting, especially given that the AI would have different assumptions about its situation. But it wouldn’t necessarily be the same as in a human brain.
Philosophically, I mostly don’t see how that is different from introspecting your sensations and thoughts and writing an isomorphic Python program. I guess Chalmers may agree that we have as much evidence of AIs’ consciousness as of other humans’, but would still ask why the thing that implements this reasoning is not a zombie.
But the most fun cases to think about are the ones where it wouldn’t apparently solve the problem: like if the reasoning was definitely generated by a simple function over relevant words, but you still couldn’t find where it differs from human reasoning. Or maybe the actual implementation would be so complex that humans couldn’t comprehend it on a lower level than what we have now.
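To make the “simple function over relevant words” case concrete (a deliberately trivial sketch; the word lists and templates are invented for illustration, not a claim about how any real system works):

```python
# A deliberately trivial "consciousness talk" generator: a simple function over
# relevant words that produces introspection-shaped sentences. The interesting
# hypothetical is a case where something like this matched human meta-problem
# reasoning and we still couldn't say where it differs.

import random

QUALIA_WORDS = ["redness", "pain", "the taste of coffee", "inner silence"]
TEMPLATES = [
    "I notice {q}, and it seems like more than just information processing.",
    "There is something it is like to experience {q}.",
    "I can't see how a physical description alone captures {q}.",
]

def introspection_report(seed=None):
    """Return one introspection-style sentence built from the word lists alone."""
    rng = random.Random(seed)
    return rng.choice(TEMPLATES).format(q=rng.choice(QUALIA_WORDS))

if __name__ == "__main__":
    for i in range(3):
        print(introspection_report(seed=i))
```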
The justification for pruning this neuron seems to me to be that if you can explain basically everything without using a dualistic view, it is so much simpler.
Yeah, but can you? Your story ended at stating the meta-problem, so until it’s actually solved, you can’t explain everything. So how did you actually check that you would be able to explain everything once it’s solved? Just stating the meta-problem of consciousness is like stating the meta-problem of why people talk about light and calling the idea of light “a virus”.
Sure, “everything is a cluster” or “everything is a list” is as right as “everything is emergent”. But what’s the actual justification for pruning that neuron? You can prune everything like that.
Great! This text by Yudkowsky has convinced me that the Philosophical Zombie thought experiment leads only to epiphenomenalism and must be avoided at all costs.
Do you mean that the original argument that uses zombies leads only to epiphenomenalism, or that if zombies were real that would mean consciousness is epiphenomenal, or what?
There is a weaker and maybe shorter version by Chalmers: https://consc.net/papers/panpsychism.pdf. The short version is that there is no way for you to non-accidentally know about the quantization state of your brain without that quantization being part of an easy problem: pretty much by definition, if you can just physically measure it, it’s easy and not mysterious.