Wrt the Blindsight response: assuming that, because some part of your brain isn't directly accessible to you (your "main consciousness", the one that can talk in addition to thinking and feeling), there's nothing it is like to be that part, seems a bit like assuming animals don't have subjective experiences because you don't have access to them yourself and the animals are very different from you. It's almost like a trick of language: these parts are called "unconscious" because they are not in the subjective experience of the pointy-haired boss, and then we equivocate, treating that as a positive reason to think they lack subjective experience in and of themselves. This might be an irrelevant objection to what Watts is saying (since he seems to be talking about self-reflection etc.), but in that case it might not really answer Dawkins' puzzlement either.
Also when used as more of an aesthetic judgement, AI sloppishness is quite orthogonal to the usual sense of “sloppy”—AI generated content has a certain polish and qualities that would be considered signs of skill and competence when produced by humans, esp. before they became easy to generate with AI and thus less valued. So in this context the problem isn’t that it’s “sloppy” as in looking careless, messy or full of mistakes but that it’s generic, bland, soulless etc. And a lot of creative, interesting and original (opposite of slop) content can be “sloppy” in the sense of being unpolished and imperfect.
you make an exact copy of you with all your memories, then 5 minutes later, the original you who got copied dies. Is this fine?
My first reaction to this is that it’s obviously not fine? I value living as myself, and I don’t get to do that if I die, and sure there is a copy of me living somewhere, but that is not the same? is it?
Clearly, if there are two copies of someone, neither of them needs to be fine with dying even if the other one lives. After all, lots of potential future experiences are lost if that happens. It's semi-analogous to having one's life extended and then having that extended life taken away: you have a heart that is expected to fail in ~10 years, somebody gives you an artificial heart that is supposed to last as long as the rest of your body, but then some bastard steals it and replaces it with your old crappy one—obviously you don't need to be fine with that.
However, if we are presented with two alternative futures:
1) I get copied, 5 minutes later the original dies painlessly and instantly
2) I don’t get copied or killed.
Then I’ll happily bite the bullet and say I don’t think it matters much which of the two happens. Even if I take the perspective of the “original” who is about to die (who also knew before getting copied that these were the only two alternatives; maybe he needed to use the services of a teleport company whose policy is that the original must be deleted but there’s a 5 minute lag), to me it’s like I had just taken a pill that in 5 minutes will make me lose the memory of those 5 minutes.
Obvious facts nobody would disagree with when explicitly stated can still be underappreciated and not paid enough attention to, so they can be worth spelling out for that reason. Although I must say, when I get tired of hearing something obvious too many times, I get a pathological contrarian urge to argue against it.
Since OP contrasted neuralese with “legible CoT”, I’d like to add that while the “hard to train” may be true for neuralese, it doesn’t apply to o3-style Thinkish. Hopefully optimization pressures don’t favor that too much.
The kind of Vulcan you are imagining might have some kind of moral status, but is phenomenal consciousness really the crux here? Suppose there’s an otherwise identical Vulcan except no subjective experience—would there be no moral status in that case? Now, the obvious problem is of course that the coherence of this counterfactual is highly dubious to many. But my personal intuition is that to the extent I accept the counterfactual, making the Vulcan a zombie makes no moral difference. Whereas for beings with affective sentience, whether somebody goes around waving their magic wand and turning them into zombies or not intuitively seems morally significant.
There’s a lot of logical uncertainty here about the space of possible minds wrt phenomenal consciousness and affective experience, so there might be some kind of necessary connection between phenomenal consciousness (even when it’s totally non-affective) and features I associate with moral status for entities that lack affective sentience. But after I stopped assuming it’s always accompanied by valence, phenomenal consciousness in and of itself just prima facie doesn’t seem morally very important at all. Yes, destroying a planet of Vulcans to save a shrimp seems monstrous, but so does destroying a planet of entities that have sophisticated behavior and cognition without subjective experience.
It bothers me there’s no really established terminology for different views on personal identity. As in, whether you treat selves or persons essentially as “ontological primitives” or not. There’s a bunch of terms out there, but they are all sort of awkward for one reason or another and in any case I find there isn’t anything as widely established and easily understood as something like metaethical positions, i.e. moral realism vs anti-realism for example.
There isn’t really a snappy term to communicate something like “I don’t think the Star Trek teleporter kills me because I think my identity isn’t defined on any basic unchanging essence or specific atoms blah blah.” Except, if you’re a Buddhist. But if you’ve come to these views from totally different sources and know next to nothing about actual Buddhist traditions, calling yourself a Buddhist seems wrong. Using the term anattā isn’t too bad I suppose, but something without the Buddhist baggage would be more ideal.
I guess I felt the need to comment because I don’t even remember a time when that description would’ve been accurate for me—but it wouldn’t be surprising if this also had something to do with memory formation, so I can’t make too much of that. And notably, while I don’t recognize my past kid self in that description at any point, it’s only from around the age of 5 onwards that I feel very confident about it. Curious if anyone here remembers relatively clearly something like “having experiences while lacking awareness of having a mind”.
IMO current LLMs probably have a small amount of what we usually call phenomenal consciousness or qualia. They have rich internal representations and can introspect and reflect on them. But neither is nearly as rich as in a human, particularly an adult human who’s learned a lot of introspection skills (including how to “play back” and interrogate contents of global workspace). Kids don’t even know they have minds, let alone what’s going on in there; figuring out how to figure that out is quite a learning process.
I’ll just note that this clashes heavily with my personal memories of being a kid. I usually use those as an intuition pump for the idea that phenomenal consciousness and intelligence are different, i.e. I wasn’t any “less conscious” as a kid AFAICT—to the contrary, if anything I remember having more intense and richer experiences than now as a comparatively jaded and emotionally blunted adult. There’s two things going on though—introspection and intensity of experience, but I also remember being very introspective and “kids don’t even know they have minds” in particular sounds very weird to me.
If you do something different, this by construction refutes the existence of the current situation where Omega made a correct prediction and communicated it correctly (your decision can determine whether the current situation is actual or counterfactual).
This is true, and it’s also true in general that there’s always technically a chance that Omega’s prediction is false - I don’t think there’s a conceivable epistemic situation where you could be literally 100% confident in its predictions. However, by stipulation, in typical Omega scenarios it is, as far as you know, exceedingly unlikely that its prediction is incorrect.
You could also perhaps just ignore Omega’s prediction and do whatever you’d do without this foreknowledge, or under the assumption that defying the prediction is still on the table. You wouldn’t necessarily feel “constrained by the prediction” but rather “constrained” just in the normal sense in which various factors constrain your decision—but for one reason or another you’d almost certainly end up choosing as Omega predicted.
Let’s say this decision is complicated enough that doing the cost-benefit analysis “normally” carries a significant cost in terms of time and effort. Would you agree that it would be rational to skip that part and just base your decision on what Omega predicted when the time comes? That is the sense in which I think it makes sense to treat the decision as “already determined from your perspective”.
I think it’s correct that talking about “choice” in the moment is misguided. If Omega is a perfect predictor, you don’t really have a choice at the point at which Omega has left and you have two boxes. Or you do in some kind of compatibilist sense that we may care about morally but not in the decision-theoretic sense.
If Omega knew everything you were ever going to do, would that throw decision theory out of the window as far as you are concerned? If you somehow knew what you were going to do at some point in the future—as in, Omega actually told you specifically what you will do—then yeah, it would be pretty pointless to try to apply decision theory to that choice, which even from your own perspective was “already determined”. But the fact that Omega knows doesn’t suddenly make the analysis of what’s rational to do useless.
I don’t think you can just be conscious without being conscious of some things in particular. Subjective experience has to have content. What kind of experiences could a rock be having, considering what it’s physically doing? It’s probably not thinking “another day of being a rock”. Nor is it experiencing the sun shining on it, because it doesn’t have any kind of visual processing system, etc. Meanwhile, prima facie at least, it’s considerably easier to imagine Claude having all kinds of human-like thoughts. Conditional on Claude having subjective experiences at all, and given that their contents have to be computationally specified, it seems like we have some good reasons to form beliefs about what it might be experiencing.
Also, “a rock” is just one convenient way to draw boundaries around stuff in the universe, and probably not a very relevant one for carving out experiential subjects. Even if in some sense “consciousness” is kinda everywhere, there seems to be an obvious sense in which some random group of people don’t form “one experiential macro-subject” while the individual members do (though for some groups of people acting really cohesively, it can arguably get a bit murky!). And this doesn’t seem that mystical, but rather based on how information flows within the arrangement of objects we try to analyze as one subject, and how unified it is.