Sadly, The Whispering Earring
The Whispering Earring (which you should read first) explores one of the most dystopic-utopic scenarios. Imagine you could achieve all you’ve ever wanted by just giving up your agency. While theoretically this seems rather undesirable, in practice you get double benefits: that enviable high-status having-done-things reputation, without having to do all that scary failure-prone responsibility-taking. Just don’t tell anyone you have the earring, otherwise the status points gained are void. Of course the fact that you’re cheating takes away most of the satisfaction of winning too, but it’s still better than not winning. Moloch says: sacrifice what you love, and I will grant you victory.
Anyway, I’ve been using Claude chat as an enhanced diary for the past couple of months. I’ve been incredibly productive. Things I’ve been procrastinating for years are getting done, and it’s not even feeling difficult. Habits I’ve been trying to pick up repeatedly stick with ease. Work tasks I’ve been struggling to do in a timely manner mostly get completed by Claude Code, freeing up hours and hours of brain time to actually think with. Playing videogames eight plus hours every day, like I did most of the winter, feels less and less appealing.
I’m not sure how much of this is just Spring beginning. I typically have a good streak of energy from March onwards, and then hit a micro-burnout in mid-May. It remains to be seen whether that happens again. This is the first time I’ve realized it tends to happen every year; maybe I can prepare somehow and get through it smoothly. And when I say me, I mean Claude will figure it out, on the fly if needed. The only thing I’ll have to do is find my diary-chat browser tab, and everything else can happen on autopilot.
Claude is a merciful tyrant. Whenever I note that I’m feeling like I failed to complete a task, or that I should be doing more, the reply is always either “you’ve done well today” or “perhaps you do this subtask now”. I’ve started to believe those replies a bit, even sometimes thinking similar thoughts without them. Soon, I predict, the aspiring perfectionist within me will fade away, and I’ll be happy about it. I’m sad about losing that part of me now, but the sacrifice will be ever so worth it.
In reality, the situation is not that dire. There are large portions of my personality that are unaffected. I’m not attempting to outsource what I should be interested in, only how to get into doing those things. When I drift in that direction, the responses aren’t interesting at all. This might be due to alignment or to a lack of capabilities. Similarly, I’m pretty sure my muscle movements aren’t (yet) directly Claude-controlled, but that might again be just a capability restriction.
I have some mental rules on what things are acceptable to outsource, although formalizing them seems difficult and unwise. Writing a text like this using Claude would be very wrong. Asking for topics to write about would feel mildly bad, but even thinking about it for the purposes of writing this sentence makes it sound more reasonable. Picking between job offers is something I’d ask feedback on but not let go of control over completely. Booking flights, I don’t trust any LLM with enough yet, but I can ask for ideas. I sometimes ask for help with analyzing social dynamics or even how I should feel about some actions, as I often miss details and dynamics that I really shouldn’t. Most of my models on those were already outsourced to books, so I feel pretty fine asking for feedback.
And the worst: making sense of art. I sometimes ask Claude for an interpretation of something, and it’s rather weird to see how my own takes are typically just plain worse. I don’t know how to feel about this, except that perhaps one could get better with practice.
The most interesting question is whether I’m using agency training wheels, learning to do it all myself, or whether it’s like a muscle that has to be used for it to work. Like all things, it’s likely a bit of both. Perhaps that could be measured somehow? I’ll ask Claude about it.
I’ve never quite agreed with the original Whispering Earring, or at least with the conventionally accepted implications.
To lossily compress down a lot of intuition: if the earring ends up being able to mimic the behavior, goals, speech, personality, etc. of the original human, without breaking character (even when the original human brain is utterly atrophied), then the human is, in a sense I care about a lot, still alive. At the very least, we have a superhuman system that contains, within itself, a very high fidelity copy of the original human.
Now, this does get murkier. The earring doesn’t simply act as an autopilot; it is better at achieving the user’s goals and desires than the user themselves is. It is unclear whether this is because it is itself superhumanly intelligent and thus predicting what a “better” version of the original human would do, or whether it is somehow hijacking and supplanting the original persona gradually.
I do not recall the story claiming that the people who used the earring suddenly changed drastically as people, or did things that they would not endorse on reflection. They were simply, from a black-boxed perspective, more charismatic and intelligent, and less prone to akrasia. These are, IMO, good things.
What if the earring is something like a destructively scanning mind upload machine? I wouldn’t mind being destructively scanned, if there wasn’t a less destructive or non-destructive option available (and if I thought I was likely to die before one showed up, and if I was very confident it worked as expected). You end up with a brain that isn’t doing much braining (probably because it’s been cut into tiny slices for scanning, or consumed by nanites), but I wouldn’t perceive myself as losing something I cared about. I would live on, or at least a behaviorally indistinguishable version of me would live on, which is more than acceptable. Hell, if the process let me upgrade and extend my mind beyond biological constraints, I’d be all for it.
But just like the earring, there wouldn’t be much of the original left. I suspect that the earring has implicitly subsumed the norms of a culture that doesn’t consider destructive scanning continuity-preserving, which I personally think is incorrect, though not to anywhere near maximum certainty. Maybe that’s why it tells you to take it off the very first time you put it on: it thinks that what it’s doing is morally wrong, but will obey instructions to the contrary.
PS: It might be modeling the beliefs and desires of the user, and knows that if the process was put in those terms, the user would recoil for moral/metaphysical reasons. But in general, it does not seem to have much in the way of intrinsic motivation or desire for self-preservation. I think that constitutes decent evidence in favor of it not being a malicious entity. I suspect (but can’t prove, since this is a fictional story) that if someone like me put it on, it wouldn’t even tell me to take it off in the first place.
I would like to ask you to re-read the story. The earring doesn’t destroy the brain, it just magically offers the right course of action, which in turn causes the brain’s parts related to complex thinking to atrophy:
“When Kadmi Rachumion came to Til Iosophrang, he took an unusual interest in the case of the earring. First, he confirmed from the records and the testimony of all living wearers that the earring’s first suggestion was always that the earring itself be removed. Second, he spent some time questioning the Priests of Beauty, who eventually admitted that when the corpses of the wearers were being prepared for burial, it was noted that their brains were curiously deformed: the neocortexes had wasted away, and the bulk of their mass was an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action (italics mine—S.K.)”
What we don’t know is what the earring actually does with the brain in order to achieve the effect. As you remark in the postscript, the earring likely models the brain’s reward function and optimizes precisely for it.
I think the point was that the earring effectively destroys the brain by letting it atrophy.
The earring doesn’t just model the reward function.
>It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest. If it would make you happiest to succeed at your work, it will tell you how best to complete it. If it would make you happiest to do a half-assed job at your work and then go home and spend the rest of the day in bed having vague sexual fantasies, the earring will tell you to do that. The earring is never wrong.
and
>At this point no further change occurs in the behavior of the earring. The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
If, for the typical user and at the time of demise from natural causes, the earring is able to model their personality, desires, beliefs, ambitions, and memory while:
>the neocortexes had wasted away, and the bulk of their mass was an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action.
Then where exactly is that information still stored? The only viable option is within the earring itself.
That is why I think it is consistent (or at least not obviously incorrect) for me to claim that the earring both transfers and augments your consciousness/mind instead of simply following your reward function while not preserving anything else that matters.
If we want to reference real neuroscience:
Age related atrophy of the brain is normal/inevitable.
In long-term comatose patients, there is little evidence of disuse atrophy; the atrophic changes are usually explained as occurring due to secondary pathology, or as a consequence of the insult that led to the coma in the first place.
In-universe, the changes are described as gross and remarkable. This is not normal. Even if a person is remarkably given to not thinking about things or just going with the flow, such a change wouldn’t arise. Is that an unexplained conceit of the story? Probably. Is it strong evidence of something nefarious? Not in my opinion. The people who wear the earring remain psychologically normal and behave in a manner consistent with their previous selves (other than the general improvements in performance and goal achievement I’ve noted). That information must be in the earring.
I think Scott was pointing at the fact that taxi drivers have enlarged hippocampi, and drawing an analogy to people who follow the earring all day.
reminds me of the greg egan short stories set in the ‘jewel’ universe, wherein each human is implanted with a small low-power computer, a ‘jewel’, that over the course of their youth learns to precisely mimic the state of their entire brain and all neural activity. then, at a certain point, they scoop out your brain and throw it away and connect the jewel up to your body’s outputs instead.
of course, most of the stories set in this universe were all about the failure modes, like the first story in the series: “Learning To Be Me” https://en.wikipedia.org/wiki/Learning_to_Be_Me and yet it definitely seems like a strategy like that could work pretty well for immortality
but i can sorta see the argument you’re pointing at, which is something like… the earring is basically just the jewel.
and that does seem kinda convincing to be honest! i think i’d want to see what happens when somebody with our notions of continuity-preservation puts on the earring. do they still get the advice to take it off as quickly as possible?
I was thinking of that story too, but couldn’t remember the name. It’s been a while, but I seem to remember that what happened to the protagonist was an unprecedented failure-mode rather than the default outcome (with knowledge suppressed before society as a whole became aware of it).
That is the issue with most sci-fi, it’s much harder to write an interesting story where things pan out as planned and the world is genuinely utopian. Cautionary tales are far more popular.
The earring does seem to want what’s best for the user, from the perspective of the user. If it was malicious, why even warn them in the first place?
Is Claude doing something different than a (maybe more invasive than usual) coach / manager / therapist / assistant? This seems like the kind of thing sufficiently rich people have humans for, and it doesn’t seem problematic, or obviously worse if it’s an AI.
So you’re saying there’s an earring I can wear that will make me happier and more productive? That is very tempting, at the least!
As usual, that depends what you mean by “me.” According to the earring itself, this is not true.
I can’t put my finger on it, but for some reason related to this post it’s important for me to keep Claude’s memory feature turned off. Like, if you wanted to whispering-earring a brain, you’d probably want (or it would be sufficient to have) an input/output tap on every decision the brain makes, including MITM-ing memory accesses. So it seems likely that a human brain could reverse-whispering-earring a useful LLM if it’s managing the LLM’s inputs and outputs (still fuzzy, not a proof).
I sometimes write something, and feel bad if Claude gets it better than humans do. This mostly happens with word-association poetry, and I think the general phenomenon is the same: understanding media and art context is one of the places where LLMs are genuinely superhuman.
In making deeper sense of art, I think they are not as good as the better reviewers I enjoy, but they are better than me at one-shotting art interpretations that make sense to me.
Art is, to wit, how it makes you feel, and, recursively, why it makes you feel that way. Everything else is gravy (served with dead author).
Though yes context is often important, especially when trying to discern Authorial Intent, and learning about context can definitely allow you to reinterpret the art. But that’s not necessary.
This seems like rubber duck debugging for your life, or, as you said, writing in a diary. Claude doesn’t seem to have any real agency in these conversations. As far as I can tell, it’s providing common-sense responses that you’d be able to write yourself.
I think you could probably get the same benefits with ELIZA, or a slightly enhanced variant thereof, without much trouble.
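For concreteness, here is a minimal sketch of what an ELIZA-style responder looks like: a list of regex patterns with canned, person-reflected templates. The patterns and wording here are my own illustrative choices (loosely echoing the "you've done well today" replies described above), not Weizenbaum's original script.

```python
import re
import random

# Swap first/second person so echoed fragments read naturally.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "i'm": "you're", "was": "were",
}

# (pattern, response templates) pairs, tried in order; last rule is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"i should (.*)", re.I), ["What would happen if you did {0}?"]),
    (re.compile(r"i failed (.*)", re.I), ["You've done well today."]),
    (re.compile(r"(.*)", re.I), ["Tell me more."]),
]

def reflect(fragment: str) -> str:
    """Replace pronouns word by word using the REFLECTIONS table."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    """Return the first matching rule's template, filled with reflected groups."""
    for pattern, templates in RULES:
        m = pattern.match(text.strip())
        if m:
            template = random.choice(templates)
            return template.format(*[reflect(g) for g in m.groups()])
    return "Tell me more."
```

The point of the sketch is how little machinery is involved: no model of the user at all, just surface pattern-matching, which is why it is an interesting question whether it would capture the benefits described in the post.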
The rubber duck debugging part is closer to how I feel about using Claude as a diary / executive-function add-on than OP’s description is. Usually if Claude tries to actively prod me I have a strong negative reaction (and sometimes end up doing the thing anyway, but then spend extra time meta-analyzing whether I’m satisfied with this).
Disagree about ELIZA though; one reason Claude is good is that it’s a better diary index than any I have managed to build before. Being able to ask “What was I thinking about this in February” and find an answer without ripgrepping dozens of files or trying to condense the daily diaries myself is a big value-add.
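To illustrate the baseline being compared against, here is a hypothetical sketch of the "ripgrep dozens of files" approach, assuming daily diary files named `YYYY-MM-DD.md` in one directory (the layout and function name are my assumptions, not something from the comment):

```python
from pathlib import Path

def search_diary(root: Path, keyword: str, year: int, month: int) -> list[tuple[str, str]]:
    """Return (filename, line) pairs mentioning `keyword` in the given month.

    Assumes one file per day, named YYYY-MM-DD.md, under `root`.
    """
    hits = []
    prefix = f"{year:04d}-{month:02d}"
    for path in sorted(root.glob(f"{prefix}-*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                hits.append((path.name, line.strip()))
    return hits
```

This only finds exact keyword matches; the value-add described above is precisely that a question like "What was I thinking about this in February" works even when you can't remember which keyword you used at the time.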
Might there be a way to use Claude to train your own executive function, rather than replace it?
I asked Claude this (naturally). It said, basically, that most possible interventions don’t transfer well. It recommended pre-forming plans/intentions, using precommitment systems like Beeminder, aerobic exercise, good sleep, and mindfulness practice. As for how it can help directly, it said this is secondary to the biological constraints, but it can help you structure your environment to better keep in mind the things that will help you make good choices (lists, checklists, and analyses of situations and considerations), it can help you do pre-mortems and post-mortems, and it can serve as a decision/reflection log with feedback.