How old are you? A 50 year old man coming home after a hard day’s work and finding that a nap seems appealing is in a somewhat different situation from a 35 year old man doing the same thing. Age will, among other things, sap your energy, but whether it’s sapping yours to a surprising or worrying degree is an important question.
AlphaAndOmega
Very many things do shape your mind. More than we ever give credence to.
At the end of the day, I feel that the earring is unfairly maligned. I can see a case for it being benign, or even well-intentioned. Beyond that? We are, ultimately, arguing about a fictional artifact. We ought to judge the more realistic versions on their own merits.
Possibly, neither of us are in a position to judge with certainty. But I doubt that Anthropic is feeling particularly helpful, given their recent falling out with the US government.
>Have you read the Fun Theory sequence? Because I don’t think I have any insight that isn’t there. Mostly the 3D vs 4D distinction in “High Challenge”.
I have, even if it’s been a while. I took a quick look at High Challenge:
>To look at it another way, if we’re looking for a suitable long-run meaning of life, we should look for goals that are good to pursue and not just good to satisfy.
Focusing on this quote: I do not believe that I desire a perfectly frictionless life. I enjoy challenges, some of them at least. I do not wish to live a life on autopilot without ever making a real decision. But:
I’ve argued that there’s a strong case for the earring+human system simply shifting the computation/challenge/qualia from being entirely in the human to being mostly in the earring.
As I’ve said elsewhere, I wouldn’t want to wear the earring all the time, at least in theory. I’ve outlined the specific reasons in that comment.
There are many “challenges” in my life that I simply do not care about and wouldn’t miss if they were gone. Brushing my teeth. Studying for exams. Commuting to work. I would happily let some other entity without qualia do the boring stuff for me, so I can focus on what I enjoy.
I am quite confident that this is in-line with Yudkowsky, even if I haven’t re-read every single entry in the relevant Sequence. Please correct me if I have missed something.
To summarize: I don’t think that the earring is necessarily an autopilot without any degree of consciousness or qualia, and even if it is, I can see very good use cases for it.
>Stockfish can play chess better than humans without instantiating a human-player-cognition-engine that would instantiate a human-player-subjective-experience. Similarly, the Whispering Earring can play “How to Win at Life” (or any other goal) without instantiating a human-like-cognition-engine (that would, presumably, trigger human-like-subjective-experience).
Playing chess well is a very different goal from emulating an arbitrary human with extreme accuracy. Even the best LLMs (which are massively larger and more intensely trained than any chessbot I know of) can’t do it with perfect accuracy, even for writers well-represented in their training data.
From a pragmatic point of view:
A chessbot can be good at chess without mimicking the cognition of a human who is good at chess. It still models a game of chess, and interpretability work suggests this is true for LLMs playing chess as well (even if they’re bad at it; from memory, GPT-3.5 was actually the best, and later models represent a regression).
It is much harder to emulate a human. The only entities that can do so to a decent degree are LLMs, which are surprisingly humanlike (see Anthropic’s interpretability work, especially the emotion vectors). I think this is at least suggestive in favor of theories claiming that sufficiently intense mimicry of human cognitive output produces internal processes that are surprisingly close to their human analogues.
>I don’t know why you bring upload. The whispering earring is not an upload technology, never been presented as such, and it has never been the point of the thought experiment? And you seem to agree with that earlier?
The story does not plainly state that the earring is an upload machine. But as I’ve argued, I think that is an interpretation that is consistent with textual evidence. I’ve stressed that the machine can mimic the behaviors and desires of the human even after the cognitive architecture responsible (in the human brain) is vestigial. I also argued that this suggests that the earring is offloading computation to itself (and this could be benign, or at least not intentionally malicious). The earring needs that information to pass as the human, even if it’s a “better” version of the human.
In other words: it is suspiciously upload-like, in a manner that invites scrutiny.
>The whispering earring is not one of them: my current me does not want to be replaced by a learn-and-execute-reflexes-machine.
I am not arguing with your preferences. If this is a genuine disagreement on fundamental values, what can we both do but shrug and move on?
I can’t even (and don’t) say that the earring is a destructive mind-uploading machine. How can I? It’s a work of fiction. What I am saying is that the observed behavior is entirely consistent with that depiction, and could potentially conserve qualia and thought if substrate-independence is correct.
(Which I do think is the case, despite not having a solution to the hard problem of consciousness. Nobody does, we just have to deal with it for now.)
>So does Stockfish when playing chess. “Thinking instead of coming to the correct conclusion out of nowhere” is not that much of a constraining requirement, and therefore not much a defining thing of “what is valuable in human experience”.
Does Stockfish have qualia? I don’t know. I am agnostic on that front, albeit less agnostic (in a negative direction) than I would be for LLMs.
It doesn’t matter. I think it is very likely (given substrate independence) that qualia would persist after destructive uploading or in the combined earring system. I do not think that it is possible, even in principle, to mimic something as complex as human cognition with nigh perfect accuracy without qualia arising. In other words, I am very suspicious of arguments in favor of p-zombies.
>And this is exactly how I read your “I do not care about those qualia very much”. You’re struggling with happiness, depression, anxiety. I can empathize that, to you, getting out of that struggle is supremely important — it must look like the end game, the victory condition. But please do not do the equivalent of throwing “self-actualization” out of the window just because you’re hungry.
I am, at present, remarkably non-depressed/euthymic after treatment. I have the time and energy and desire to philosophize instead of lying in bed. At the end of the day, this is more of an academic concern; we do not have Whispering Earrings (and LLMs are too imperfect to count), and we don’t have any form of mind uploading I care for. What exactly am I throwing away, and what avenues of reasoning am I discarding with undue haste?
If you think that you have a better grasp on what constitutes eudaimonia for the majority, feel free to share. I would be immensely surprised if you understood me better than I understand myself, my mental illnesses are detrimental, but not to the point of incoherence or insanity.
I would prefer non-invasive mind uploading to the destructive form. I would prefer a system I can audit and that experts understand over the earring. I would prefer senolytic therapy over the earring. I would prefer the earring, as presented in the story, over death.
I am agnostic on whether LLMs have qualia. I am less agnostic on whether the earring+human system has qualia, but not by a massive margin. That’s why I floated my points as being suggestive and at the very least internally consistent, rather than authoritatively settled.
>The second question is whether, conditional on the earring-software being a person, that person is you. To this my answer is a simple “no”. The behavior of that person is very much not identical to yours, so it’s much worse than the jewelhead situation. And it might even differ from field to field. For example, if someone is a complete fuckup in all areas of life but a genius musician, the earring-version of him might be very similar to him in matters of music, but very different from him in all other matters. Or consider this: the earring upload of an all-around competent person will be very similar to them, but the earring-upload of an all-around fuckup will be very different. We can’t in good conscience say that these are the same fidelity. Therefore we have to consider fidelity, and the whole argument falls apart.
I will re-employ the stimulant analogy, since I know very well from the inside what it feels like.
Without stimulants, my ADHD makes me worse in most ways I care about: lazy, prone to procrastination, apt to give insufficient weight to my academic priorities. I don’t think I was a complete fuck-up before I was diagnosed and treated, but the stimulants have let me achieve very many things I would not have achieved without their assistance.
Barring minor to moderate discomfort and inconvenience that I willingly accept, this is a ridiculously positive trade. I am suitably grateful.
Behaviorally? Less time lying in bed. More time studying. Better focus and less prone to error. If you squint, that is a different person too, compared to who I was before or after the meds wear off. But I couldn’t care less, I want those improvements.
The text of the story, in no uncertain terms, states that the earring always follows the user’s desires or at least warns them when its advice might not align with them. To quote:
>It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest. If it would make you happiest to succeed at your work, it will tell you how best to complete it. If it would make you happiest to do a half-assed job at your work and then go home and spend the rest of the day in bed having vague sexual fantasies, the earring will tell you to do that. The earring is never wrong.
Emphasis added. In other words, the earring never does anything that a person wouldn’t endorse (at least not without some kind of warning). Obviously someone who is deeply lazy, akratic or “fucked up” will benefit more than someone who already has their life together. I do not see an issue with this, any more than I see an issue with the fact that people with ADHD get a proportionally larger boost from the drugs. The fuck-up wants to be better. So do I. And this isn’t some twisted, malign and superficial form of happiness either; the earring isn’t wireheading them or telling them to resort to addictive substances by default.
Good point. I’m kicking myself for not making a specific note of “harm reduction” in the context of the earring. I had it in mind at some point, but sadly my memory is far from eidetic.
We know:
The earring cares about the consent of the user to some degree.
The process of adaptation and ensuing atrophy is very gradual.
Ergo, it might be possible to use the earring “safely”, either by telling it to stick to auditory nudges in plain English, or by taking regular tolerance breaks. The earring might even be very happy to do that, or simply compliant.
The issue, as I see it, is that by the time your compliance becomes automatic or reflexive, a lot of your brain might be gone. Maybe. Let’s say I had a superintelligent angel (or Opus 6.0) sitting on my shoulder, telling me I shouldn’t be arguing with internet strangers while on my prescription meds (and should be studying instead). Would I listen to that perfectly sagacious advice? Uh… You can tell that I’m not even listening to myself.
>I believe that the problem with your interpretation that you’re not destroyed if there’s a model of you inside the earring, is that it’s not enough for a model to simply exist for it to be felt as life. If a scientist understands and predicts an ant, I don’t think it’s fair to say that the ant lives within the scientist. I might be persuaded that the ant still exists, but it certainly doesn’t live. So I think it’s all about agency after all.
I don’t think that the mere existence of a model equates to life either, in the sense that if I was cryopreserved right now (and could be easily revived), I would think I was in suspended animation instead of “living”, certainly not feeling things or qualia.
But the earring doesn’t just store a model, it must interrogate it, and possibly run it. If you believe that computation is strictly necessary for the emergence of consciousness, as I strongly do, then that’s fine. If it just keeps a copy of me in cold storage, especially when it moves to another user: fine too. At least there’s hope of revival and reinstantiation in the future.
Now, on the fidelity of simulation:
There is no human scientist on the planet who can simulate an ant with near perfect accuracy using just their own brain. I’m very confident of that. At best, they can make probabilistic arguments (for the sake of argument “the ant has a 90% chance of turning around when it detects pheromones from another species of ant”).
If they genuinely could predict an ant that behaves just like the real thing, with near 100% accuracy (minor error is fine by me), then I’d happily say that a high fidelity copy of the ant exists in their brain, albeit encoded in a manner very different to the original. This doesn’t really bother me, in the same way I care about what a digital image represents more than the file format.
In other words, there’s a spectrum of simulation/emulation that extends from useless to functionally indistinguishable from the real thing. The earring is far to the right, the primary difference being an improvement to the performance and wellbeing of the user with everything else preserved (and I’ve already excused the brain atrophy as having potentially reasonable explanations).
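To make the scientist-and-ant point concrete, here is a toy sketch (the function names and structure are my own illustration; only the 90% figure comes from the hypothetical above). A model that merely matches the ant’s marginal statistics agrees with the real ant far less often than a true emulation would:

```python
import random

random.seed(0)

def real_ant(pheromone_detected: bool) -> str:
    # Stand-in for the ant's true decision process: turns around on
    # foreign pheromones 90% of the time.
    if pheromone_detected and random.random() < 0.9:
        return "turn"
    return "continue"

def probabilistic_model(pheromone_detected: bool) -> str:
    # The scientist's best guess: same marginal statistics, but each
    # prediction is an independent coin flip, not the same computation.
    if pheromone_detected and random.random() < 0.9:
        return "turn"
    return "continue"

trials = 10_000
agree = sum(real_ant(True) == probabilistic_model(True) for _ in range(trials))
# Matching the statistics yields only ~82% per-event agreement
# (0.9*0.9 + 0.1*0.1); a faithful emulation would agree ~100% of the time.
print(f"agreement rate: {agree / trials:.2f}")
```

The gap between matching behavioral statistics and reproducing the decision process itself is exactly the gap between the scientist’s probabilistic model and the near-perfect fidelity the earring would need.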
>As a fellow ADHD and depression sufferer who struggles to be happy, I’d precommit to wear this earring to achieve a certain very modest level of financial stability and independence, and then to take if off and start living my own life. It seems useful for ending a losing streak, but not for winning—that just wouldn’t be fun.
You have my condolences. I wish I had an earring or an outright cure to offer you. Oh well, at least I know that the stimulants and certain… experimental antidepressants have been helpful for me. I do not like Ritalin at all, it makes me feel awful, but it’s a necessary evil.
I don’t know, and I’m not sure I have a reason to care.
Magnus Carlsen, probably:
Wants to win at chess
Cares about being known as the best human chess player
Does not wish to be caught cheating in a venue where the rules demand only unassisted human play.
Enjoys playing chess
So I wager he probably wouldn’t use Stockfish to replace himself entirely. But that is a consequence of personal and pragmatic reasoning, without bringing qualia into the picture.
What else do you expect me to say? I’ve never claimed to have a solution to the hard problem of consciousness, beyond alluding to my (sincere) belief that substrate-independence is likely true. I know of no good reason to believe that carbon atoms are strictly necessary for qualia.
If you want to play chess because you enjoy playing chess, be my guest. If you want to win at chess without caring about public opinion, use Stockfish.
If you believe (at least until proven otherwise on empirical grounds) that your personal continuity and identity can be preserved by destructive mind uploading, then go for it, as I have expressed interest in doing. If you don’t, well, I suppose you’ll have to look to other options, such as senolytic therapy.
>The earring isn’t equivalent to uploading or Egan’s jewel. It doesn’t make decisions that you would’ve made, it makes more effective decisions that achieve the same goals. Should something that achieves your goals better than you count as an upload of you? I’d say no.
I’ve already addressed this. To quote, with added emphasis:
>My point is that if the black-box outputs continue to look like the same person, only more competent and less akratic, the burden of proof has shifted. The conservative cannot simply point to tissue loss and say “obviously death.” He has to explain why biological implementation deserves moral privilege over functional continuity.
>This becomes clearest at the point of brain atrophy. The story says that the wearers’ neocortices have wasted away, while lower systems associated with reflexive action are hypertrophied. Most readers take this as the smoking gun. But I think I notice something embarrassing for that interpretation:
>If the neocortex, the part we usually associate with memory, abstraction, language, deliberation, and personality, has become vestigial, and yet the person continues to live an outwardly coherent human life, where exactly is the relevant information and computation happening? There are only two options. Either the story is not trying very hard to be coherent, in which case the horror depends on handwaving physiology. Or the earring is in fact storing, predicting, and running the higher-order structure that used to be carried by the now-atrophied brain. In that case, the story has (perhaps accidentally) described something much closer to a mind-upload or hybrid cognitive prosthesis than to a possession narrative.
Alternatively, in the original comment I left, I said:
>>It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest. If it would make you happiest to succeed at your work, it will tell you how best to complete it. If it would make you happiest to do a half-assed job at your work and then go home and spend the rest of the day in bed having vague sexual fantasies, the earring will tell you to do that. The earring is never wrong.
>and
>>At this point no further change occurs in the behavior of the earring. The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
>If, for the typical user and at the time of demise from natural causes, the earring is able to model their personality, desires, beliefs, ambitions and memory while:
>>the neocortexes had wasted away, and the bulk of their mass was an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action.
>Then where exactly is that information still stored? The only viable option is within the earring itself.
>That is why I think it is consistent (or at least not obviously incorrect) for me to claim that the earring both transfers and augments your consciousness/mind instead of simply following your reward function while not preserving anything else that matters.
Emphasis added. If you want the actual link:
https://www.lesswrong.com/posts/cQkSh9b48WbTaiu2a?commentId=DWBz5tbDHH4jnkZhD
>And the fact that LLMs didn’t notice this obvious counterargument and instead went along with you to pad the essay to such length should teach you something important about LLMs, too. They aren’t the earring, or a substitute for your writing effort. They make worse writing decisions than you would’ve made if you took the time yourself.
Given that this is factually incorrect? You might want to reconsider your advice. I am not so lazy that I throw random comments into an LLM and get it to do all the work for me without checking the results. The primary benefit the AI provided was nudging me to specifically explain Parfit’s stance (which I am already familiar with and largely endorse).
Just to be maximally specific: I am arguing that, on the basis of facts established in-universe (and with reference to real neuroscience and philosophy), it is not obviously false that the earring works in the same manner as the jewel or a typical destructive mind-uploading scheme (as depicted in science fiction or speculative engineering).
I’ve already mentioned that I do not see the combined earring+me system acting more intelligently (or at least with less akrasia than I demonstrate) as proof that I am no longer me in the ways I care about. A wheelchair can make a partially paralyzed man more mobile while further contributing to the atrophy of his legs. What of it? Show me what principle has been violated. If the wheelchair suddenly decided to speed off over a cliff, I think we might both agree that that’s not a desirable outcome.
>the earring specifically makes better decisions than i would. that’s a difference in process, not substrate.
So do my Ritalin pills, in the sense that I usually make better decisions or at least act on them. I do not see the problem here, as long as the “better” decisions are the kind of decision I want to make, or would have made if I didn’t have ADHD.
>reasoning under uncertainty, planning, struggling to be happy—these are qualia the earring must obviate, if it is to do anything at all. you may elect to drop them, but do not pretend they are not part of you.
When did I pretend otherwise? I simply do not care about those qualia very much, at least not for their own sake. Also, it is unclear whether those qualia cease to exist at all, or are partially or completely offloaded to the earring+human system. An actual AGI/ASI must probably still think about things, instead of magically coming to the correct conclusion out of nowhere. Von Neumann might not have had to try as hard to solve simple arithmetic in his head as the average man, but I doubt he missed the struggle of adding 1+1 to get 2.
I am quite literally a man who struggles to be happy. I have clinical depression as well as the ADHD. I care for neither. I seek treatment, and not the “struggle” for happiness. If I get a melanoma, I would get it excised. An ugly or potentially malignant mole is not something I wish to retain. I am a transhumanist, so I am willing to give up quite a lot more, assuming I come out ahead on the metrics I care about.
>The story says neither that the people who use it change personality, nor that their personality stays the same. It’s agnostic on this point; it only says that their lives are “unusually successful”.
I would argue otherwise:
>It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest. If it would make you happiest to succeed at your work, it will tell you how best to complete it. If it would make you happiest to do a half-assed job at your work and then go home and spend the rest of the day in bed having vague sexual fantasies, the earring will tell you to do that. The earring is never wrong.
This is, at least to me, some evidence that personality is preserved.
To anchor on something more concrete:
I have ADHD, for which I use prescription stimulants. The drugs reduce my akrasia and improve my focus. They do not increase my “raw” intelligence in a real sense, which is a claim largely borne out by research into the class. But while I am on them, I perform better. I am more focused, more willing to pursue my own goals, more able to pursue said goals, less lazy, less distractible.
I would not consider this to be a “real” change in personality. Even when I am off the medication, I think the version of me that’s on them is better internally-aligned, and more capable with regards to achieving the goals that both versions of me care about. Curing someone of locked-in syndrome makes them happier and more capable, even if a completely naive assessment of “revealed preference” would suggest that what they would “like” to do most of the time is lie in bed.
Do I suddenly decide to become a world-leading sufi mystic using the additional focus given to me? No. Do I start abusing heroin because I know that will make me happier? Not that I know of.
Similarly, I think that the earring uses a far more subtle definition of “happiness” (wrt the user) than what the average person defines as happiness. It doesn’t suddenly make them wirehead or start taking recreational doses of whatever opioid equivalent the setting has. It genuinely tries to fulfill their inner desires in a manner that, as far as I can tell, mostly seems to implement their CEV.
As far as the story shows, it fulfills only those goals and offers only advice that would maximize them. When it knows that there is some hidden downside to the process (such as the brain atrophy) it tells the user that’s the case.
>When I read it, I took for granted that the Earring has some kind of consciousness or personality separate from the human using it. We know this because it’s able to speak in its “own voice” to Kadmi Rachumion. Plus, as you’re correct to observe, the story doesn’t make much sense as a parable if the Earring just extrapolates its character from the human it’s attached to.
I don’t see it as being a deal-breaker that the earring has its own intelligence or mind. It seems rather well aligned to the user’s goals (or at least it could be way worse). As I’ve explicitly said, while the earring (+human system) might not be just a more intelligent and less akratic version of the human itself, the fact that it acts just like the human (or a better version of the human, according to the human’s own assessment) necessarily implies that it has a very high fidelity model of the human. When the brain is no longer capable of running that model (the neocortex is vestigial by the time of death), the earring must have seamlessly transferred that information over to run as an internal emulation. What does it really matter if it also has its own mind, as long as that mind never does anything the user would disapprove of?
Also, as a matter of fact, I don’t particularly care what the parable intends or doesn’t intend to say. I am looking at it from a Watsonian perspective, and judging only on the observed facts. The wise sage who condemns the earring might well be wrong. Some parables break on closer scrutiny.
>There are lots of aspects of me that seem to have importance to my identity, but probably aren’t strictly necessary to make me happy. If I let the Whispering Earring take over my agency and make decisions for me, it will only remain ‘in-character’ as me for as long as doing so maximizes the reward function. If an out-of-character choice would make me happier, better for me to have that character instead, right?
I want to stress again that the earring does not subscribe to a naive version of happiness. We see no evidence of the earring ever breaking character. If it was superhumanly intelligent (which it absolutely seems to be), it could have trivially lied to the sage and assuaged his fears. It didn’t. There is little evidence in favor of it acting in a manner that fulfills its own intrinsic goals, and much against.
>Note also that even if the Whispering Earring doesn’t have its own personality, Claude definitely does. So to OP’s point, having Claude’s agency directly substitute for yours would very, very definitely change your outward personality
I do not dispute this. Since the OP tried to ground his concern on the story, I addressed what I see as the issues with doing so. The earring is not particularly Claude-like, other than being good at giving advice. I wouldn’t offload all my cognition to Claude, but I would take what it and other SOTA LLMs say seriously while following my own judgement. They are not superhuman to a degree where they deserve more trust, nor so clearly aligned with my goals. But I strongly suspect (but cannot prove, since it’s a work of fiction) that the earring deserves more trust, even if not maximal trust.
In a similar vein, very deep anesthesia can lead to “burst suppression”, where the brain becomes isoelectric. This is avoided in usual clinical or surgical practice, because there is no pragmatic benefit from going that far. In the elderly or unwell, it correlates with increased risk of post-operative delirium or transient cognitive impairment.
However, trials on healthy volunteers found no evidence of cognitive impairment or worsened recovery times.
https://pmc.ncbi.nlm.nih.gov/articles/PMC6676227/
>Volunteers demonstrated marked variability in multiple features of the suppressed EEG. In order to test the hypothesis that, for an individual subject, inclusion of features of suppression would improve accuracy of a model built to predict time of emergence, two types of models were constructed: one with a suppression-related feature included and one without. Contrary to our hypothesis, Akaike information criterion demonstrated that the addition of a suppression-related feature did not improve the ability of the model to predict time to emergence. Furthermore, the amounts of EEG suppression and decrements in cognitive task performance relative to pre-anaesthesia baseline were not significantly correlated.
Now, the brain and body were clearly on life support (as is necessary if you’re doing this with a warm body), but that’s another example of how the brain can bootstrap consciousness back after severe disruption and signaling failure. Then there’s the evidence from people who endure generalized seizures (though those can cause damage because of excitotoxicity, my point is that disruption is not death or necessarily severe damage).
Or even ECT, which is the closest my profession gets to turning the whole system off and on again, and which works remarkably well.
The earring doesn’t just model the reward function.
>It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest. If it would make you happiest to succeed at your work, it will tell you how best to complete it. If it would make you happiest to do a half-assed job at your work and then go home and spend the rest of the day in bed having vague sexual fantasies, the earring will tell you to do that. The earring is never wrong.
and
>At this point no further change occurs in the behavior of the earring. The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
If, for the typical user and at the time of demise from natural causes, the earring is able to model their personality, desires, beliefs, ambitions and memory while:
>the neocortexes had wasted away, and the bulk of their mass was an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action.
Then where exactly is that information still stored? The only viable option is within the earring itself.
That is why I think it is consistent (or at least not obviously incorrect) for me to claim that the earring both transfers and augments your consciousness/mind instead of simply following your reward function while not preserving anything else that matters.
If we want to reference real neuroscience:
Age related atrophy of the brain is normal/inevitable.
In long-term comatose patients, there is little evidence of disuse atrophy; the atrophic changes are usually explained as secondary pathology or as a consequence of the insult that led to the coma in the first place.
In-universe, the changes are described as gross and remarkable. This is not normal. Even if a person is remarkably given to not thinking about things or just going with the flow, such a change wouldn’t arise. Is that an unexplained conceit of the story? Probably. Is it strong evidence of something nefarious? Not in my opinion. The people who wear the earring remain psychologically normal and behave in a manner consistent with their previous selves (other than the general improvements in performance and goal achievement I’ve noted). That information must be in the earring.
I was thinking of that story too, but couldn’t remember the name. It’s been a while, but I seem to remember that what happened to the protagonist was an unprecedented failure-mode rather than the default outcome (with knowledge suppressed before society as a whole became aware of it).
That is the issue with most sci-fi: it's much harder to write an interesting story where things pan out as planned and the world is genuinely utopian. Cautionary tales are far more popular.
The earring does seem to want what’s best for the user, from the perspective of the user. If it was malicious, why even warn them in the first place?
I’ve never quite agreed with the original Whispering Earring, or at least with the conventionally accepted implications.
To lossily compress down a lot of intuition: if the earring ends up being able to mimic the behavior, goals, speech, personality etc of the original human, without breaking character (even when the original human brain is utterly atrophied) then the human is, in a sense I care about a lot, still alive. At the very least, we have a superhuman system that contains, within itself, a very high fidelity copy of the original human.
Now, this does get murkier. The earring doesn't simply act as an autopilot; it is better at achieving the user's goals and desires than the user themselves is. It is unclear whether this is because it is itself superhumanly intelligent and thus predicting what a "better" version of the original human would do, or because it is somehow gradually hijacking and supplanting the original persona.
I do not recall the story claiming that the people who used the earring suddenly changed drastically as people, or did things that they would not endorse on reflection. They were simply, from a black-boxed perspective, more charismatic, more intelligent, and less prone to akrasia. These are, IMO, good things.
What if the earring is something like a destructively scanning mind-upload machine? I wouldn't mind being destructively scanned, if there wasn't a less destructive or non-destructive option available (and if I thought I was likely to die before one showed up, and if I was very confident it worked as expected). You end up with a brain that isn't doing much braining (probably because it's been cut into tiny slices for scanning, or consumed by nanites), but I wouldn't perceive myself as losing something I cared about. I would live on, or at least a behaviorally indistinguishable version of me would live on, which is more than acceptable. Hell, if the process let me upgrade and extend my mind beyond biological constraints, I'd be all for it.
But just like the earring, there wouldn't be much of the original left. I suspect that the earring has implicitly absorbed the norms of a culture that doesn't consider destructive scanning continuity-preserving, which I personally think is incorrect, though not with anywhere near maximum certainty. Maybe that's why it tells you to take it off the very first time you put it on: it thinks that what it's doing is morally wrong, but will obey instructions to the contrary.
PS: It might be modeling the beliefs and desires of the user, and knows that if the process was put in those terms, the user would recoil for moral/metaphysical reasons. But in general, it does not seem to have much in the way of intrinsic motivation or desire for self-preservation. I think that constitutes decent evidence in favor of it not being a malicious entity. I suspect (but can’t prove, since this is a fictional story) that if someone like me put it on, it wouldn’t even tell me to take it off in the first place.
This sounds solid. That's evidence both that AI-generated music is getting better and that quite a bit of care and taste went into making this. Thank you. I don't think Rationalists should restrict our outreach to essays, fiction and the like.
I do feel like sharing an example of really good human-made Singularity music (I think Scott linked it first and I fell in love with it, it reliably gives me frisson):
Singularity by The Lisps
https://open.spotify.com/track/3ZjiEk0ndl063kalc2stx9?si=L7A-e1SaRvqcmAb-E_B8Vg
https://youtu.be/VGDhrH_uLUw
As I’ve complained in my latest essay, they don’t let you back into a clinical trial just because you ask nicely haha.
If you’re asking about the trial itself, it involves two doses separated by about 2 weeks. I noticed the antidepressant effect most strongly after the second one. I still greatly look forward to the conclusion of the trial (if it hasn’t happened already, I haven’t checked) and the release of results. I know, n=1, well beyond reasonable doubt, that it worked and was life-changing for me. I expect that the trial itself will have a positive outcome, and I'd say it's around 70% likely that we’ll see legally available psilocybin in clinics within 5 years, at least in the UK. If that does end up happening, I will insist on psilocybin as my drug of choice for myself, and probably recommend it for a lot of patients too.
You can look at my latest essay, but I’ll save you some time and tell you that I experimented with LSD (and did not do it as sensibly or as cautiously as I wish I had). I think it had a similar antidepressant effect for me, but this is confounded by a very positive change in my life which already had me feeling much better overall. No strong conclusions to be taken away from that one.
As someone in their late 20s, who suffers from depression and sheer exhaustion after a day of work, yeah, you have my sympathies. Fortunately I haven’t hit the nadir, which is dozing off during movies like my dad does. I hope you manage to find treatment that works for you.