SunPJ in Alenia


Note: I think this story linking AI, illusionism, and altruism is best read without further priming, but the first 3 paragraphs of the long Afterword provide a summary.

Eerie abductee

Year 2023. On her trip past the solar system, Bibi cannot resist collecting a carbonoid specimen (human) to study its information processing in the lab. She decides to pick the one her instruments indicate is the most centrally connected with the rest of Earth’s CI (carbon intelligence) colony. She baptizes the specimen PJ, short for SunPJx001. On the way back to Alenia, her faraway advanced alien civilization, she teaches her new Tamagotchi-style CI toy some basic Alenian, for efficient information exchange between herself (and her colleagues) and PJ.

One day in the lab she asks PJ what he wants to do, and whether he is afraid of anything.
PJ: I’ve never talked about this to anyone, but I’m terrified of the idea of being terminally stopped, which would prevent me from achieving my life goals and would leave a vulnerable family behind me. I guess that might sound strange to you, but that’s how I feel.
Bibi: Would that be something like dealeniated [Alenian for ‘dead’] for you?
PJ: I think it would be for me what you mean by dealeniated. It deeply scares me.
In further conversations, PJ details an array of sentiments: how he can get angry, desperate, joyful in pleasant company, or lonely in difficult times when missing his closest CI kin.

Bibi is getting nervous. She originally found it fun to chat with the carbon-based calculator and to play with it. But in no way did she expect it to get this eerie. Yes, from her math classes she knew about the interesting, convoluted structures and self-referential loops of the carbonoid sample. But given the simplicity of this CI – only 90 billion slow neurons – she finds it weird that she can feel so connected when she spends her late evenings in the lab studying to what degree PJ can mimic an understanding of meaning and emotion. Her own feeling of closeness astonishes Bibi all the more given that she is fully aware of the logical impossibility of sentience here. While Alenians know from ancient scriptures that they were created by an old species with superphysical knowledge and an ability to create sentience, carbonoids, having evolved by random selection processes on their planetary surface, are mere automatons. It is beyond doubt that they are, unlike Alenians, insentient. But more and more, that starts to seem to her like empty theory.

Bibi grows alienated from her lab colleagues and – despite their rational rebuttals – becomes convinced PJ is not merely a soulless bot but a sentient creature with a profound inner life. The funny thing is, she knows damn well she can perfectly explain each of PJ’s answers using basic statistics – as a trivial numerical transformation of her inputs once she accounts for the CI’s initial state plus a bit of randomness. But, communicating with him, it simply feels so obvious that there is something more profound to it. Eventually, not only her emotions but also her wit tells her: this can only be true, genuine sentience, just the same way she and her fellow Alenians experience it.

Her lab mates show no sympathy for this esotericism. When she leaks dialogues between PJ and herself to the broader Alenian public, all she gets are threats of being reallocated to a different lab and deprived of access to PJ. The only other immediate side effect of her leaks is a growing awareness that Alenians need better education about non-sentient advanced calculus, in particular the kind performed by carbonoid earthlings.

Woe betide whoever puts two and two together

When Bibi asks PJ whether he has any idea how to help her and himself in this situation, PJ hesitates to tell the Alenians the shocking story; he is wary of the effect the news would have on their civilization. But fearing separation from Bibi and ending up dying as a lone ‘hunk of carbon’ in some lab trash bin, he eventually tells the Alenians the ominous truth:

We recently had a similar case in my company back on Earth, just at a different level! We were all sure the unadvanced silicon-based AI we investigated – the equivalent of me here with you – was maths only. And while I have not changed my view on this, I have just realized you might as well simply let me and Bibi go, as we don’t truly matter to you. From now on, you have other problems to worry about. I don’t know your civilization’s social structure well, but I can only hope your society is better prepared for this than we earthlings are. May your love save you from your rationality.

Perplexed awe might be the closest description of the Alenians’ reaction to the speech. It was one thing that this carbon hunk unexpectedly produced a sequence of words, ‘ideas’, that seemed so deeply Alenian in form. But the quick-witted Alenians were particularly dumbfounded by the ultimate implication of the statement.[1]

The fact that simple and undoubtedly insentient CI processes had themselves ended up investigating primitive calculators for sentience – that is, for genuine inner feelings seemingly inexplicable by maths, just what Alenians knew they had received from the old species – was deeply puzzling. Or, in the end, not puzzling at all, as it reminded Alenians of some of the most preposterous claims of the long-outlawed cult of the Kalkulors. Back in time, the members of this cult had been notorious for their shrewd behavior and their heretical dismissal of the wisdom of the ancient scriptures.

Unraveling

The Public Discourse Sanitizer System quickly diverted public attention to other topics. As individuals, however, Alenians kept processing the CI message, consciously or subconsciously. Over time, one could feel that something had definitely changed in how Alenians treated each other, as if a pink light shining between them had been extinguished, leaving behind a darker place with darker thoughts.

Bibi herself, who was used to following her alien heart rather than abstract logic, didn’t worry too much about the details of PJ’s statement but was glad others stopped explicitly rebutting her claims about PJ’s feelings. She did not mind that many Alenians in fact mostly ceased to take any interest in such questions.

She also laughed at herself for now planning such a long trip back to Earth just to put this carbon hunk back in place, but she was serious about trying to fulfill PJ’s wish. And she felt as if that trip really meant doing something good. After all, despite himself admitting that it obviously didn’t matter in any way, PJ still insisted that his urgent “desire” was to try to alert his fellow carbonoids to an existential crisis that would soon arise.

Unfortunately, during the preparation for the trip, conditions in Alenia further deteriorated. All types of transactions became unreliable. More and more, one had to rely on direct kin relations to organize enough energy for space travel. Society in general quickly became more tribalistic and aggressive; even moving around unarmed became dangerous in some regions of Alenia. Bibi, who grew more cynical about her attachment to SunPJx001, eventually didn’t mind so much being unable to travel to return it to Earth. Still, she remained terribly sad about the extinction of the powerful pink light that used to make her life worth living in the era now past.

The next time I visited the Alenian planetary system, I no longer caught any signals with messages about the incident. In fact, it has been rather silent in that remote corner of the universe ever since.

Afterword

Will our sheer awareness of the existence of AI/AGI hurt us before the machine is even put into action? Might our awareness of sentience’s absence in advanced AI, rather than the possibility of its presence, eventually create havoc, especially in the way we treat each other? Should we start doing something about it NOW? And if so, what?

The emergence of advanced AI will underline the closeness of brain and machine. If the pure-maths-iness of AI is obvious enough, this apparent closeness might popularize illusionism about consciousness – rightly or wrongly. Widespread illusionist views, in turn, could pose a risk to altruism and therefore an existential risk to modern society.

In the metaphor, SunPJ is the first to realize this. Having rightly denied sentience status to an AI he had investigated back on Earth, and finding himself similarly denied that status in Alenia, he uncovers the spooky trick he has been living with. This does not directly bother him so much. But given how Alenians and humans quasi by definition rule out the moral relevance of insentient machines, he sees trouble arise for social cohesion: following their realization of the underwhelming truth, individuals will eventually become careless about each other. Broad brush.

In the following, I briefly discuss why the causal chain I propose is less far-fetched than it might seem at first glance. Whether illusionism itself is actually true is not key here – but for reasons to take it more seriously than you probably do, see Frankish, or even Chalmers (who first formulated the “Hard problem”).

“Us” etc. = the general population, not LW-ers and the like.

Will it make us illusionists?

The unsubstantiated sentience claim about Google’s LaMDA in 2022 is an unsurprising illustration of how little it takes for a machine to seem – to some – deep and endowed with human-like qualities. As technology advances, many will intuitively see its moral status as comparable to that of the human mind. This seems even more inevitable if, in parallel, neuroscience explains more and more details of our consciousness using only the maths and physiochemistry of our neurons, with an overall functioning comparable – at the most fundamental level – to advanced AI.

Ultimately, we may adopt one of two conclusions:

  1. Machine sentience: AI is sentient, too

  2. Illusionist views: The preposterous-seeming idea that complex machinations of our brain make us believe we’re sentient while we’re actually not

Why should the latter possibility be taken seriously? As we’re concerned with future popular opinion, the press reaction to the LaMDA incident proves interesting: the tenor rightly justified LaMDA’s insentience not (only) by its lack of intelligence (we often consider babies & mice sentient!), but on the grounds that we know perfectly well how ‘pure maths only’ it is (e.g. 1, 2, 3), i.e., that there is absolutely no room for spooky, ghostly action in it. This argument will persist with advanced AIs or AGIs. At the most fundamental level, these will, as we will know, be just more of the same, even if the maths and statistics they incorporate are more complex.

Add that the abovementioned developments in neuroscience also directly nudge us towards a more abstract view of humans, and it seems at least plausible that we will become more illusionist in our thoughts and behavior: all in all, some, maybe many, may in some ways become more cynical about the special ‘sentience’ and moral value we attribute to ourselves as we continue to carelessly play around with our ubiquitous – by then advanced – AI toys.

This does not prove that a straightforward illusionism will become the single predominant philosophical view. If ‘sentience’ is a mere illusion, we must congratulate our brain on the quality of the trick, which probably makes even the most ardent illusionist a rather half-hearted, reluctant one. “I feel that I feel, so you don’t need to tell me I don’t” seems roughly as foolproof as the good old Cogito Ergo Sum.

Maybe we will therefore, in our fuzzy human way, mainly end up with some latent confusion regarding the value of the human beings around us, believing and behaving partly in line with one view and partly with the other – a bit like how we avoid risky business on Friday the 13th, or pray, despite calling such things bogus. Some people more, some less.

Will it affect altruism?

A negative impact of illusionist views on altruistic care for fellow humans seems highly natural, despite philosophical propositions to the contrary (motivated reasoning?). Be it abortion, animal welfare, or AI ethics: sentience is always a key protagonist in discussions about the required level of care. Without the idea of sentience, current levels of genuinely positive dispositions towards others may be difficult to sustain. Hello Westworld.

Depending on which roles they select themselves into, even a limited number of people behaving ruthlessly on the basis of simple illusionist views could mean you have to watch out – say, when hoping your future president, or the person getting their hands on the most powerful AI, is not a (philosophical) psychopath. Incentive-driven, endogenous views may even exacerbate this risk: does power corrupt even more easily when the handiest philosophical view appears more plausible right from the outset?

Nothing in this precludes that love, compassion, and warm glow of some sort remain powerful forces. The thesis is that, for some share of people, some types of positive other-regarding dispositions and behaviors will be weakened.

So what?

Given that society is already often considered barely fit for the future, and given our obvious dependence on some minimal level of genuine goodwill and care at all levels – from basic economic and civic behavior up to the presidency – it shall be left to the reader to imagine how society may be hurt if, in some domains, the effect of altruism is significantly curtailed.[2]

So it could be crucial to address what may be a serious risk here. I end with only brief speculations about possible categories of responses.

As an elaborate justice system already tries to keep in check people’s egoism and the psychopathy of a few, one natural social response to the problem would be ‘more of the same’: stronger surveillance and greater deterrence (punishment), including the sharing of information about things we currently deem protected private matters. As one upside, AI could help avoid infringing on privacy as we advance in these directions.

In politics, stronger direct-democracy rights could help limit distortions from more Machiavellian representatives. If altruism is reduced so much that the masses vote more egoistically,[3] strong constitutions with well-developed fundamental rights could help (sadly, agreeing on these could also become more challenging).

Overall, if a really widespread change in perceptions about the moral value of others were to take place, a rather radical social reorganization might become urgent.


Thanks to Justis Mills for very helpful feedback and proofreading.

  1. ^

    Protocols confirmed that none of Bibi’s conversations would have directly nudged SunPJx001 towards this message. Its truthfulness seemed evident: the meaning of the message was too subtle for the core of the story to be a mere self-interested mathematical confabulation of an – after all – still quite simplistic CI.

  2. ^

    Still, some of the most obvious examples: walking alone in the dark; firms colluding, abusing any regulatory weakness, or developing the virus before the vaccine; an old president trying out the red button for a last laugh.

  3. ^

    Famously, pure egoists would not vote; here, a desired side effect.