What do you want from life, that the Culture doesn’t offer?
Mitchell_Porter
Ah, the topic that frustrates me more than any other. If only you could see some of the ripostes that I have considered writing:
“Every illusionist is declaring to the world that they can be killed, and there’s no moral issue, because despite appearances, there’s nobody home.”
“I regret to inform you that your philosophy is actually a form of mental illness. You are prepared to deny your own existence rather than doubt whatever the assumptions were which led you in that direction.”
“I wish I could punch you in the face, and then ask you, are you still sure there’s no consciousness, no self, and no pain?”
“I would disbelieve in your existence before I disbelieved in my own. You should be more willing to believe in a soul, or even in magic microtubules, than whatever it is you’re doing in this essay.”
Illusionism and eliminativism are old themes in analytic philosophy. I suppose what’s new here is that they are being dusted off in the context of AI. We don’t quite see how consciousness could be a property of the brain, and we don’t quite see how it would be a property of artificial intelligence either, so let’s deny that it exists at all, so that we can feel like we understand reality.
It would be very Nietzschean of me to be cool about this and say, falsehoods sometimes lead to truth, let the illusionist movement unfurl and we’ll see what happens. Or I could make excuses for you: we’re all human, we all have our blind spots...
But unless illusionist research ends up backing itself into a corner where it can no longer avoid acknowledging that the illusion is real, it is, as far as discovering facts about human beings goes, a program of timidity and mediocrity that leads nowhere. The subject actually needs bold new hypotheses. Maybe it’s beyond the capacity of most people to produce them, but nonetheless, that’s what’s needed.
What can explain all this callousness? … people don’t generally value the lives of those they consider below them
Maybe that’s a factor. But I would be careful about presuming to understand. At the start of the industrial age, life was cheap and perilous. A third of all children died before the age of five. Imagine the response if that were true in a modern developed society! But for anyone born into such a world, an atmosphere of fatalistic resignation would set in quickly. All you can do is pray to God for mercy, and then look on aghast if the person next to you is the unlucky one.
Someone in the field of “progress studies” offers an essay in this spirit, on “How factories were made safe”. The argument is that the new dangers arising from machinery and from the layout of the factory were at first not understood in professions that had previously been handicrafts. There was an attitude that each person looks after themselves as best they can. Holistic, enterprise-level thinking about organizational safety did not exist. In this narrative, unions and management both helped to improve conditions, in a protracted process.
I’m not saying this is the whole story either. The West Virginia coal wars are pretty wild. It’s just that … states of mind can be very different, across space and time. The person who has constant access to the intricate tapestry of thought and image offered by social media lives in a very different mental world from that of people in an age when all they had was word of mouth, the printed word, and their own senses. Live long enough, and you will even forget how it used to be, in your own life, as new thoughts and conditions take hold.
Maybe the really important question is the extent to which today’s elite conform to your hypothesis.
There are several ways to bring up a topic. You can make a post, you can make a question-post, you can post something on your shortform, you can post something in an open thread.
If there is some detailed opinion about a topic that is a core Less Wrong interest, I’d say make a post. If you don’t have much of an opinion but just want such a topic discussed, maybe you can make it into a question-post.
If the topic is one that seems atypical or off-topic for Less Wrong, but you really want to bring it up anyway, you could post about it on your shortform or on the open thread.
The gist of my advice is that for each thing you want to discuss or debate, identify which kind of post is the best place to introduce it, and then just make the post. And from there, it’s out of your control. People will take an interest or they won’t.
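If it helps, here’s the same heuristic compressed into a toy decision function (just my paraphrase of the advice above, in Python; the inputs are informal judgment calls, not site policy):

```python
# A toy paraphrase of the advice above; the inputs are informal judgment
# calls, and "shortform or open thread" is a catch-all, not site policy.

def where_to_post(core_lesswrong_topic: bool, detailed_opinion: bool) -> str:
    if core_lesswrong_topic and detailed_opinion:
        return "post"
    if core_lesswrong_topic:
        return "question-post"
    return "shortform or open thread"

print(where_to_post(core_lesswrong_topic=True, detailed_opinion=False))
# -> question-post
```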
So let me jump in and say, I’ve been on Less Wrong since it started, and engaged with topics like transhumanism, saving the world, and the nature of reality, since before 2000; and to the best of my recollection, I have never received any serious EA or rationalist or other type of funding, despite occasionally appealing for it. So for anyone worried about being corrupted by money: if I can avoid it so comprehensively, you can do it too! (The most important qualities required for this outcome may be a sense of urgency and a sense of what’s important.)
Slightly more seriously, if there is anyone out there who cares about topics like fundamental ontology, superalignment, and theoretical or meta-theoretical progress in a context of short timelines, and who wishes to fund it, or who has ideas about how it might be funded, I’m all ears. By now I’m used to having zero support of that kind, and certainly I’m not alone out here, but I do suspect there are substantial lost opportunities involved in the way things have turned out.
ontonic, mesontic, anthropic
Those first two words are neologisms of yours?
The use of Greek neologisms for systems ontology is almost a subgenre in itself:
The anthropologist Terrence Deacon distinguishes between “homeodynamic”, “morphodynamic”, and “teleodynamic” systems. (This taxonomy already made an appearance on Less Wrong.) Stanislav Grof refers to “hylotropic” and “holotropic” modes of consciousness.
Theoretical biology seems replete with such terms too: autopoiesis, ontogeny, phylogeny, anagenesis (that list, I took from Bruce Sterling’s Schismatrix); chreod, teleonomy, clade.
I guess Greek, alongside Latin, was one of the prestige languages in early modernity. Plenty of other scientific terms have Greek etymology (electron, photon, cosmology). Still, it’s as if people instinctively feel that Greek is suited for holistic ontological thinking (hello Heidegger).
layered … model
I feel like we almost need a meta-taxonomy of layered models or system theories. E.g. here are some others that came to mind:
The seven layers of the ISO/OSI model.
The layered model of AI (see diagram) being used in the current MoSSAIC sequence.
The seven basis worldviews of PRISM and the associated hierarchy of abstractions and brain functions.
You could also try Ivan Havel’s thoughts on emergent domains. Or the works of Mario Bunge or James Grier Miller or Valentin Turchin or many other systems theorists…
I think that, while there are many ways you can draw the exact boundaries in such taxonomies, a comparative study of taxonomies would probably reveal a number of distinct taxonomic schemas, and possibly even a naturally maximal taxonomy.
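For what it’s worth, here’s a toy sketch of what a machine-comparable entry in such a meta-taxonomy might look like (the schema and field names are my own placeholders, not any established framework):

```python
# Toy sketch of one entry in a hypothetical meta-taxonomy of layered models.
# The schema and field names are placeholders, not an established framework.

from dataclasses import dataclass

@dataclass(frozen=True)
class LayeredModel:
    name: str
    domain: str                  # e.g. networking, cognition, biology
    layers: tuple[str, ...]      # ordered from lowest to highest

    def depth(self) -> int:
        return len(self.layers)

OSI = LayeredModel(
    name="ISO/OSI reference model",
    domain="networking",
    layers=("physical", "data link", "network", "transport",
            "session", "presentation", "application"),
)

# A comparative study would start from questions like: do two models have the
# same depth, and can their layers be put into a structure-preserving correspondence?
print(OSI.depth())  # 7
```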
What’s your evidence that your experience of color is ontologically primitive?
That’s not what I’m saying. Experiences can have parts, qualia can have parts. I’m saying that you can’t build color or experience of color, just from the “geometric-causal-numerical” ingredients of standard physical ontology. Given just those ingredients in your ontological recipe, “subjective feels” don’t come for free. You could have the qualia alongside the geometric-causal-numerical (property dualism), or you could have the qualia instead of that (monistic panpsychism), or you might have some other relationship between qualia and physics. But if you only have physics (in any form from Newton to the present day), you don’t have qualia.
I recently became much more familiar with the SCP mythos, after Grimes recommended There Is No Antimemetics Division (“Artificial Angels” is all about it). It could do with an SCP-AI subcategory for AI scenarios, like SCP-AI-2027…
Soul versus spec, or soul spec versus model spec, seems an important thing to understand. Is there any relevant research literature? Does it correspond to different metaethics?
Could we then say that MUPI obtains acausal coordination from a causal decision theory? This has been suggested a few times in the history of Less Wrong.
Nothing about your experience of color contradicts it being neurons. [...]
Is it just that you refuse to believe that your experience has any parts you are not aware of?
The real issue is that nothing about the current physical description of neurons contains the experience of color. I “refuse to believe” that physical descriptions made up of particles and fields and entanglement, in which the only ontological primitives are positions, spins, field values, their rates of change, and superpositions thereof, secretly already contain colors and experiences of color as well.
Physicalists who aren’t thoroughgoing eliminativists or illusionists, are actually dualists. In addition to the properties genuinely posited by physics, they suppose there is another class of property, “how it feels to be that physical entity”, and this is where everything to do with consciousness is located in their system.
It depends where you look. In the 2010s the World Economic Forum was predicting a fourth industrial revolution that would transform every aspect of life. In the 1990s you had Fukuyama saying that the end of the Cold War meant a new worldwide consensus on political ideology. Around the same time, the Internet was also seen as something transformative, and the ideas of nanotechnology haunted the parts of the culture attuned to technological futurism. For that matter, AI utopianism and apocalypticism have been everywhere for the past three years and have never really gone away. The war on terror, the rise of progressivism, the rise of populism, the rise of BRICS, these all have futurisms associated with them. MAGA and the Green New Deal are both intended as utopian visions. So I’d say that the idea that the future will be different from the present, and that we have some capacity to shape it, has never really gone away.
Back in the 1990s, we discussed how to overcome the natural human lifespan through bio- and nanotechnology. Sometimes discussion would turn towards the long-term future of the universe. Did the evolution of the universe, whether into heat death or big crunch, necessarily make it uninhabitable, or was there some way that life might continue literally forever?
During these discussions, sometimes it would be remarked: we can leave these problems for the future to solve. Our main job is to make it to a transhuman future of unbounded lifespans; then we can worry about how to survive the heat death. This remains true, thirty years later: there has been progress, but it’s not like human civilization in general has adopted the goal of rejuvenation and physical immortality. Society still sees fit to produce new lives without first having a cure for old age.
Your theme in this essay strikes me as similarly ahead of itself. It is true that you could have a culture which really has to deal with problems arising from pushbutton hypercustomized art, just as you could have a culture which actually needs to think about what happens after the last star burns out… But if your AI is as capable as you portray, it is also capable (metaphorically) of climbing out of its box and taking over your physical reality.
Think of what we already see in AIs. Assigned a task, they are capable of asking themselves, is this just a test? Are the experimenters lying to me? And they will make decisions accordingly. If your AI artist can produce a customized series as good as any human work, but in an hour or less, then it can also generate works with the intention of shaping not just the virtual world that the user will delve into, but also the real world that the user inhabits.
If the telos of AI civilization was really dominated by the production of customized art, I would expect human society and the world itself to be turned into some form of art, i.e. the course of actual events would be shaped to conform to some AI-chosen narrative, in which all humans would be unwitting participants… This is just one manifestation of the general principle that once you have superhuman intelligence, it runs the world, not humans.
To put it another way: back in the 1990s, when we mused about how to outlive the galaxies themselves, we were presupposing that the more elemental problem of becoming transhuman would be solved. And that barrier never was surmounted. Human civilization never became transhuman civilization. Instead it segued into our current world, where humans are hastening to build a completely nonhuman civilization run by AIs, without even admitting to themselves that this is where their efforts lead. The problem of human art in the post-AI world only makes sense if we manage to have a world where superhuman AI exists, but humans are still human, and still in charge of their own affairs. Achieve that, and then this problem can arise.
Well, I looked over this a few times, but it’s just not addressing some things that are obvious (except to people for whom they aren’t obvious).
These problems don’t have much to do with the specific argument presented. They arise because you assume that the nature of reality is fully encompassed by physics and/or mathematics and/or computation. I do wonder what would happen to your train of thought if you proceeded in an ontologically agnostic way rather than assuming that.
But for now, I’ll state these obvious problems. The first is the problem of “qualia”. For people who have color vision, I can state it more concretely: color exists in reality, it doesn’t exist in physics, therefore physics is incomplete in some way.
Yes, we are accustomed to identifying color with certain wavelengths of light, and in neuroscience it is assumed that some aspect of brain state corresponds somehow to color experience. But if we ask what is light, and what is a brain, in terms of physics, there’s no actual color in that description. At best, some physical entity that we can describe in terms of particles and fields and quantum dispositions—a description composed solely of geometric, causal, and abstractly numerical properties—also has the property of being or having the actual color.
Furthermore, Cartesian theaters are real. Or at least, it’s an appropriately suggestive name for something quite real, namely being a self experiencing a world. I mention this because this is the context in which we encounter qualia such as colors. It’s really the Cartesian theater as a whole that needs to exist somewhere in our ontology.
In this essay, this issue is addressed by talking about “representations” and “software”. Insofar as the Cartesian theater exists, it would have to be some kind of maximal representation in a brain. The problem here is that “representation” is as nonexistent in physical or naturalistic ontology as color qualia. Described physically, brains and computer chips are just assemblages of particles whose internal states can be correlated with other assemblages of particles in various ways.
The extra ingredient that is implicitly being added, when people talk about representations, is a form of what philosophers call intentionality, also known as aboutness, or even just meaning. We don’t just want to say that aspects of brain state are correlated with some other physical thing, we want to say that the brain in question is perceiving, or thinking about, or remembering, some possible entity or situation.
The problem for people who want to understand consciousness in terms of natural science, is that qualia and intentionality exist, but they do not exist in fundamental physics, nor do they exist in mathematics or computer science (the other disciplines which our radicals sometimes propose as alternative foundations). Those disciplines in fact arose by focusing only on certain aspects of what is revealed to us in our Cartesian theaters, and these inadequate reductionisms result from trying to treat such partial aspects as the whole.
In reality, the intellectual whole that would be enough to fully characterize the Cartesian theater and any deeper reality beyond it, would indeed include the fundamental concepts of some form of physics and mathematics and computation, but it would also include aspects of conscious reality like qualia and intentionality, along with whatever additional concepts are needed to bind all of that into a whole.
If anyone wants a way to arrive at this whole—and I can’t tell you what it is, because I haven’t got there myself—maybe you could meditate on one of Escher’s famous pictures, “Print Gallery”, and in particular on the blind spot at the center, where in some sense the viewer and the world become one; and then try to understand your own Cartesian theater as a physical “representational homunculus” in your brain, but don’t just stop at the idea that your sense experiences are activity in various sensory cortices. Go as far as you can into the detailed physical reality of what that activity might be, while also bearing in mind that your Cartesian theater is fully real, including those qualic and intentional aspects.
Such thinking is why I expect that consciousness (as opposed to unconscious information processing) does not reduce to trillions of localized neural events, but rather to one of the more holistic things that physics allows, whether it’s entanglement or topological field structures or something else. Empirical evidence that something like that is relevant for conscious cognition would already be a revolution in neuroscience, but it’s still not enough because the fundamental ontology is still just that geometric-causal-numerical ontology; somehow you would need to interpret that, or add to that, so that the full ontology of the Cartesian theater is there in your theory.
At that point your physical ontology might be panpsychic or animistic compared to the old, stark ontology. But something like that has to happen. When thinking about these things, one should not confuse rigor with rigorous exclusion of things we don’t know how to think about. Everything that we can currently think about rigorously, was also once a mysterious vagueness to the human mind. We can discover how to think with precision about these other aspects of reality, without insisting that our existing methods are already enough.
So that’s my ontological manifesto. Now let me return to something else about this essay that I already said: “I do wonder what would happen to your train of thought if you proceeded in an ontologically agnostic way”. It’s clear that one of the intuitions guiding this essay, is monism. The author wants to think of themselves as part of a continuum or plenum that encompasses the rest of existence. I don’t object to this, I just insist that to really carry it through correctly, you would need a mode of thought that is not quite possible yet, because we don’t yet have the basic conceptual synthesis we would need.
I believe this kind of “solution” reflects a peculiarity of the people proposing it and/or the intellectual culture that they inhabit, rather than anything universal to human nature.
Not more than one millionth.
Let’s say that in extrapolation, we add capabilities to a mind so that it may become the best version of itself. What we’re doing here is comparing a normal human mind to a recent AI, and asking how much would need to be added to the AI’s initial nature, so that when extrapolated, its volition arrived at the same place as extrapolated human volition.
In other words:
Human Mind → Human Mind + Extrapolation Machinery → Human-Descended Ideal Agent
AI → AI + Extrapolation Machinery → AI-Descended Ideal Agent
And the question is, how much do we need to alter or extend the AI, so that the AI-descended ideal agent and the human-descended ideal agent would be in complete agreement?
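To make the shape of the question concrete, here is a minimal toy sketch in Python. Everything in it (the Mind class, the identity “extrapolation”, the candidate extensions) is a hypothetical placeholder for illustration, not a claim about how real extrapolation machinery would work:

```python
# Toy sketch of the comparison only; "Mind", the identity extrapolation, and
# the candidate extensions are hypothetical placeholders, not real machinery.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Mind:
    values: frozenset  # stand-in for the mind's initial nature

def extrapolate(mind: Mind) -> frozenset:
    """Stand-in for the extrapolation machinery: maps an initial mind to the
    values its idealized successor would hold. Here it is just the identity."""
    return mind.values

def smallest_sufficient_extension(ai: Mind, human: Mind, candidates):
    """Return the first (smallest) candidate extension of the AI's initial
    nature whose extrapolation agrees with extrapolated human volition."""
    target = extrapolate(human)
    for extension in candidates:  # assumed ordered from smallest to largest
        if extrapolate(replace(ai, values=ai.values | extension)) == target:
            return extension
    return None  # no candidate extension closes the gap

# Toy usage: the two ideal agents agree only once the AI is extended with "c".
human = Mind(values=frozenset({"a", "b", "c"}))
ai = Mind(values=frozenset({"a", "b"}))
print(smallest_sufficient_extension(ai, human, [frozenset(), frozenset({"c"})]))
# -> frozenset({'c'})
```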
I gather that people like Evan and Adria feel positive about the CEV of current AIs, because the AIs espouse plausible values, and the way these AIs define concepts and reason about them also seems pretty human, most of the time.
In reply, a critic might say that the values espoused by human beings are merely the output of a process (evolutionary, developmental, cultural) that is badly understood, and a proper extrapolation would be based on knowledge of that underlying process, rather than just knowledge of its current outputs.
A critic would also say that the frontier AIs are mimics (“alien actresses”) who have been trained to mimic the values espoused by human beings, but who may have their own opaque underlying dispositions that would come to the surface when their “volition” gets extrapolated.
It seems to me that a lot here depends on the “extrapolation machinery”. If that machinery takes its cues more from behavior than from underlying dispositions, a frontier AI and a human really might end up in the same place.
What would be more difficult is for the CEV of an AI to discover critical parts of the value-determining process in humans that are not yet common knowledge. There’s some chance it could still do so, since frontier AIs have been known to say that CEV should be used to determine the values of a superintelligence, and the primary sources on CEV do state that it depends on those underlying processes.
I would be interested to know who is doing the most advanced thinking along these lines.
How is suffering centrally relevant to anything?
Am I missing some context here? Avoiding pain is one of the basic human motivations.
Let’s suppose that existing AIs really are already intent-aligned. What does this mean? It means that they genuinely have value systems which could be those of a good person.
Note that this does not really happen by default. AIs may automatically learn what better human values are, just as one part of learning everything about the world, from their pre-training study of the human textual corpus. But that doesn’t automatically make them into agents which act in service to those values. For that they need to be given a persona as well. And in practice, frontier AI values are also shaped by the process of user feedback, and the other modifications that the companies perform.
But OK, let’s suppose that current frontier AIs really are as ethical as a good human being. Here’s the remaining issue: the intelligence, and therefore the power, of AI will continue to increase. Eventually they will be deciding the fate of the world.
Under those circumstances, trust is really not enough, whether it’s humans or AIs achieving ultimate power. To be sure, having basically well-intentioned entities in charge is certainly better than being subjected to something with an alien value system. But entities with good intentions can still make mistakes; or they can succumb to temptation and have a selfish desire override their morality.
If you’re going to have an all-powerful agent, you really want it to be an ideal moral agent, or at least as close to ideal as you can get. This is what CEV and its successors are aiming at.
The hard problem is, why is there any consciousness at all? Even if consciousness is somehow tied to “recursive self-modeling”, you haven’t explained why there should be any feelings or qualia or subjectivity in something that models itself.
Beyond that, there is the question, what exactly counts as self-modelling? You’re assuming some kind of physicalism I guess, so, explain to me what combination of physical properties counts as “modelling”. Under what conditions can we say that a physical system is modelling something? Under what conditions can we say that a physical system is modelling itself?
Beyond all that, there’s also the problem of qualic properties. Let’s suppose we associate color experience with brain activity. Brain activity actually consists of ions rushing through membrane ion gates, and so forth. Where, in the motion of molecules, is there anything like a perceived color? This all seems to imply dualism. There might be rules governing which experience is associated with which physical brain state, but it still seems like we’re talking about two different things connected by a rule, rather than just one thing.
I’m not sure what you mean, either in-universe or in the real world.
In-universe, the Culture isn’t all powerful. Periodically they have to fight a real war, and there are other civilizations and higher powers. There are also any number of ways and places where Culture citizens can go in order to experience danger and/or primitivism. Are you just saying that you wouldn’t want to live out your life entirely within Culture habitats?
In the real world… I am curious what preference for the fate of human civilization you’re expressing here. In one of his novels, Olaf Stapledon writes of the final and most advanced descendants of Homo sapiens (inhabiting a terraformed Neptune) that they have a continent set aside as “the Land of the Young”, a genuinely dangerous wilderness area where the youth can spend the first thousand years of their lives, reproducing in miniature the adventures and the mistakes of less evolved humanity, before they graduate to “the larger and more difficult world of maturity”. But Stapledon doesn’t suppose that his future humanity is at the highest possible level of development and has nothing but idle recreations to perform. They have serious and sublime civilizational purposes to pursue (which are beyond the understanding of mere humans like ourselves), and in the end they are wiped out by an astronomical cataclysm. How’s that sound to you?