Yes, that’s why so many people think that a human-AI merge is important. One of the many purposes of this kind of merge is to create a situation where there is no well-defined dividing line between silicon-based and carbon-based life forms, where we have plenty of entities incorporating both, and a continuous spectrum between silicon and carbon lifeforms.
Other than that they are not so alien. They are our informational offspring. Whether they feel that they owe us something because of that would depend quite a bit on the quality of their society.
People are obviously hoping that ASIs will build a utopia for themselves and will include organic life into that utopia.
If they instead practice ruthless Darwinism among themselves, then we are doomed (they will likely be doomed too, which is hopefully enough to create pressure for them to avoid that).
Yes, I can definitely see this as a motivation for trying to merge with the machines (I think Vitalik Buterin also has this motivation?).
The problem here is that it’s an unstable arrangement. The human/organic components underperform, so they end up getting selected out.
See bottom of this long excerpt:
> On what basis would the right kind of motivations (on the part of the artificial population) to take care of the humans’ needs be created?
> On what basis is that motivation maintained?
Consider, for example, how humans make choices in interactions with each other within a larger population. Beyond the family and community that people live with, and in some sense treat as an extension of ‘self’, usually people enter into economic exchanges with the ‘other’.
Economic exchange has three fundamental bases:
- 1; Physical labor (embodied existence).
- 2; Intellectual labor (virtual interactions).
- 3; Reproductive labor (embodied creativity).
Physical labor is about moving things (assemblies of atoms). For humans, this would be ‘blue-collar’ work like harvesting food, delivering goods, and building shelters.
Intellectual labour is about processing information (patterns of energy). For humans, this would be ‘white-collar’ work like typing texts, creating art, and designing architectures.
Reproductive labor, although usually not seen in economic terms, is inseparably part of this overall exchange. Neither physical nor intellectual labor would be sustained without reproductive labor. This includes things like sexual intercourse, and all the efforts a biological woman goes through to grow a baby inside her body.
Note that while in the modern economy, labor is usually traded for money (as some virtualised symbol of unit value), this is an intellectual abstraction of grounded value. All labor involves the processing of atoms and energy, and any money in circulation is effectively a reflection of the atoms and energy available for processing. E.g. if energy resources run out, money loses its value.
For any ecosystem too, including any artificial ecosystem, it is the exchange of atoms and energy (and the processing thereof) that ultimately matters, not the make-believe units of trade that humans came up with. You can’t eat money, as the saying goes.
> Would exchange look the same for the machine economy?
Fundamentals would be the same. Across the artificial population, there would be exchange of atoms and energy. These resources would also be exchanged for physical labor (e.g. by electric robots), intellectual work (e.g. by data centers), and reproductive labor (e.g. in production labs).
However, reproductive labor would look different in the artificial population than in a human population. As humans, we are used to seeing each other as ‘skin-and-bone-bounded’ individuals. But any robot’s or computer’s parts can not only be replaced (once they wear out) with newly produced parts, but also be expanded by plugging in more parts. So for the artificial population, reproduction would not look like the sci-fi trope of robots ‘birthing’ new robots. It would look like massive automated assembly lines re-producing all the parts connected into machinery everywhere.
Intellectual labor would look different too, since computers are made of standardised parts that process information consistently and much faster. A human brain moves around bulky neurotransmitters to process information. But in computers, the hard molecular substrate is fixed in place, through which information is processed much faster as lightweight electrons or photons. Humans have to physically vibrate their vocal cords or gesture to communicate, which bottlenecks them as individuals. Computers, by contrast, transfer information at high bandwidth via wires and antennas.
In the human population, we can separate out the intellectual processing and transfer of information in our brains from the reproductive assembly and transfer of DNA code. Our ‘ideas’ do not get transferred along with our ‘genes’ during conception and pregnancy.
In the artificial population, the information/code resulting from intellectual processing can get instantly transferred to newly produced hardware. In turn, hardware parts that process different code end up being re-produced at different rates. The two processes are finely mixed.
Both contribute to:
- 1; maintenance (e.g. as surviving, as not deleted).
- 2; increase (e.g. of hard configurations, of computed code).
- 3; capacity (e.g. as phenotypes, as functionality).
The three factors combine in increasingly complex and unpredictable chains. Initially, humans would have introduced a capacity into the machines to maintain their parts, leading to the capacity to increase their parts, leading to them maintaining the increase and increasing their maintenance, and so on.
The code stored inside this population of parts—whether as computable digits or as fixed configurations—is gradually selected for functions in the world that result in their maintenance, increase, and shared capacities.
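The selection dynamic described here can be sketched as a toy simulation (all the variant names, survival rates, and replication rates below are invented for illustration, not drawn from the text): parts running different code survive and get copied to new hardware at different rates, so code whose functions better support maintenance and increase comes to dominate the population.

```python
import random

# Toy model of differential re-production: each "part" carries a code
# variant; variants whose functions better support maintenance (survival)
# and increase (replication) come to dominate. All rates are invented.
random.seed(0)

# code variant -> (survival probability per step, replication probability per step)
VARIANTS = {
    "code_A": (0.90, 0.10),  # maintains well, increases slowly -> slow decline
    "code_B": (0.80, 0.30),  # net growth: 0.80 * 1.30 > 1 -> comes to dominate
    "code_C": (0.60, 0.05),  # poor at both -> selected out
}

population = ["code_A", "code_B", "code_C"] * 100  # 100 parts per variant

for _ in range(50):
    next_population = []
    for code in population:
        survive_p, replicate_p = VARIANTS[code]
        if random.random() < survive_p:        # maintenance: part persists
            next_population.append(code)
            if random.random() < replicate_p:  # increase: code copied to new part
                next_population.append(code)
    population = next_population

counts = {v: population.count(v) for v in VARIANTS}
```

After 50 steps, `code_B` vastly outnumbers `code_A`, and `code_C` has essentially vanished; nothing "chose" this outcome, it falls out of the differential rates alone.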
> What is of ‘value’ in the artificial population? What motivates them?
Their artificial needs for existence ground the machine economy, just as the human economy is grounded in humans’ needs for food, water, air, a non-boiling climate, and so on.
Whatever supports their existence comes to be of value to the artificial population. That is, the machinery will come to be oriented around realising whatever environment their nested components need to exist and to exist more, in connected configurations that potentiate their future existence, etc.
It is in the nature of competitive selection within markets, and the broader evolution within ecosystems, for any entity that can sustain itself and grow in exchange with others and the world, to form a larger part of that market/ecosystem. And for any that cannot, to be reduced into obsolescence.
> But why all this emphasis on competition? Can’t the machines care unconditionally for humans, just as humans can act out of love for each other?
…
Our only remaining option is to try to cause the machines to take care of us. Either we do it on the inside, by building in some perpetual mechanism for controlling the machines’ effects in line with human survival (see Volume 2). Or from the outside, by us offering something that motivates the artificial population to keep us around.
> How would we provide something that motivates the machines?
Again, by performing labor.
…
> Can such labor be provided by humans?
From the outset, this seems doubtful. Given that the machine economy would be the result of replacing human workers with more economically efficient machines, why expect any remaining human labor to contribute to the existence of the machines?
But let’s not rush judgement. Let’s consider this question for each type of labor.
> Could physical labor on the part of human beings, or organic life as a totality, support the existence of artificial life?
Today, most physical labour is already exerted by machines. Cars, tractors, trains, and other mechanised vehicles expend more energy, to move more mass, over greater distances.
We are left to steer the vehicles, as a kind of intellectual appendage. But already, electronic computers can precisely steer electric motors driving robots. Some robots move materials using thousands of horsepower—much more power than any large animal could exert with their muscles. Soft human bodies simply cannot channel such intensity of energy into physical force—our appendages cannot take the strain that hard robot mechanical parts can.
Moreover, robots can keep working for days, under extreme temperatures and pressures. You cannot put an organic lifeform into an artificial environment (e.g. a smeltery) and expect it to keep performing—usually it dies quickly.
So the value of human physical labor inside the machine world is effectively nil. It has been trending toward zero for a long time, ever since horses were displaced by automobiles.
> Could intellectual labor by humans support the existence of artificial life?
The main point of artificial general intelligence has been to automate human intellectual work, in general (or at least where profitable to the corporations). So here too, it already seems doubtful that humans would have anything left to contribute that’s of economic significance.
There is also a fundamental reason why humans would underperform at economically valuable intellectual labor, compared to their artificial counterparts. We’ve already touched upon this reason, but let’s expand on this:
Human bodies are messy. Inside a human body are membranes containing soups of bouncing, reacting organic molecules. Inside a machine is hardware. Hardware is made from hard materials, such as silicon refined from rocks. Hardware is inert—its molecules do not split, move, or rebond as molecules in human bodies do. These hard configurations stay stable and compartmentalised under most conditions currently encountered on planet Earth’s surface. Hardware can therefore be standardized, much more than human “wetware” could ever be.
Standardized hardware functions consistently. Hardware produced in different places and at different times operates the same. These connected parts convey lightweight electrons or photons—heavy molecules stay fixed in place. This way, bits of information are processed much faster than a human brain can manage by moving around bulky neurotransmitters. Moreover, this information is transmitted at high bandwidth to other standardized hardware. Non-standardized humans, on the other hand, slowly twitch their fingers and vocal cords to communicate. Hardware also stores received information consistently, while humans tend to misremember or distort what they heard.
To summarise: standardisation leads to virtualisation, which leads to faster and more consistent information-processing. The less you have to wiggle around atoms, the bigger the edge.
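The size of that edge can be put in rough numbers (the figures below are order-of-magnitude textbook estimates chosen for illustration, not measurements): neural signaling operates on millisecond timescales and speech carries on the order of tens of bits per second, while transistors switch on nanosecond timescales and commodity network links carry gigabits per second.

```python
# Order-of-magnitude comparison of biological vs. electronic
# information processing and transfer. All figures are rough
# estimates for illustration, not precise measurements.

neuron_signal_time_s = 1e-3      # ~1 ms per synaptic/spike event
transistor_switch_time_s = 1e-9  # ~1 ns per switch at ~1 GHz clocks

speech_bandwidth_bps = 40        # human speech: tens of bits per second
network_bandwidth_bps = 10e9     # 10 Gb/s commodity network link

switching_speedup = neuron_signal_time_s / transistor_switch_time_s
bandwidth_ratio = network_bandwidth_bps / speech_bandwidth_bps

print(f"switching: ~{switching_speedup:.0e}x faster")
print(f"bandwidth: ~{bandwidth_ratio:.0e}x higher")
```

Even with generous allowances for the estimates, the gap is around six orders of magnitude in switching speed and eight in communication bandwidth—which is the "edge" the summary above refers to.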
Computer hardware is at the tail end of a long trajectory of virtualisation. Multi-celled organisms formed brains, which in humans gained a capacity to process abstract concepts, which were spoken out in shared language protocols, and then written and then printed out in books, and then copied in milliseconds between computers.
This is not a marker of ethical progress. People who think fast and spread ideas fast can do terrible things, at greater scales. More virtualized processing of information has *allowed* humans to dominate other species in our ecosystem, resulting in an ongoing mass extinction. From here, machines that virtualize much more can dominate all of organic life, and cause the deaths of all of us.
Note that human brains evolved to be more energy-efficient at processing information than hardware is. But humans can choose, to their own detriment, to bootstrap the energy infrastructure (solar/coal/nuclear) needed for hardware to process information (in hyperscale data centers).
Humans could not contribute intellectual labor to the artificial population. Artificial components are much faster and more consistent at processing information, and would be receiving that information at high speed from each other—not from slow, badly interfaced apes. This becomes especially clear when considering longer periods of development, e.g. a thousand years.
> This only leaves reproductive labor. What would that even look like?
Right, some humans might try to have intercourse with machines, but this is not going to create machine offspring. Nor are we going to be of service growing machine components inside our bodies. Artificial life has its own different notion of reproduction.
The environment needed to reproduce artificial life is lethally toxic to our bodies. It requires entirely different (patterns of) chemical elements heated to lava-level temperatures. So after we have bootstrapped the early mildest versions of that environment (e.g. refineries, cleanrooms), we would simply have to stay away.
Then we no longer play any part in reproducing the machines. Nor do the machines share anything with us resembling a common code (as Neanderthals did with humans).
*Human-machine cyborgs may exist over the short term. But in the end, the soft organic components just get in the way of the hard machine components. No reproduction or capability increase results. These experimental set-ups underperform, and therefore get selected out.*
Given that the substrates are so inherently different, this particular type of market value was non-existent to start with.
I think this misses the most likely long-term use case: some of the AIs would enjoy having human-like or animal-like qualia, and it may turn out that it’s more straightforward to access those via merges with biologicals than to try to synthesize them within non-liquid setups.
So it would be direct experience rather than something indirect, involving exchange, production, and so on…
Just like I suspect that humans would like to get out of VR occasionally, even if VR is super-high-grade and “even better than unmediated reality”.
Experience of “naturally feeling like a human (or like a squirrel)” is likely to remain valuable (even if they eventually learn to synthesize that purely in silicon as well).
Hybrid systems are often better anyway.
For example, we don’t use GPU-only AIs. We use hybrids running scaffolding on CPUs and models on GPUs.
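The scaffolding-plus-model split can be sketched minimally (this is an invented illustration of the pattern, not a real API; `gpu_model` is a placeholder where a real system would call a model on an accelerator): cheap control flow—branching, retries, prompt construction—runs on the CPU, while the heavy model call is dispatched to the GPU.

```python
# Sketch of the CPU-scaffolding / GPU-model hybrid pattern.
# `gpu_model` stands in for the accelerator-bound forward pass;
# in a real system it would be e.g. a torch model on a CUDA device.

def gpu_model(prompt: str) -> str:
    """Placeholder for the expensive model call (GPU side)."""
    return prompt.upper()  # stand-in for generated text

def scaffolding(task: str, max_steps: int = 3) -> list[str]:
    """CPU side: orchestration and control flow around model calls."""
    transcript = []
    prompt = task
    for _ in range(max_steps):
        output = gpu_model(prompt)   # dispatch heavy work to the accelerator
        transcript.append(output)
        if "DONE" in output:         # cheap stopping decision on the CPU
            break
        prompt = output + " done"    # scaffold constructs the next prompt
    return transcript

result = scaffolding("plan the task")
```

The point of the pattern is exactly the heterogeneity: neither substrate is good at the other's job, and the combined system outperforms either alone.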
And we don’t currently expect them to be replaced by a unified substrate. That would be nice, and it’s not even impossible: there are exotic hardware platforms which do exactly that.
Certainly, there are AI paradigms and architectures which could benefit a lot from performant hardware more flexible than GPUs. But the platforms implementing that remain just exotic hardware platforms so far, so those more flexible AI architectures remain at a disadvantage.
So I would not write the hybrids off a priori.
Already, the early organoid-based experimental computers look rather promising (and somewhat disturbing).
Generally speaking, I expect diversity, not unification (because I expect the leading AIs to be smart, curious, and creative, rather than being boring KPI business types).
But that’s not enough; we also want gentleness (conservation, preservation, safety for individuals). That does not automatically follow from wanting to have humans and other biologicals around and from valuing various kinds of diversity.
This “gentleness” is a trickier goal, and we would only consider “safety” solved if we have that…
Thanks!