At some point, superintelligences are going to disassemble Earth, because it is profitable, and keeping humans alive off-planet is costly and we likely won’t be able to pay the required price.
It just feels to me like the same argument could have been made about humans relative to ants: that ants cannot possibly be the most efficient use, from the perspective of humans, of the energy they require. But in reality, what they do and the way they exist is so orthogonal to us that even though we step on an anthill every once in a while, their existence continues. There’s this weird assumption in the book that disassembling Earth is profitable, or that disassembling humans is profitable. But humans have evolved over a long time into sensing machines, able to walk around and perceive the world around us.
So the idea that a super-intelligent machine would throw that out because it wants to start over, especially as it’s becoming super-intelligent, is sort of ridiculous to me. It seems like a better assumption is that it would want to use us for different purposes, maybe for our physical machinery and for all sorts of other reasons. The idea that it will disassemble us is, I think, itself an unexamined assumption; it’s often much easier to leave things as they are than it is to fully replace or modify them.
Ants need little, and their biology is similar enough to ours that if humans can survive in certain environments, ants probably can, too.
Ants need just a small piece of forest or meadow or garden to build an anthill. Humans preserve the forests, because we need the oxygen. Thus, ants have almost guaranteed survival.
Compared to the situation where humans don’t exist, ants have less space to build their anthills. But not by much, because humans do not put concrete over literally everything. Well, maybe in cities, but most of the surface of Earth is not cities. Maybe without humans there could be 2x as many ants on Earth, but that wouldn’t increase the quality of life of an individual ant or anthill. Humans consume food that ants might otherwise consume, but humans also grow most of that food, so human presence does not harm the ants too much.
The situation with machines would be analogous if machines needed us for their survival, and if they generated most of the resources they need. Sadly, sufficiently smart machines will be able to replace humans with robots, and will probably compete with us for energy sources. Also, humans are more sensitive to disruption than ants; taking away the most concentrated sources of energy (e.g. the oil fields) and leaving the less concentrated ones (such as wood) to us would ruin the modern human economy. We would probably return to conditions before the industrial revolution. Which means no internet, so science falls apart, undoing the green revolution and the transport of food, so 90% of humans die from starvation. Still, the remaining 10% would survive, for a while.
Then we face the problem that the machines do not share our biology, so they are perfectly okay if e.g. the levels of oxygen in the atmosphere decrease, or if the rain gets toxic. Finally, if they build a Dyson sphere, the remaining humans will freeze.
In short, the way we behave towards ants (we don’t actively try to eradicate them, but we carelessly destroy anything that stands in our way) will be more destructive towards humans than towards ants.
I appreciate the way you’re thinking, but I guess I just don’t agree with your intuition that the situation of machines next to humans will be worse than, or deeply different from, the situation of humans next to ants. I mean, the differences actually might benefit humans. For example, the fact that we’ve had machines in such close contact with us as they’re growing might point to a kind of potential for symbiosis.
I just think the idea that machines will try to replace us with robots doesn’t totally make sense if you look closely. When machines are coming about, before they’re totally super-intelligent but while they’re comparably intelligent to us, they might want to use us, because we’ve evolved for millions of years to be able to see and hear and think in ways that might be useful for a kind of digital intelligence. In other words, when they’re comparably intelligent to us, they may compete for resources. When they’re incomparably intelligent, it’s weird to assume they’ll still use the same resources we do for our survival. That they’ll ruin our homes because the bricks can be used better elsewhere? It takes much less energy to let things be as they are if they’re not the primary obstacle you face, whether you’re a human or a superhuman intelligence.
So, self-interested superintelligence could cause really bad stuff to happen, but it’s a stretch from there to call it the total end of humanity. By the time a machine gets superhuman intelligence, like totally vastly more powerful than us, it’s unclear to me that it would compete for resources with us, or that it would even live or exist along similar dimensions to us. Things could go really wrong, but I suspect the outcomes will be more weird and spooky than an enormous catastrophe that wipes out all of humanity; concluding death feels a little bit forced.
It feels to me like, yeah, they’ll step on us some of the time, but it’d be weird to me if the entities or units that end up evolutionarily propagating, the ones we’re calling machines, end up looking like us, or looking like physical beings, or really competing with us for resources, the same resources that we use. At the end of the day, there might be some resource competition, but the idea that they will try to replace every person is just excessive. Even taking as given all of the arguments up to the point of believing that machines will have a survival drive, assuming that they’ll care enough about us to do things like replace each of us is just strange, you know? It feels forced to me.
I’m inspired in part here by Joscha Bach’s and Emmett Shear’s conceptions of superintelligence as ambient beings distributed across space and time.
“When they’re incomparably intelligent, it’s weird to assume they’ll still use the same resources we do for our survival.”
Resources ants need: organic matter.
Resources humans need: fossil fuels, nuclear power, solar power.
Resources superintelligent machines will need: ???
They might switch to extracting geothermal power, or build a Dyson sphere (maybe leaving a few rays that shine towards Earth), but what else is there? Black holes? Some new kind of physics?
Or maybe “the smarter you are, the more energy you want to use” stops being true at some level?
I am not saying this can’t happen, but to me it feels like magic. The problem with new kinds of physics is that we don’t know if there is something useful left that we have no idea about yet. Also, the more powerful things tend to be more destructive (harvesting oil has greater impact on the environment than chopping wood), so the new kinds of physics may turn out to have even more bad externalities.
“A being vastly more powerful, which somehow doesn’t need more resources” is basically some kind of god. Doesn’t need resources, because it doesn’t exist. Our evidence for more powerful beings is entirely fictional.
I guess I’m considering a vastly more powerful being that needs orthogonal resources… the same way harvesting solar power (I imagine) is generally orthogonal to ants’ survival. In the scheme of things, the chance that a vastly more powerful being wants the same resources through the same channels as we do… this seems independent of, or only indirectly correlated with, intelligence. But the extent of competition does seem dependent on how anthropomorphic/biomorphic we assume it to be.
I have a hard time imagining that electricity, produced via existing human factories, is not a desired resource for a proto-ASI. But at least at that point we have comparable power and can negotiate or something. For superhuman intelligence, which will by definition be unpredictable to us, it’d be weird to think we’re aware of all the energy channels it’d find.
I think you are overindexing on the current state of affairs in two ways.
First, “we should not pave all of nature with human-made stuff” is a relatively new cultural trend. In the High Modernism era there were unironic projects to cut down the Amazon forests and turn them into corn fields, or to kill all wild animals so they wouldn’t suffer, etc.
Second, in current reality, there are not many things we can do efficiently with ants. We could pave every anthill with solar panels, but there are cheaper places to do that, we don’t produce that many solar panels yet, and we don’t have that much demand for electricity yet.
For superintelligence, the calculus is quite different. An anthill is a large pile of carbon and silicon, both of which can be used for computation, and a superintelligence can afford enough automation to pick them up. A superintelligent economy has a lower bound on growth of 33% per year, which means it is going to reach $1 per atom of our solar system in less than 300 years; there will be plenty of demand for turning anthills into compute. Technological progress increases the number of things you can do efficiently and shifts the balance from “leave it as it is” to “remake it entirely”.
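For concreteness, here is a rough back-of-the-envelope sketch of that compound-growth arithmetic in Python. The starting gross world product (~$100 trillion) and the atom counts are order-of-magnitude assumptions of mine, not figures from the comment above; the point is only how quickly 33% annual growth compounds.

```python
# Back-of-the-envelope check of the compound-growth claim above.
# Assumptions (mine, not the commenter's): the economy starts at roughly
# $1e14/year (~$100 trillion), Earth contains ~1e50 atoms, and the solar
# system contains ~1e57 atoms (the Sun dominates the count).
import math

START_GWP = 1e14           # starting gross world product, dollars per year
GROWTH_RATE = 0.33         # assumed lower bound on annual growth
ATOMS_EARTH = 1e50         # very rough atom count for Earth
ATOMS_SOLAR_SYSTEM = 1e57  # very rough atom count for the solar system

def years_to_reach(target_dollars: float) -> float:
    """Years of compound growth needed for the economy to reach the target."""
    return math.log(target_dollars / START_GWP) / math.log(1 + GROWTH_RATE)

print(f"$1 per atom of Earth:            ~{years_to_reach(ATOMS_EARTH):.0f} years")
print(f"$1 per atom of the solar system: ~{years_to_reach(ATOMS_SOLAR_SYSTEM):.0f} years")
```

At this growth rate each extra order of magnitude in the target adds only about eight years, so the exact starting point and atom count barely matter.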
At some point in our development, we are going to be able to disassemble Earth and get immense benefits. We can choose not to do that, because we value Earth as our home. It’s rather likely that superintelligences are not going to share our sentiments.
I guess I don’t think this is true:

“Technological progress increases the number of things you can do efficiently and shifts the balance from ‘leave it as it is’ to ‘remake it entirely’.”
Technological progress may actually help you pinpoint more precisely which situations you want to pay attention to. I don’t have any reason to believe a wiser, more powerful being would touch every atom in the universe.