I appreciate the way you’re thinking, but I guess I just don’t agree with your intuition that the situation of machines next to humans will be worse, or deeply different, than the situation of humans next to ants. I mean, the differences might actually benefit humans. For example, the fact that machines have been in such close contact with us as they’re growing up might point to a kind of potential for symbiosis.
I just think the idea that machines will try to replace us with robots, if you look closely, doesn’t totally make sense. When machines are coming about, before they’re totally super-intelligent but while they’re comparably intelligent to us, they might want to use us, because we’ve evolved for millions of years to see and hear and think in ways that might be useful for a kind of digital intelligence. In other words, when they’re comparably intelligent to us, they may compete with us for resources. When they’re incomparably intelligent, it’s weird to assume they’ll still use the same resources we depend on for our survival. That they’ll ruin our homes because the bricks can be used better elsewhere? It takes much less energy to let things be as they are if they’re not the primary obstacle you face, whether you’re a human or a superhuman intelligence.
So, a self-interested superintelligence could cause really bad stuff to happen, but it’s a stretch from there to the total end of humanity. By the time a machine gets superhuman intelligence, vastly more powerful than us, it’s unclear to me that it would compete with us for resources, or that it would even live or exist along dimensions similar to ours. Things could go really wrong, but the idea of an enormous catastrophe that wipes out all of humanity doesn’t sound right to me; the outcomes will be more weird and spooky, and concluding death feels a little bit forced.
It feels to me like, yeah, they’ll step on us some of the time, but it’d be weird to me if the entities or units that end up evolutionarily propagating, the things we’re calling machines, end up looking like us, or looking like physical beings, or really competing with us for the same resources we use. At the end of the day, there might be some resource competition, but the idea that it will try to replace every person is just excessive. Even taking as given all of the arguments up to the point of believing that machines will have a survival drive, assuming they’ll care enough about us to do things like replace each of us is just strange, you know? It feels forced to me.
I’m inspired in part here by Joscha Bach’s and Emmett Shear’s conceptions of superintelligence as ambient beings distributed across space and time.
When they’re incomparably intelligent, it’s weird to assume they’ll still use the same resources we do for our survival.
Resources ants need: organic matter.
Resources humans need: fossil fuels, nuclear power, solar power.
Resources superintelligent machines will need: ???
They might switch to extracting geothermal power, or build a Dyson sphere (maybe leaving a few rays that shine towards Earth), but what else is there? Black holes? Some new kind of physics?
Or maybe “the smarter you are, the more energy you want to use” stops being true at some level?
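To put rough numbers on those channels (the figures below are my own order-of-magnitude estimates, not something claimed in this exchange), here’s a quick back-of-envelope sketch:

```python
# Rough comparison of energy scales, in watts.
# All figures are standard order-of-magnitude estimates.

HUMAN_CONSUMPTION_W = 1.9e13   # ~19 TW, current human primary energy use
GEOTHERMAL_FLUX_W   = 4.7e13   # ~47 TW, Earth's total internal heat flow
SOLAR_AT_EARTH_W    = 1.7e17   # sunlight intercepted by Earth's disk
SUN_LUMINOSITY_W    = 3.8e26   # total solar output (Dyson-sphere ceiling)

for name, watts in [
    ("human civilization today", HUMAN_CONSUMPTION_W),
    ("all geothermal heat flow", GEOTHERMAL_FLUX_W),
    ("sunlight hitting Earth", SOLAR_AT_EARTH_W),
    ("full Dyson sphere", SUN_LUMINOSITY_W),
]:
    # Show each source's scale and its multiple of current human use.
    print(f"{name:>28}: {watts:.1e} W  ({watts / HUMAN_CONSUMPTION_W:.0e}x us)")
```

If those estimates are roughly right, the sunlight already hitting Earth is on the order of ten thousand times humanity’s current consumption, and a full Dyson sphere captures roughly two billion times more than that, so “leaving a few rays that shine towards Earth” would cost such a builder almost nothing.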
I am not saying this can’t happen, but to me it feels like magic. The problem with new kinds of physics is that we don’t know whether there is anything useful left that we have no idea about yet. Also, more powerful technologies tend to be more destructive (harvesting oil has a greater impact on the environment than chopping wood), so new kinds of physics may turn out to have even worse externalities.
“A being vastly more powerful, which somehow doesn’t need more resources” is basically some kind of god. It doesn’t need resources because it doesn’t exist; our evidence for such beings is entirely fictional.
I guess I’m considering a vastly more powerful being that needs orthogonal resources… the same way harvesting solar power (I imagine) is mostly orthogonal to ants’ survival. In the scheme of things, the chance that a vastly more powerful being wants the same resources through the same channels as we do seems independent of, or only indirectly correlated with, intelligence. But the extent of competition does seem dependent on how anthropomorphic/biomorphic we assume it to be.
I have a hard time imagining that electricity, produced by existing human infrastructure, is not a desired resource for a proto-ASI. But at least at that point we have comparable power and can negotiate or something. For a superhuman intelligence, which will by definition be unpredictable to us, it’d be weird to think we’re aware of all the energy channels it would find.