Eradicating mosquitoes would be incredibly difficult from a logistical standpoint. Even if we could accomplish this goal, doing so would cause large harm to the environment, which humans would prefer to avoid. By contrast, providing a steady stored supply of blood to feed all the mosquitoes that would have otherwise fed on humans would be relatively easy for humans to accomplish. Note that, for most mosquito species, we could use blood from domesticated mammals like cattle or pigs, not just human blood.
When deciding whether to take an action, a rational agent does not merely consider whether that action would achieve their goal. Instead, they identify which action would achieve their desired outcome at the lowest cost. In this case, trading blood with mosquitoes would be cheaper than attempting to eradicate them, even if we assigned zero value to mosquito welfare. The reason we do not currently trade with mosquitoes is not that eradication would be cheaper; it is that trade is not feasible.
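To make this decision rule concrete, here is a minimal Python sketch with invented action names and cost figures (nothing in it is drawn from a real estimate): the agent filters to feasible actions that achieve the goal, then picks the cheapest one.

```python
# Minimal sketch of the decision rule above, using invented costs.
# A rational agent picks the cheapest *feasible* action that achieves the goal,
# not merely any action that happens to achieve it.

actions = {
    # name: (achieves_goal, is_feasible, cost in arbitrary units)
    "eradicate_mosquitoes": (True, True, 10_000),
    "trade_stored_blood": (True, True, 1_000),
    "do_nothing": (False, True, 0),
}

def best_action(actions):
    candidates = [
        (cost, name)
        for name, (achieves, feasible, cost) in actions.items()
        if achieves and feasible
    ]
    return min(candidates)[1] if candidates else None

print(best_action(actions))  # -> trade_stored_blood
```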
You might argue that future technological progress will make eradication the cheaper option. However, to make this argument, you would need to explain why technological progress will reduce the cost of eradication without simultaneously reducing the cost of producing stored blood at a comparable rate. If both technologies advance together, trade would remain relatively cheaper than extermination. The key question is not whether an action is possible. The key question is which strategy achieves our goal at the lowest relative cost.
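As a toy illustration of the relative-cost point (all numbers invented): if progress cuts both costs at the same annual rate, the ratio between them never changes, so the strategy that is cheaper today remains cheaper later.

```python
# Toy illustration with invented numbers: if both costs fall at the same rate,
# eradication never overtakes trade, because only their ratio matters.

eradication_cost, trade_cost = 10_000.0, 1_000.0
annual_cost_reduction = 0.20  # assume 20% per year for *both* technologies

for year in (0, 10, 20, 30):
    factor = (1 - annual_cost_reduction) ** year
    ratio = (eradication_cost * factor) / (trade_cost * factor)
    print(year, round(eradication_cost * factor, 2), round(trade_cost * factor, 2), ratio)
# The ratio is 10.0 in every row: trade stays relatively cheaper.
```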
If you predict that eradication will become far cheaper while trade will not become proportionally cheaper, thereby making eradication the rational choice, then I think you’d simply be making a speculative assertion. Unless it were backed up by something rigorous, this prediction would not constitute meaningful empirical evidence about how trade functions in the real world.
I was approaching the mosquito analogy on its own terms, but at this level of granularity it does just break down.
Firstly, mosquitoes directly use human bodies as a resource (as well as various natural environments we voluntarily choose to keep around), whereas we can't suck nutrients out of an ASI.
Secondly, mosquitoes cause harm to humans, and the proposed trade involves them ceasing to harm us, which is different from the trades being proposed with an ASI.
An ASI would incur some cost from keeping us around (sunlight for plants, space, temperature regulation), which would need to be balanced by benefits we can give it. If it could use the space and energy we take up to run more GPUs (or whatever future chip it runs on), and those GPUs gave it more value than we do, it would want to kill us.
If you want arguments as to whether it would be more costly to kill humans vs. keep us around, just look at the amount of resources and space humans currently take up on the planet. This is orders of magnitude more than an ASI would need to kill us, especially once you consider that it only needs to pay the cost of killing us once; after that, it gets the benefit of that extra energy essentially forever. If you don’t think an ASI could definitely make a profit from getting us out of the picture, then we just have extremely different pictures of the world.
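To lay out the structure of that comparison explicitly, here is a minimal sketch with invented numbers: a one-time elimination cost weighed against a perpetual stream of reclaimed resources, valued with a standard perpetuity formula. The sign of the result depends entirely on the assumed inputs.

```python
# Sketch of the one-time-cost vs. perpetual-benefit comparison, with invented numbers.

one_time_cost = 1.0      # resources spent on elimination (arbitrary units, assumed)
annual_benefit = 0.5     # resources reclaimed per year afterwards (assumed)
discount_rate = 0.05     # how steeply future resources are discounted (assumed)

perpetuity_value = annual_benefit / discount_rate  # benefit stream valued as a perpetuity
net_value = perpetuity_value - one_time_cost
print(net_value)  # positive under these inputs, but the sign hinges entirely on them
```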
I was approaching the mosquito analogy on its own terms, but at this level of granularity it does just break down.
My goal in my original comment was narrow: to demonstrate that a commonly held model of trade is incorrect. This naive model claims (roughly): “Entities do not trade with each other when one party is vastly more powerful than the other. Instead, in such cases, the more powerful entity rationally wipes out the weaker one.” This model fails to accurately describe the real world. Despite being false, this model appears popular, as I have repeatedly encountered people asserting it, or something like it, including in the post I was replying to.
I have some interest in discussing how this analysis applies to future trade between humans and AIs. However, that discussion would require extensive additional explanation, as I operate from very different background assumptions than most people on LessWrong regarding what constraints future AIs will face and what forms they will take. I even question whether the idea of “an ASI” is a meaningful concept. Without establishing this shared context first, any attempt to discuss whether humans will trade with AIs would likely derail the narrow point I was trying to make.
If you don’t think an ASI could definitely make a profit from getting us out of the picture, then we just have extremely different pictures of the world.
Indeed, we likely do have extremely different pictures of the world.