I was approaching the mosquito analogy on its own terms but at this level of granularity it does just break down.
Firstly, mosquitoes directly use human bodies as resources (as well as various natural environments which we voluntarily choose to keep around), while we can’t suck nutrients out of an ASI.
Secondly, mosquitoes cause harm to humans, and the proposed trade involves them ceasing to harm us, which is different from the trades being proposed with an ASI.
An ASI would incur some cost to keep us around (sunlight for plants, space, temperature regulation), which would need to be balanced by the benefits we can give it. If it can use the space and energy we take up to run more GPUs (or whatever future chips it runs on), and those GPUs give it more value than we do, then it would want to kill us.
If you want arguments as to whether it would be more costly to kill humans or to keep us around, just look at the amount of resources and space humans currently take up on the planet. That is OOMs more resources than an ASI would need to kill us, especially once you consider that it only has to pay the cost of killing us once, after which it gets the benefit of that extra energy essentially forever. If you don’t think an ASI could definitely make a profit from getting us out of the picture, then we just have extremely different pictures of the world.
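To make the shape of that cost-benefit argument explicit, here is a minimal back-of-the-envelope sketch. All numbers are arbitrary placeholders, not estimates from this comment; the only point is that a one-time cost gets weighed against the discounted value of a benefit stream that lasts essentially forever.

```python
# Hedged back-of-the-envelope sketch: compare a one-time cost against the
# discounted value of a perpetual benefit stream. All numbers are arbitrary
# placeholders chosen purely for illustration.

one_time_cost = 1.0     # resources spent once to remove humans (arbitrary units)
annual_benefit = 0.1    # resources reclaimed per year afterwards (arbitrary units)
discount_rate = 0.01    # how steeply the ASI discounts future resources

# Present value of a perpetuity: annual_benefit / discount_rate.
pv_of_benefits = annual_benefit / discount_rate

print(f"one-time cost:          {one_time_cost:.1f}")
print(f"PV of reclaimed energy: {pv_of_benefits:.1f}")
print(f"removal is 'profitable' if PV exceeds cost: {pv_of_benefits > one_time_cost}")
```

The conclusion depends entirely on the placeholder numbers and the discount rate; the sketch just shows why a "pay once, benefit forever" argument hinges on the ratio of the recurring benefit to the one-time cost.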
> I was approaching the mosquito analogy on its own terms but at this level of granularity it does just break down.
My goal in my original comment was narrow: to demonstrate that a commonly held model of trade is incorrect. This naive model claims (roughly): “Entities do not trade with each other when one party is vastly more powerful than the other. Instead, in such cases, the more powerful entity rationally wipes out the weaker one.” This model fails to accurately describe the real world. Despite being false, this model appears popular, as I have repeatedly encountered people asserting it, or something like it, including in the post I was replying to.
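For readers who want a concrete illustration of why the naive model can fail, one standard economic example is comparative advantage: a party that is strictly better at producing everything can still gain from trading with a much weaker one. This is only a textbook sketch with made-up numbers, not necessarily the argument the original comment had in mind.

```python
# Hedged, textbook-style illustration (made-up numbers): comparative advantage
# can make trade mutually beneficial even when one party is better at
# producing everything in absolute terms.

# Hours each party needs to produce one unit of each good.
hours_needed = {
    "powerful party": {"chips": 1, "food": 2},   # faster at both goods
    "weak party":     {"chips": 10, "food": 4},
}

for party, hours in hours_needed.items():
    # Opportunity cost of one unit of food, measured in units of chips forgone.
    cost_of_food_in_chips = hours["food"] / hours["chips"]
    print(f"{party}: producing 1 food means forgoing "
          f"{cost_of_food_in_chips:.2f} chips")

# The powerful party forgoes 2.00 chips per unit of food, the weak party only
# 0.40. Any trade of food for chips at a price between 0.40 and 2.00 chips per
# unit leaves both sides better off than no trade, despite the powerful
# party's absolute advantage in everything.
```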
I have some interest in discussing how this analysis applies to future trade between humans and AIs. However, that discussion would require extensive additional explanation, as I operate from very different background assumptions than most people on LessWrong regarding what constraints future AIs will face and what forms they will take. I even question whether the idea of “an ASI” is a meaningful concept. Without establishing this shared context first, any attempt to discuss whether humans will trade with AIs would likely derail the narrow point I was trying to make.
> If you don’t think an ASI could definitely make a profit from getting us out of the picture, then we just have extremely different pictures of the world.
Indeed, we likely do have extremely different pictures of the world.