Other than fertility rate, what other harms are there, and to whom, such that it’s any of society’s business at all? Are you thinking of it as being like addiction, with people choosing something they (initially) think is good for them but isn’t?
Yes, I discussed in this comment how people could perceive a life of settling for an AI partner as less wholesome because they haven't paid their duty to society (regardless of whether this actually matters from a theoretical-ethics point of view: if people have a deeply held, culturally embedded idea about this in their heads, they could be genuinely sad or unsettled). I don't venture to estimate how prevalent this will be, and therefore how it will weigh against the net personal satisfaction of people who would have no issue whatsoever with settling for AI partners.
Kaj Sotala suggested in this comment that this “duty to society” could be satisfied through platonic co-parenting. I think this is definitely interesting, could work for some people, and I find it laudable when people do this, but I have doubts about how widespread the practice could become. It might be that parenting and romantic involvement with the co-parent are pinned too strongly to each other in many people’s minds.
First, though, I don’t think the scenario you’re proposing is anywhere near as bad for fertility as suggested. [...] And I imagine a lot of people would still want kids with an AI partner as well.
This is the same type of statement that many other people have made here (“people won’t be that addicted to this”, “people will still seek human partners even while using this thing”, etc.), to all of which I reply: it should be AI romance startups’ responsibility to demonstrate that the negative effect will be small, not my responsibility to prove that the effect will be huge (which I obviously couldn’t do). Currently, it’s all opinion versus opinion.
At least the maximum conceivable potential is huge: AI romance startups obviously would like nearly everyone to use their products (just as, currently, nearly everyone watches porn, and soon nearly everyone will use general-purpose chatbots like ChatGPT). If AI partners are so attractive that about 20% of men fall for them so hard that they no longer want to date women for the rest of their lives, we are talking about a 7–10% drop in fertility (less than 20% because not all of these men would counterfactually have had kids anyway; also, “spare” women could decide to have kids alone, some will platonically co-parent, etc.)
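The back-of-envelope arithmetic behind that 7–10% figure can be sketched explicitly. All the parameter values here are illustrative assumptions for the sake of the calculation, not data:

```python
# Illustrative sketch of the fertility-drop estimate above.
# Every parameter is an assumed, hypothetical value.

men_opting_out = 0.20       # assumed share of men who stop dating women entirely
would_have_had_kids = 0.70  # assumed share of those men who would counterfactually have had children
offset_share = 0.40         # assumed share of the loss offset by "spare" women having
                            # kids alone, platonic co-parenting, re-pairing, etc.

fertility_drop = men_opting_out * would_have_had_kids * (1 - offset_share)
print(f"{fertility_drop:.1%}")  # prints 8.4%, inside the 7-10% range quoted above
```

Varying the two discount factors within plausible bounds is what produces the quoted 7–10% range rather than a point estimate.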
Plenty of real-world people and partners incapable of having biological children together have a sufficiently high desire to do so to go to great lengths to make it happen. Plenty of others want to do so but limit themselves for non-biological reasons, often financial. Society could do a lot more to facilitate them getting what they want, but doesn’t.
I agree with all this, and it is all extremely sad. But it seems irrelevant to the question of AI partners: the existence of other problems that depress the fertility rate doesn’t mean we shouldn’t deal with this upcoming one. Moreover, while problems like financial inequality (and precarity), or the declining biological fertility of men and women due to stress and environmental pollution, are big systemic problems that are very hard and very expensive to fix, it’s currently relatively cheap to prevent a further potential fertility drop from widespread adoption of AI partners by under-30s: just pass regulation in major countries!
It being a genuine problem would require that no intervention succeed in turning it around [...] It would also require there not be any substantial subsets of the global population that consistently choose biological partners or lager families.
In my ethics, more conscious observers existing today matters; it makes no difference from a normative perspective that “later in the future” the population will recover. Also, by this logic, saving people from death (e.g., from malaria) today would make very limited sense, because all it prevents is a transient few days of suffering. But really, I think EAs place more value on the “rescued years of mostly happy experience” that follow.
Similarly, the fact that somewhere else in the world some people procreate a lot doesn’t somehow make shrinking population in other parts of the world “less bad”.
It being a genuine problem would require that [...] no further AI advances lead to AGI sufficient for AIs to “count” as population for the relevant purposes that make population growth desirable. [...]
Third, I think there’s a lot of other downstream impacts of a world with sufficiently-good-for-this-to-be-an-issue AI romantic partners that make this less of a concern for society. [...]
I also think this cluster of arguments is not applicable. By this logic, pretty much nothing matters if we expect the world to become unrecognisably weird soon. I, too, expect this to happen, with very high probability, but in discussing the societal impacts of AI, global health and poverty, environmental destruction, and other systemic issues, we have to take a sort of deontological stance and imagine that this weirdness won’t happen. If it does happen, all bets are off anyway. But in discussing current “mundane” global issues, we should condition on that weirdness not happening for some reason (which is not totally implausible: a global ban on AGI development could still happen, for instance, and I would even support it).
Besides, as I noted at the beginning of my post, I think even today’s AI capabilities (LLMs like GPT-4, text-to-speech, text-to-image, etc.) could already be used to make an extremely compelling AI partner, much more attractive than today’s actual AI partner products. It’s still very early days for these products, but in a few years they will catch up.
I agree with all this, and it is all extremely sad. But it seems irrelevant to the question of AI partners: the existence of other problems that depress the fertility rate doesn’t mean we shouldn’t deal with this upcoming one. Moreover, while problems like financial inequality (and precarity), or the declining biological fertility of men and women due to stress and environmental pollution, are big systemic problems that are very hard and very expensive to fix, it’s currently relatively cheap to prevent a further potential fertility drop from widespread adoption of AI partners by under-30s: just pass regulation in major countries!
I don’t think I agree. That might be cheap financially, yes. But unless there’s a strong argument that AI partners cause harm to the humans using them, then I don’t think society has a sufficiently compelling reason to justify a ban. In particular, I don’t think (and I assume most agree?) that it’s a good idea to coerce people into having children they don’t want, so the relevant question for me is, can everyone who wants children have the number of children they want? And relatedly, will AI partners cause more people who want children to become unable to have them? From which the societal intervention should be, how do we help ensure that those who want children can have them? Maybe trying to address that still leads to consistent below-replacement fertility, in which case, sure, we should consider other paths. But we’re not actually doing that.
I think an adequate social and tech policy for the 21st century should
Recognise that needs/wants/desires/beliefs and new social constructs can be manufactured, and discuss this phenomenon explicitly, and
Deal with this social engineering consistently: either by really going out of its way to protect people’s agency and self-determination (today, people’s wants, needs, beliefs, and personalities are sculpted by various actors from the moment they are toddlers and start watching videos on iPads, and this influence only strengthens from there), or by allowing a “free market of influences” while also participating in it, subsidising the projects that will benefit society itself.
The USA seems to be much closer to the latter option, but when people discuss policy in the US, it’s conventional not to acknowledge (see The Elephant in the Brain) the real social engineering already perpetrated by both state and non-state actors (from the pledge of allegiance to church to Instagram to Coca-Cola), and to presume that social engineering done by the state itself is taboo, or at least a tool of last resort.
But that presumption is just not what actually happens: apart from the pledge of allegiance, there are many other ways in which the state (and state-adjacent institutions and structures) is, or was, proactive in manufacturing people’s beliefs or wants in a certain way, or in preventing them from being manufactured in a certain way: the Red Scare, various forms of official and unofficial (yet institutionalised) censorship, and the regulation of nicotine marketing are a few examples that come to mind first.
Now, treating personal relationships as a “sacred libertarian range” where the state avoids exerting any influence on how people’s wants and needs around personal relationships are formed (even via the recommended school curriculum, albeit a very ineffective approach to social engineering), while allowing corporate actors (such as AI partner startups and online dating platforms) to shape these needs and even rewire society in whatever way they please, is an inconsistent and self-defeating strategy for society, and therefore for the state, too.
The state would do well to realise that its strength rests not only on overt patriotism/nationalism, military and law-enforcement “national security”, and the economy, but also on the health and strength of its society.
P. S. All the above doesn’t mean that I actually prefer the “second option”. The first option, that is, human agency being protected, seems much more beautiful and “truly liberal” to me. However, this vision is completely incompatible with present-form capitalism (to start, it probably means that ads should be banned completely, the entire educational system overhauled, and the need to labour for a living resolved through AI and automation), so it doesn’t make much practical sense to discuss it here.