Hi Roman. Pretty exciting conversation thread and I had a few questions about your specific assumptions here.
In your world model, young men today obviously have many distractions from their duties that did not exist in the past [social media, video games, television, movies, anime, porn, complex and demanding schooling, …].
And current AI models are not engrossing enough, for most people, to outcompete the items on that list. You’re projecting:
within the next 2-3 years
human-level emotional intelligence and the skill of directing the dialogue
What bothered me about this is that it would be an extremely fragile illusion. Without expanding/replacing the underlying LLM architecture to include critical elements like [online learning, memory, freeform practice on humans to develop emotional intelligence (not just fine-tuning but actual interactive practice), and all the wiring you need between the LLM and the state of the digital avatar], it probably will not be convincing for very long. You need general emotional intelligence, and that’s a harder ask, closer to actual AGI. (In fact, you could cause an AGI singularity of exponential growth without ever solving emotional intelligence, as it is not needed to control robots or to solve tasks with empirically measurable objectives.)
For most people it’s a fun toy, then the illusion breaks, and they are back to all the distractions above.
But let’s take your idea seriously. Boom: human-level emotional intelligence in 2-3 years, 100% probability. The consequence is that the majority of young men (and women, and everyone else) are distracted away from all their other activities and spend every leisure moment talking to their virtual partner.
If that’s the case, umm, I gotta ask: the breakthroughs in ML architecture and compute efficiency you would have made to get there would probably make a lot of other tasks easier.
Such as automating a lot of rote office tasks. And robotics would probably be much easier as well.
Essentially, you would be able to automate a large percentage, somewhere between 50 and 90 percent, of all the jobs currently on Earth, if you had the underlying ML breakthroughs that also give general human-level emotional intelligence.
Combining your assumptions: what does it matter if the population declines? You would have a large surplus of people.
I had another thought. Suppose your goal, as a policymaker, is to stem the population decline. What should you do? Well, it seems to me that a policy that directly accomplishes your goal is more likely to work than one that works indirectly. You are more likely to make any progress at all. (Recall how government policies frequently have unintended consequences, e.g., the cobra effect in colonial India.)
So, why not pay young women surrogate-mother rates for children (about $100k)? And allow the state to take primary custody so that young women can finish school, start an early career, or have more children. (The father would be whoever the woman chooses and would not owe child support, conditional on the woman giving up primary custody.)
I think before you can justify your conditional-probability-dependent indirect policy, you would need to show why the direct and obvious policy is a bad idea. What is bad about paying women directly?
Epistemic status: I have no emotional stake in the idea of paying women, and this is not a political discussion; I simply took 5 minutes and wondered what one could do about declining population levels. I am completely open to any explanation of why this is not a good idea.
Maybe we understand different things by “emotional intelligence”. To me, this is just the ability to correctly infer the emotional state of the interlocutor, based on the context, text messages that they send, and the pauses between messages.
I don’t think this requires any breakthroughs in AI. GPT-4 is basically able to do this already, if we set aside the task of “baseline adjustment” (different people have different conversational styles: some are cheerful and use smile emojis profusely; others are the opposite and use smileys only to mark strong emotions) and the task of intelligent summarisation of the context of the dialogue. These are exactly the types of tasks I expect AI romance tech to be ironing out in the next few years.
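To make that concrete, here is a minimal sketch of how such an inference step could be wired up. Everything here is a hypothetical placeholder, not a description of any existing product: the call_llm helper, the prompt wording, and the crude per-user emoji baseline are all assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class Message:
    text: str
    seconds_since_previous: float  # pause before this message was sent


@dataclass
class UserBaseline:
    """Rough per-user calibration: how expressive this person usually is."""
    typical_emoji_rate: float = 0.0   # running average of emojis per message
    messages_seen: int = 0

    def update(self, text: str) -> None:
        emojis = sum(ch in "😀😂🙂😢😡" for ch in text)  # crude emoji count
        self.messages_seen += 1
        self.typical_emoji_rate += (emojis - self.typical_emoji_rate) / self.messages_seen


def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to some LLM and returns its reply."""
    raise NotImplementedError


def infer_emotional_state(history: list[Message], baseline: UserBaseline) -> str:
    # 1. Intelligent summarisation of the dialogue context, so it fits in one prompt.
    transcript = "\n".join(
        f"[pause {m.seconds_since_previous:.0f}s] {m.text}" for m in history[-30:]
    )
    summary = call_llm("Summarise the emotional arc of this conversation:\n" + transcript)

    # 2. Baseline adjustment: ask for the current emotional state, giving the user's
    #    habitual expressiveness as calibration, plus the pause before the last message.
    prompt = (
        f"Conversation summary: {summary}\n"
        f"This user normally uses about {baseline.typical_emoji_rate:.1f} emojis per message, "
        f"so judge their expressiveness relative to that baseline.\n"
        f"Latest message (after a {history[-1].seconds_since_previous:.0f}s pause): "
        f"{history[-1].text}\n"
        "In one short phrase, what is the user's current emotional state?"
    )
    return call_llm(prompt)
```

The point of the sketch is only that the two “missing” pieces (summarisation and baseline adjustment) are plumbing around the model, not new capabilities inside it.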
Detecting emotions from video of a human face or a recording of someone’s speech is in some ways even simpler; there are apparently already simple supervised ML systems that do this. But I don’t expect AI partners to be in video dialogue with users yet, because I don’t think video generation will become sufficiently fast for realtime video any time soon. So I don’t assume that the AI will receive a stream of the user’s video and audio, either.
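A minimal sketch of the kind of simple supervised classifier being described, assuming pre-extracted acoustic features per utterance; the features, labels, and data below are synthetic placeholders rather than anything from a real speech-emotion dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: 200 utterances, 4 acoustic features each
# (e.g. mean pitch, pitch variance, mean energy, speaking rate).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.choice(["neutral", "happy", "angry"], size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real labelled features instead of noise, this is essentially the whole system: no generative modelling is involved, which is why this part is “simpler” than the dialogue side.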
So, why not pay young women surrogate-mother rates for children (about $100k)? And allow the state to take primary custody so that young women can finish school, start an early career, or have more children. (The father would be whoever the woman chooses and would not owe child support, conditional on the woman giving up primary custody.)
In general, paying people for parenting (I would emphasise parenting rather than pure “childbirth”), i.e., treating parenting as a “job”, seems to me a reasonable idea, and perhaps it will soon be inevitable in developed countries with plummeting fertility rates and increasing labour efficiency (the latter due to AI and automation).
The caveat is that the policy you proposed will cost the government a lot of money initially, while the policy I proposed costs nothing.
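As a rough illustration of the scale involved (assuming, purely for the arithmetic, a US-sized country with roughly 3.6 million births per year and the $100k payment proposed above; the cost of state custody and childcare is excluded):

```python
# Back-of-envelope cost of paying surrogate-mother rates for every birth.
births_per_year = 3_600_000      # assumed: roughly the recent US annual figure
payment_per_child = 100_000      # the ~$100k rate proposed above
annual_cost = births_per_year * payment_per_child
print(f"~${annual_cost / 1e9:.0f} billion per year")   # -> ~$360 billion per year
```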