I think you should have a kid if you would have wanted one without recent AI progress. Timelines are still very uncertain, and strong AGI could still be decades away. Parenthood is strongly value-creating and extremely rewarding (if hard at times), and that’s true in many, many worlds.
In fact it’s hard to find probable worlds where having kids is a really bad idea, IMO. If we solve alignment and end up in AI utopia, having kids is great! If we don’t solve alignment and EY is right about what happens in a fast takeoff world, it doesn’t really matter if you have kids or not.
In that sense, it’s basically a freeroll, though of course there are intermediate outcomes. I don’t immediately see any strong argument in favor of not having kids if you would otherwise want them.
This IMO misses the obvious fact that you spend your life with a lot more anguish if you think that not just you, but your kid is going to die too. I don’t have a kid but everyone who does seems to describe a feeling of protectiveness that transcends any standard “I really care about this person” one you could experience with just about anyone else.
+ the obvious fact that it might matter to the kid that they’re going to die
(edit: fwiw I broadly think people who want to have kids should have kids)
I’m sure this varies by kid, but I just asked my two older kids, age 9 and 7, and they both said they’re very glad that we decided to have them even if the world ends and everyone dies at some point in the next few years.
Which makes lots of sense to me: they seem quite happy, and it’s not surprising they would be opposed to never getting to exist even if it isn’t a full lifetime.
I think the idea here was sort of “if the kid is unaware and death comes suddenly and swiftly, they at least got a few years of life out of it”… cold as it sounds. But anyway, this also assumes the EY kind of FOOM scenario rather than one of the many others in which people are still around and the world just gets shittier and shittier.
It’s a pretty difficult topic to grapple with, especially given how much regret can come with not having had children in hindsight. Can’t say I have any answers for it. But it’s obviously not as simple as this answer makes it.
Yeah, but assuming your p(doom) isn’t really high, this needs to be balanced against the chance that AI goes well and your kid has a really, really, really good life.
I don’t expect my daughter to ever have a job, but think that in more than half of worlds that seem possible to me right now, she has a very satisfying life—one that is better than it would be otherwise in part because she never has a job.
If your timelines are short-ish, you could likely just have a child afterwards: even if you’re a bit on the old side by then, don’t you expect the ASI to find ways to improve health and fertility later in life?
I think the most important scenario to balance against is “nothing happens”, which is where you get shafted if you wait too long to have a child.
Could you please briefly describe the median future you expect?
I agree that it’s bad to raise a child in an environment of extreme anxiety. Don’t do that.
Also try to avoid being very doomy and anxious in general, it’s not a healthy state to be in. (Easier said than done, I realize.)
I don’t agree with that. I’m a parent of a 4-year-old who takes AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had if I didn’t do anything. If that’s the deal of life it’s a pretty good deal and I don’t think there’s any reason to be particularly anguished about it on your kid’s behalf.
I mean, this gets into the philosophical problem of whether it makes sense to compare the utility of existent agents and merely potential, non-existent ones, but that would get long.
Do you think there could be an amount of suffering at the end of a life that would outweigh 20 good years? (Including that this end could take very long.)
Yes; I’m basically not considering that, because I am not aware of the arguments for why that’s a likely kind of risk (vs. the risk of simple annihilation, which I do understand the basic arguments for). If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.
I think the “stable totalitarianism” scenario is less science-fiction than the annihilation scenario, because you only need an extremely totalitarian state (something that already exists or existed) enhanced by AI. It is possible that this would come along with random torture. This would be possible with a misguided AI as well.
Having kids does mean less time to help AI go well, so maybe it’s not so much of a good idea if you’re one of the people doing alignment work.
This argument works against anything you could do besides AI work, and thus has to be considered in that wider frame. Going to the gym also means less time to help AI go well. So does building a house, or watching Netflix. Some of these are longer time investments and some shorter, but the question still remains. First answer the question of how much effort you want to invest in helping AI go well vs. all the other things you could do, and then consider what fraction of the remainder goes to children.
Perhaps people who can’t contribute to AI alignment directly could help indirectly by providing free babysitting for the people working on AI alignment?
strong AGI could still be decades away
Heh, that’s why I put “strong” in there!
I agree with this take. I already have four children, and I wouldn’t decide against children because of AI risks.
Did you take such things into account when you made the decision, or decisions?
Not AI risk specifically. But I had lengthy discussions with a friend about the general question of whether it is ethical to have children. The concerns in our discussions were overpopulation and how bad the world is in general, and in Germany in particular. These much weaker concerns, compared to extinction, were enough for him not to have children. He also mentioned The Voluntary Human Extinction Movement. We still disagree on this. Mostly we disagree on how bad and failed the world is. I think it is not worse than it has been most of the time since forever. Maybe that’s because I perceive less suffering (in myself and in others) than he does. We also disagree on how to deal with overpopulation: whether to take local population into account, whether to weigh by consumption, whether to see this as an individual obligation or a collective one, or as an obligation at all. Still, we are good friends. Maybe that tells you something.
One scenario where you might want to have kids in general, but not if timelines are short, is if you feel positive about having kids, but you view the first few years of having kids as a chore (i.e., it costs you time, sleep, and money). So if you view kids as an investment of the form “take a hit to your happiness now, get more happiness back later”, then not having kids now seems justifiable. But I think that this sort of reasoning requires pretty short timelines (which I have), with high confidence (which I don’t have), and high confidence that the first few years of having kids are net-negative happiness for you (which I don’t have).
(But overall I endorse the claim that, mostly, if you would have otherwise wanted kids, you should still have them.)
My anecdotal evidence from relatives with toddlers is that the first few years with your first child are indeed the most stressful experience of your life. I barely even see them anymore, because all their free time is eaten by childcare. Not sure about happiness, but people who openly admit to regretting having their kids face huge social stigma, and I doubt you could get an honest answer to that question.