Your Substack subtitle is “I won’t get to raise a family because of AGI”. It should instead be “I don’t want to raise a family because of AGI”
I think it’s >90% likely that if you want and try to, you can raise a family in a relatively normal way (i.e. your wife gives birth to your biological children and you both look after them until they are adults) in your lifetime.
Not wanting to do this because those children will live in a world dissimilar to today’s is another matter. But note that your parents also raised you to live in a world very dissimilar from the one they grew up in, and were motivated to do it anyway! So far, over many generations, people have been motivated to build families not by the confidence that their children will live in the same way as they did, but rather by other drives (whether it’s a drive towards reproduction, love, curiosity, norm-following, etc.).
I also think you’re very overconfident about superintelligence appearing in our lifetimes, and X-risk being high, but I don’t see why either of those things stop you from having a family.
I thought this was going to take the tack that it’s still okay to birth people who are definitely going to die soon. I think on the margin I’d like to lose a war with one more person on my team, one more child I love. I reckon it’s a valid choice to have a child you expect to die at like 10 or 20. In some sense, every person born dies young (compared to a better society where people live to 1,000).
I’m not having a family because I’m busy and too poor to hire lots of childcare, but I’d strongly consider doing it if I had a million dollars.
I mean, I also think it’s OK to birth people who will die soon. But indeed that wasn’t my main point.
(indeed, historically around half of children ever born died before the age of 15, so if a 50% chance of them not surviving to adulthood were a good reason not to have children then no-one “should” have had children until industrial times)
Having a child probably brings online lots of protectiveness drives. I don’t think I would enjoy feeling helpless to defend my recently born child from misaligned superintelligence, especially knowing that doing what little I can to avert their death, and that of everyone else I know, is much harder now that I have to take care of a child.
Excited to be a parent post singularity when I can give them a safe and healthy environment, and have a print-out of https://www.smbc-comics.com/comic/2013-09-08 to remind myself of this.
I disagree. Perhaps I’m biased because I’m an antinatalist, but I don’t personally think it’s ethical to create a thinking, feeling life that you know will end in less time than average.
Yes, it is true that people do die young. You can’t guarantee that your child won’t die of cancer at 10 or in a car crash at 20. But the difference is that no one sets out to create a child that they know will die of cancer at 10, no matter how badly they want a child.
Imagine being that child and being told that your parent did not expect you to have some of the same age-based experiences as them (learning to drive, first kiss, trying alcohol). I’m very sure you would feel like a cruel joke had been played on you.
There’s a cut of Blade Runner where Rutger Hauer’s character tells his creator:
“I want more life, Father.”
Yes, people had kids in the past when life expectancy was lower. But it’s important to note that they were under the impression that it was impossible to live much longer than they had seen people live. As far as they were concerned, when you turned 70 you were as good as dead.
But they did not expect their children’s lives to be cut short. Certainly, an illness or accident could take them (not to mention infant mortality), but the assumption was that their children would eventually have children of their own. For most of human history we have lived in “normal conditions” where the above assumption would be correct in the vast majority of cases.
We of the 21st century do not live in normal conditions. In short, I believe creating any human life is unethical, but creating one you fully expect to end quickly is even more unethical.
If my parents had known in advance that I would die at ten years old, I would still prefer them to have created me.
In “less time than average”, which average? In the “create a child that they know will die of cancer at 10” thought experiment, the child is destined to die sooner than other children born that day. Whereas in the “human extinction in 10 years” thought experiment, the child is destined to die at about the same time as other children born that day, so they are not going to have “less time than average” in that sense. Those thought experiments have different answers by my intuitions.
My intuitions about what children think are also different to yours. There are many children who are angry at adults for the state of the world into which they were born. Mostly they are not angry at their parents for creating them in a fallen world. Children have many different takes on the Adam and Eve story, but I’ve not heard a child argue that Adam and Eve should not have had children because their children’s lives would necessarily be shorter and less pleasant than their own had been.
I think we might be using different operationalizations of “having a family” here. I was imagining it to mean something that at least includes “raise kids from the age of ~0 to 18”. If x-risk were to materialize within the next ~19 years, I would be literally stopped from “having a family” by all of us getting killed.
But under a definition of “have a family” which means “raise a child from the age of ~0 to 1”, then yeah, I think P(doom) is <20% in the next 2 years and I’m probably not literally getting stopped.
Also to be clear, my P(ASI within our lifetimes) is like 85%, and my P(doom) is like 2⁄3.
Yeah, I think it’s very unlikely your family would die in the next 20 years (<<1%), so that’s the crux re: whether or not you can raise a family.
Huh, those are very confident AGI timelines. Have you written anything on your reasons for that? (No worries if not, am just curious).
The <1% comes from a combination of:
1. Thinking “superintelligence”, as described by Yudkowsky et al., will not be built in the next 20 years. “AGI” means too many different things; in some sense we already have AGI, and I predict continued progress in AI development.
2. Thinking the kind of stronger AI we’ll see in the next 20 years is highly unlikely to kill everyone. I’m less certain about true superintelligence, but even in that case, I’m far less pessimistic than most lesswrongers.
Very rough numbers would be p(superintelligence within 20 years) = 1%, p(superintelligence kills everyone within 100 years of being built) = 5%, though it’s very hard to put numbers on such things while lacking info, so take this as gesturing at a general ballpark.
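(A minimal sketch of how these two rough figures combine, using only the numbers given just above; the variable names are illustrative and not part of the original comment.)

```python
# Minimal sketch: how the two ballpark figures above combine.
# The inputs are the rough numbers given in the comment; everything else is
# plain multiplication, so read the result as an upper-bound gesture.
p_asi_within_20y = 0.01        # p(superintelligence within 20 years)
p_kills_all_given_asi = 0.05   # p(it kills everyone within 100 years of being built)

# Chance that superintelligence is built within 20 years AND eventually
# kills everyone; deaths within the 20-year window itself can only be lower,
# since the conditional figure allows up to 100 years after it is built.
p_doom_via_asi = p_asi_within_20y * p_kills_all_given_asi
print(f"{p_doom_via_asi:.4%}")  # 0.0500%, consistent with the "<<1%" figure earlier in the thread
```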
I haven’t written much about (1). Some of it is intuition from working in the field and using AI a lot. (Edit: see this from Andrej Karpathy that gestures at some of this intuition).
Re (2), I’ve written a couple of relevant posts (post 1, post 2 - a review of IABIED), though I’m somewhat dissatisfied with their level of completeness. The TLDR is that I’m very skeptical of appeals to coherence-argument-style reasoning, which is central to most misalignment-related doom stories (relevant discussion with Raemon).
You should’ve said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.
Feels like you’re in Norway in medieval times, and some dude says he doesn’t know if he can start a family because of this plague that’s supposedly wreaking havoc in Europe, and worries it could come to Norway. And you’re like “Well, stuff will change a lot, but your parents also had tons of worries before giving birth to you.”, and then later it’s revealed you don’t think the plague actually exists, or that if it does, it’s not worse than the cold or something.
She did say this in her original comment. And it’s not really similar to denying the black death, because the black death, crucially, existed.
She said >90% he can raise a family in a somewhat normal way, she did not say the <<1% part.
Whatever. Do you not get the point of what I’m trying to say? I’m not criticizing her for having a low p(superintelligence soon) or p(ASI kills everyone soon). I’m criticizing her for not making it clear enough that those are the differences in belief causing her to disagree with him.
She disclosed that she disagreed both about superintelligence appearing within our lifetimes and about x-risk being high. If you missed that paragraph, that’s fine, but it’s not her error.
She did not disclose the information I’m specifically mentioning? Post a screenshot of her original comment and underline in red where she says <<1% pdoom.
Why are you accusing me of not reading her comment? That’s rude. I would tell you if I reread her comment and noticed information I’d missed.
You didn’t say “you didn’t say your probability was <1%”, you said “You should’ve said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.” However, the fact that she has a very different perspective on AI risk than the OP or most people on lesswrong was evident from the fact that she stated as much in the original comment. (It was also clear that she didn’t think superintelligence would be built within 20 years, because she said that, and that she didn’t think superintelligence was likely to kill everyone because she said that too).
I said she should’ve said “this” in her original comment, when I was commenting on her reply explaining the extent of the difference. The fact that this difference was load-bearing is clear from the fact that Nikola replied with “I think we might be using different operationalizations of ‘having a family’ here.”
But she said the same things in her original comment as in that reply, just with less detail. Nikola did reply with that, presumably because Nikola believes we’re all doomed, but Nina did say in her original comment that she thinks Nikola is way overconfident about us all being doomed.
Okay, there is a difference between thinking we’re not all doomed and thinking p(doom) << 1%.
Well, for one, AGI is just likely to supercharge the economy and result in massive, albeit manageable (assuming democratic institutions survive) societal change. ASI is another thing altogether, in which case widespread death becomes orders of magnitude more likely.
“I also think you’re very overconfident about superintelligence appearing in our lifetimes, and X-risk being high, but I don’t see why either of those things stop you from having a family.”
The “also” in this sentence seems to imply that the disagreement about timelines and the level of risk posed by advanced AI is not your main point?
Correct. Though when writing the original comment I didn’t realize Nikola’s p(doom) within 19 years was literally >50%. My main point was that even if your p(doom) is relatively high, but <50%, you can expect to be able to raise a family. Even at Nikola’s p(doom) there’s some chance he can raise children to adulthood (15% according to him), which makes it not a completely doomed pursuit if he really wanted them.
I think it’s reasonable to say you “can’t have a family” if you expect whatever children and partner you have to be killed off fairly shortly if you try.
Like, a couple whose genetic problems make the likely outcome of having a child be that the child dies when they’re 3 years old can reasonably say “we can’t have (biological) children”.
Even though it’s technically true that they could have children if they really wanted to.
Huh, even assuming business as usual I’d guess the baseline probability of someone’s family dying is not <<0.05%/year (assuming the standard meaning of “<<” as “at least around an order of magnitude less”)
(at least in the US, though guessing from his name, Nikola Jurkovic might live somewhere less car-dependent than that)
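(A quick sketch of the per-year conversion implicit in this comparison; my own illustration. The “<<1%” and 20-year figures come from the exchange above, and spreading the risk evenly across the years is a simplifying assumption.)

```python
# Convert "<<1% over the next 20 years" into the per-year figure referenced above,
# under the simplifying assumption that the risk is spread evenly across the years.
p_family_dies_20y = 0.01   # the "1%" ceiling from the earlier comment
years = 20

p_per_year = p_family_dies_20y / years
print(f"{p_per_year:.3%} per year")  # 0.050% per year -- the 0.05%/year benchmark in the comment
```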
OMG, someone on LW with longer timelines than me.
The diff is distributed between the diffs of P(AGI[1] in n years) and P(doom | AGI in n years), so Nina might just have a lower timeline-conditional p(doom) and not significantly longer timelines.
[1] or ASI or whatever
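(A minimal sketch of the decomposition this comment points at, my own illustration: the first line uses Nina’s rough figures from earlier in the thread; the second line is a purely hypothetical split, not anyone’s stated view, included only to show that a similar bottom line can come from very different timelines.)

```python
# P(doom within n years) factors roughly as P(AGI within n years) * P(doom | AGI),
# so a low overall figure does not by itself say which factor is doing the work.
def p_doom(p_agi_within_n: float, p_doom_given_agi: float) -> float:
    return p_agi_within_n * p_doom_given_agi

# Nina's rough figures from earlier in the thread (20-year horizon).
print(p_doom(0.01, 0.05))      # 0.0005 -- long timelines AND low conditional doom

# Hypothetical alternative split: short timelines but very low conditional doom.
# Not anyone's stated view; it just yields roughly the same bottom line.
print(p_doom(0.85, 0.0006))    # ~0.0005
```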