Yeah, fair enough. I don’t count Musk as a rationalist rationalist. He’s just very confused about anything that doesn’t give you real-world feedback quickly. He’s a weird case of a human who is exceptional in a ≤4-year time horizon and then has median human thinking abilities for anything beyond that. (Though noteworthy that he has taken no steps towards technical solutions for the birth rate issue, which, ya know, revealed preferences…)
If I were to steelman… hm. If every nation were at South Korea levels in 200 years, we’d likely be back to pre-Industrial-Revolution levels of technological & industrial development, because a lot of the tacit knowledge required to keep civilization going is stored in people’s heads and can’t be squeezed into fewer people. Furthermore, a declining population leaves less slack for R&D, because everyone is busy caring for mostly non-productive elders instead of innovating. This might plausibly start by 2050 or so, so unless one has fairly long AI timelines it’s a non-issue.
I guess any AI pause that goes that far out has a similar issue, unless we allow for genetic engineering+exowombs to proliferate (and even then it feels like a toss-up to me, bracketing more AI progress).
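A crude back-of-the-envelope on the scale of that decline (my own sketch; the fertility figure, replacement rate, and generation length are illustrative assumptions, not from the comment above):

```python
# Back-of-the-envelope: population shrinkage at sustained South Korea-like fertility.
# All numbers are illustrative assumptions, not a demographic model
# (ignores age structure, migration, and mortality changes).
TFR = 0.72            # roughly South Korea's total fertility rate
REPLACEMENT = 2.1     # children per woman needed for a stable population
GEN_YEARS = 30        # assumed years per generation

shrink_per_gen = TFR / REPLACEMENT   # each generation is ~0.34x the last
generations = 200 // GEN_YEARS       # ~6 generations in 200 years
factor = shrink_per_gen ** generations

start_pop = 8e9
print(f"Population after 200 years: {start_pop * factor:.2e}")  # roughly 1.3e7
```

Tens of millions of people, i.e. far below anything that could sustain current industrial specialization, which is the point of the tacit-knowledge argument.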
A simple steelman is something like “if we’re very wrong about A[G/S]I, then birth rates are a big issue, so we better invest some resources into it, in case we’re wrong”.
This steelman is a valid position to hold, but it doesn’t work as a steelman in this context, because attributing this view to people like Musk is probably quite a stretch (and probably also to the other people the OP is referring to, but I’m not tracking that kind of stuff, so unsure).
> I guess any AI pause that goes that far out has a similar issue
If the issue to be fixed is just population (growth), then yeah.
But if we’re fine with a lower population[1], without the demographic collapse degrading everything the way it would by default, then gradual development and adoption of “sub-AGI” AI could automate a bunch of stuff in a way that roughly keeps pace with the declining population. (Assuming you meant “AGI-ward progress pause”, which maybe you didn’t.)
> A simple steelman is something like “if we’re very wrong about A[G/S]I, then birth rates are a big issue, so we better invest some resources into it, in case we’re wrong”.
This would be understandable if it weren’t for the timelines here. Let’s say AGI takes ~10x as long (40 years instead of 4, from the 2026 date), and the decline to a few billion people (which, to note, is just the population of the 1900s) happens in 100 years instead of 200: that would be 2066 vs. 2126.
Even with these absurdly AGI-unfriendly timelines, it’s still not even close! That suggests to me very shaky confidence that AGI will actually happen, unless one believes that a superintelligence smarter than all humans put together wouldn’t be able to help with the birthrate.
Edit: Actually I’m messing up the math a little, because I’m mixing up scenarios and hypotheticals. If the whole world had South Korea levels, population would be much lower than a few billion in 100 years. But that’s already the unrealistic worst-case scenario; the world overall still has a positive replacement rate right now, and estimates put the population at around 10 billion by 2084, which is still two decades past even the order-of-magnitude-slower AGI prediction.
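The timeline comparison above is just arithmetic; as a sketch, using the hypothetical numbers from the comment:

```python
# Compare a deliberately AGI-unfriendly timeline against a fast
# population-decline timeline. All inputs are the hypothetical
# numbers from the comment above, not forecasts.
BASE_YEAR = 2026
agi_year = BASE_YEAR + 4 * 10    # 10x slower than a 4-year prediction -> 2066
decline_year = BASE_YEAR + 100   # decline to a few billion -> 2126

gap = decline_year - agi_year
print(f"AGI in {agi_year}, decline bites in {decline_year}: {gap} years of slack")
```

Even the deliberately pessimistic AGI date arrives decades before the decline scenario does.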
People try not to be too explicit about this in public, but the real concern is about dysgenics and human capital, not mere underpopulation. Having ten billion people in 2084 may not be good if, statistically, fewer of them are the kind of people needed to deal with the problems of ten billion people in 2084.
> Though noteworthy that he has taken no steps towards technical solutions for the birth rate issue
I mean, as individual men in the West go, having fourteen children is pretty above-average, and he seems to have gotten the process down to a science. He’s not a Saudi oil baron with three digits of offspring, but he’s certainly taken a Silicon Valley approach to it.
Pro-natalists, in general, seem to take a ‘lead-by-example’ tack, which isn’t horrible, considering it demonstrates an understanding of the consequences of materially encouraging people who wouldn’t otherwise want kids to have them. I’d also say that none of the policy approaches currently taken seriously by major governments have demonstrated much, if any, success, so ‘lead by example’ would seem to be the default.
I wonder why he hasn’t tried to clone himself. His younger twins would be likely to have similar priorities once they’ve grown up. Probably technical and legal hurdles.
Isn’t the rumour that he has many IVF+embryo-selected kids with different women? (Is there a better source for this?)
Wiki only says:
> Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.
[1] I’m not in favor, but also wouldn’t consider it a great tragedy if we can intervene to offset the consequences.