The <1% comes from a combination of:
1. Thinking “superintelligence”, as described by Yudkowsky et al., will not be built in the next 20 years. “AGI” means too many different things; in some sense we already have AGI, and I predict continued progress in AI development.
2. Thinking the kind of stronger AI we’ll see in the next 20 years is highly unlikely to kill everyone. I’m less certain about true superintelligence, but even in that case, I’m far less pessimistic than most lesswrongers.
Very rough numbers would be p(superintelligence within 20 years) = 1%, p(superintelligence kills everyone within 100 years of being built) = 5%, though it’s very hard to put numbers on such things while lacking info, so take this as gesturing at a general ballpark.
I haven’t written much about (1). Some of it is intuition from working in the field and using AI a lot. (Edit: see this from Andrej Karpathy, which gestures at some of this intuition.)
Re (2), I’ve written a couple of relevant posts (post 1, post 2 - review of IABIED), though I’m somewhat dissatisfied with their level of completeness. The TL;DR is that I’m very skeptical of appeals to coherence-argument-style reasoning, which is central to most misalignment-related doom stories (relevant discussion with Raemon).
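As a rough illustration of how the numbers above cash out to <<1%, here is a minimal sketch. It assumes the two probabilities simply multiply and ignores any other paths to doom; neither assumption is spelled out in the comment itself.

```python
# Rough numbers quoted above (illustrative only, not a precise model).
p_si_within_20y = 0.01        # p(superintelligence within 20 years)
p_kills_all_given_si = 0.05   # p(superintelligence kills everyone within 100 years of being built)

# Assumption: these simply multiply (treating "kills everyone" as conditional on
# superintelligence being built, and ignoring other routes to doom).
p_doom_20y = p_si_within_20y * p_kills_all_given_si

print(f"p(doom via superintelligence built in the next 20 years) ≈ {p_doom_20y:.4%}")
# -> 0.0500%, i.e. well under 1%, consistent with the stated <<1%
```

This only shows the orders of magnitude; the comment itself presents the numbers as gesturing at a general ballpark.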
You should’ve said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.
It should instead be “I don’t want to raise a family because of AGI”
Feels like you’re in Norway in medieval times, and some dude says he doesn’t know if he can start a family because of this plague that’s supposedly wreaking havoc in Europe, and worries it could come to Norway. And you’re like “Well, stuff will change a lot, but your parents also had tons of worries before giving birth to you.”, and then later it’s revealed you don’t think the plague actually exists, or, if it exists, that it’s no worse than the common cold or something.
She did say this in her original comment. And it’s not really similar to denying the Black Death, because the Black Death, crucially, existed.
She said there was a >90% chance he could raise a family in a somewhat normal way; she did not say the <<1% part.
Whatever. Do you not get the point of what I’m trying to say? I’m not criticizing her for having a low p(superintelligence soon) or p(ASI kills everyone soon). I’m criticizing her for not making it clear enough that those are the differences in belief causing her to disagree with him.
She disclosed that she disagreed both about superintelligence appearing within our lifetimes and about x-risk being high. If you missed that paragraph, that’s fine, but it’s not her error.
She did not disclose the information I’m specifically mentioning? Post a screenshot of her original comment and underline in red where she says <<1% pdoom.
Why are you accusing me of not reading her comment? That’s rude. I would tell you if I reread her comment and noticed information I’d missed.
You didn’t say “you didn’t say your probability was <1%”, you said “You should’ve said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.” However, the fact that she has a very different perspective on AI risk than the OP or most people on lesswrong was evident from the fact that she stated as much in the original comment. (It was also clear that she didn’t think superintelligence would be built within 20 years, because she said that, and that she didn’t think superintelligence was likely to kill everyone because she said that too).
I said she should’ve said “this” in her original comment, when I was commenting on her reply explaining the extent of the difference. The fact that this difference is/was load-bearing is clear from the fact that Nikola replied with:
I think we might be using different operationalizations of “having a family” here. I was imagining it to mean something that at least includes “raise kids from the age of ~0 to 18”. If x-risk were to materialize within the next ~19 years, I would be literally stopped from “having a family” by all of us getting killed.
But she said the same things in her original comment as in that reply, just with less detail. Nikola did reply with that, presumably because Nikola believes we’re all doomed, but Nina did say in her original comment that she thinks Nikola is way overconfident about us all being doomed.
Okay, there is a difference between thinking we’re not all doomed and thinking p(doom) << 1%.