She did not disclose the information I’m specifically mentioning? Post a screenshot of her original comment and underline in red where she says <<1% pdoom.
Why are you accusing me of not reading her comment? That’s rude. I would tell you if I reread her comment and noticed information I’d missed.
You didn’t say “you didn’t say your probability was <1%”, you said “You should’ve said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.” However, the fact that she has a very different perspective on AI risk than the OP or most people on lesswrong was evident from the fact that she stated as much in the original comment. (It was also clear that she didn’t think superintelligence would be built within 20 years, because she said that, and that she didn’t think superintelligence was likely to kill everyone because she said that too).
I said she should’ve said “this” in her original comment when I was commenting on her reply explaining the extent of the difference. That this difference was load-bearing is clear from the fact that Nikola replied with:
I think we might be using different operationalizations of “having a family” here. I was imagining it to mean something that at least includes “raise kids from the age of ~0 to 18″. If x-risk were to materialize within the next ~19 years, I would be literally stopped from “having a family” by all of us getting killed.
But she said the same things in her original comment as in that reply, just with less detail. Nikola did reply with that, presumably because Nikola believes we’re all doomed, but Nina did say in her original comment that she thinks Nikola is way overconfident about us all being doomed.
Okay, there is a difference between thinking we’re not all doomed and thinking p(doom) << 1%.