My understanding of Dean’s position is that he totally rules out the possibility of AI wiping out humanity mainly based on this “superintelligence is not omnipotent” argument.
But again, overall I agree with your points—I just think it’s better not to be insulting about it, and give people like Dean who are engaging in good faith the benefit of the doubt.
While I sort of see your point, I do think Dean has a higher rate of randomly insulting other people (including myself) than pretty much anybody else respected within rationalist circles for their discourse norms[1].
[1] (I’m not including people who are respected for other things, the way some respect Musk for his ability to Get Things Done; I don’t think many people here think of Musk as an ideal debate participant).
Someone else doing something I think is ineffective doesn’t imply that it’s effective to do it! And yes, the fact that one side of a debate does the ineffective thing makes it more likely the other side will as well, but that’s not any sort of vindication! To quote myself:
Manheim’s Law of Positive-Sum Badness:
In polarized disputes, evidence that one side is stupid, malicious, or evil increases the probability that the opposing side is too.
Fair, it depends a lot on whether you think the badness of insulting someone is due to intrinsic vs game-theoretic reasons.
Do you have an example you could share, here or privately, of him being rude? The more I look into his stuff, the more it seems he regularly mocks other people and blocks anyone challenging him. I mean, I appreciate him, as a former Trump admin guy, saying the obvious on the Anthropic dow situation.
DM’d
Thanks for the quote
He is simply updating his timelines
Basically this still supports my thesis; I don’t see any sign that he updated here, because he says “highly unlikely” now.
I think you are misreading him here. From reading the rest of his stuff and his response, I would say he is merely referring to AI causing a “catastrophe” in the sense of a major disaster, similar to a tornado ripping through a town or AI hacking all the airports.
Thanks for the tweets again, but I don’t see clear evidence here that engaging with the community on Twitter has updated him much.
If you don’t see a difference between “totally rules out” and “highly unlikely”, you REALLY need to go read the sequences.
I didn’t say he engaged with the community on Twitter. If you need direct evidence, show up at Lighthaven events he’s attending. Or go read posts where community members talk about Dean engaging with them—they disagree, sure, but look at his updates over the past 2 years: how many bits of evidence towards our view, against his prior views, do they represent? (Again, read the sequences.)
I see the difference, and have updated my comment accordingly. He believes it is highly unlikely, not impossible, though it’s unclear what he means exactly (<1%, perhaps?). I didn’t say he didn’t engage, just that from your tweets it is not so clear he updated meaningfully; he did update his timelines, probably based on recent advances. I still assume he talks about catastrophe in the “major disaster” kind of way, which is an unfortunate effect of using an unclear term here. Dean isn’t shy about using partisan/mocking language himself. I don’t like the idea of being talked about in mocking language while being unable to shoot back in a similar style, but opinions may vary.
Thanks for picking out these quotes from him. However, I do think they pretty much support my model of his views.
To be clear: I understand that he believes AI could pose catastrophic risks, but he’s probably thinking about serious catastrophic events here, like money being lost and some people dying.
That’s fair, I’ll think about softening this piece. Though I don’t think he is engaging very well with other people here; the way he talks about the “doomers” is clearly mocking too:
“One common assumption (though less prevalent with time) among many people in “the AI safety community” is that artificial superintelligence will be able to “do anything.” Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: “many people in ‘the AI safety community’ are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.” The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: “Well of course superintelligence will be able to do that. After all, it’s superintelligence. And because superintelligence will obviously be able to do that, you must agree with me that banning superintelligence is an urgent necessity.””
– from his text “2023” (that’s the name; he just posted it)
...which he has since updated heavily away from, after engaging honestly and reasonably with people from the community!
What exactly did he update? I saw that post where he apparently shortened his timelines.
He said something like that in the past, but has updated greatly: since then he has said that AI causing human extinction is only “highly unlikely”, and even more recently that “ai present catastrophic risks” and that “alignment may become a more central issue for me again depending on how well alignment seems to work for smarter-than-human widely deployed ai”.