I think the points here are good, but it would be much better as a post if it was more respectful of Ball’s position, attempting to understand it instead of just attacking it. (Especially the conclusion.)
I agree that he’s not thinking about superintelligence, and I think the actual argument is about how much intelligence, even superintelligence, translates into the ability to do useful work. Being really smart and working really hard simply isn’t enough to do things that are actually implausibly difficult. If so, the question is whether the things that cause existential risk are implausibly difficult. (For biorisk, the answer may be yes, though it’s very unclear. But for exfiltration, persuasion, and scheming, the answer is pretty clearly no.)
To answer briefly here: My understanding of Dean’s position is that he believes it is highly unlikely that AI could wipe out humanity, mainly based on this “superintelligence is not omnipotent” argument. He specifically seems to believe that superintelligence won’t ever gain the capability to do so. This is the “superintelligence is going to be weak” view. But it is pretty apparent to me that much less than superintelligence is sufficient to kill us. I don’t believe that AI strictly needs to do something along the lines of “exfiltration, persuasion, and scheming”; there are many ways for it to win. Clearly, such ways exist; it is not impossible purely because ASI isn’t omnipotent. Edit: he believes it is “highly unlikely”, not impossible.
He said something like that in the past, but has updated greatly since then: he has said AI causing human extinction is only “highly unlikely”, and even more recently said that “ai present catastrophic risks” and that “alignment may become a more central issue for me again depending on how well alignment seems to work for smarter-than-human widely deployed ai”.
But again, overall I agree with your points—I just think it’s better not to be insulting about it, and give people like Dean who are engaging in good faith the benefit of the doubt.
While I sort of see your point, I do think Dean has a higher rate of randomly insulting other people (including myself) than pretty much anybody else respected within rationalist circles for their discourse norms[1].
(I’m not including people who, e.g., respect Musk for his ability to Get Things Done; I don’t think many people here think of Musk as an ideal debate participant).
Someone else doing something I think is ineffective doesn’t imply that it’s effective to do it! And yes, the fact that one side of a debate does the ineffective thing makes it more likely the other side will as well, but that’s not any sort of vindication! To quote myself:
Manheim’s Law of Positive-Sum Badness:
In polarized disputes, evidence that one side is stupid, malicious, or evil increases the probability that the opposing side is too.
Fair, it depends a lot on whether you think the badness of insulting someone is due to intrinsic vs game-theoretic reasons.
Do you have an example you could share, here or privately, of him being rude? The more I look into his stuff, the more it seems he regularly mocks other people and blocks anyone challenging him. I mean, I appreciate him saying the obvious on the Anthropic dow situation as a former Trump admin guy.
DM’d
Thanks for the quote.
He is simply updating his timelines.
Basically this still supports my thesis; I don’t see any sign he updated here, since he says “highly unlikely” now.
I think you are misreading him here. From reading the rest of his stuff and his response, I would say he is merely referring to AI causing a “catastrophe” in the sense of a major disaster: similar to a tornado ripping through a town, or AI hacking all the airports.
Thanks for the tweets again, but I don’t see clear evidence here that engaging with the community on twitter has updated him much.
If you don’t see a difference between “totally rules out” and “highly unlikely”, you REALLY need to go read the sequences.
I didn’t say he engaged with the community on twitter. If you need direct evidence, show up at Lighthaven events he’s attending. Or go read posts where community members talk about Dean engaging with them—they disagree, sure, but how many bits of evidence toward our view, against his prior views, are his updates over the past 2 years? (Again, read the sequences.)
I see the difference, and have updated my comment accordingly: he believes it is highly unlikely, not impossible, though it’s unclear what exactly he means (<1%, perhaps?). I didn’t say he didn’t engage; just from your tweets it is not so clear he updated meaningfully. He did update his timelines, probably based on recent advances. I still assume he talks about catastrophe in the “major disaster” kind of way, which is an unfortunate effect of using an unclear term here. Dean isn’t shy about using partisan/mocking language himself. I don’t like the idea of being talked about in mocking language while being unable to shoot back in a similar style, but opinions may vary.
Thanks for picking out these quotes from him. However, I do think they pretty much support my model of his views.
To be clear: I understand that he believes AI could pose catastrophic risk, but he’s probably thinking about serious catastrophic events here, like money being lost and some people dying.
That’s fair, I’ll think about softening this piece, though I don’t think he is engaging very well with other people here. The way he talks about the “doomers” is clearly mocking too:
“One common assumption (though less prevalent with time) among many people in “the AI safety community” is that artificial superintelligence will be able to “do anything.” Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: “many people in ‘the AI safety community’ are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.” The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: “Well of course superintelligence will be able to do that. After all, it’s superintelligence. And because superintelligence will obviously be able to do that, you must agree with me that banning superintelligence is an urgent necessity.””
– from his text “2023” (that’s the title; he just posted it)
...which he has since updated heavily away from, after engaging honestly and reasonably with people from the community!
What exactly did he update? I saw that post where he apparently shortened his timelines?
Also:
That seems incredibly not obvious, and I’d call it a straw man of your position if you hadn’t literally said it.
Do you mean “something less than a very strong superintelligence is sufficient” or do you mean “sufficient to do something that probably could kill us, if humans don’t pay much attention, and not with anything like certainty”?
Is it your position that it is not obvious that a new species can causally drive another species extinct without being orders of magnitude more intelligent? Because the earth has had millions of existence proofs of that in the history of life.
I mean, there are paths where non-superintelligence kills us. It looks plausible that we will just hand AI control over the military and give it direct access to bio labs.
I don’t think that either of those guarantees that most of humanity dies, much less everyone. Especially the latter, given what is actually possible.
The study seems to be about what experts predict to be possible, not what is actually possible, afaict?
I don’t know, but say it spawns a new pathogen each week, each very contagious and deadly. Then it spreads pathogens that cause mass crop death. Then come AI drones picking off larger groups of survivors. Then ground robots, small airborne drones. Then +10°C of climate change. One after the other. What’s impossible here?
This is the kind of rhetoric Dean supports and praises: https://x.com/deanwball/status/2026325817291104728
“This instinct seems to infect the far left across lots of domains: immigration, crime fighting, and the national debt to name a few. You can tell they’re just sort of yearning to submit our society to outside forces: mobs, international councils, or communist China. … They don’t believe in order, except brutal order under their heels.” – blaming resistance to AI datacenters on far left lunatics.
This new post is also not exactly free of mocking language:
“One common assumption (though less prevalent with time) among many people in “the AI safety community” is that artificial superintelligence will be able to “do anything.” Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: “many people in ‘the AI safety community’ are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.” The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: “Well of course superintelligence will be able to do that. After all, it’s superintelligence. And because superintelligence will obviously be able to do that, you must agree with me that banning superintelligence is an urgent necessity.””
So I feel like he should be able to handle my tone here, but I will possibly adjust it a bit.