Edit #2: Severe identity error on my part! I badly confused who's who from memory: I made the video summary when I first saw the video, turned that summary into a chattable bot today, and lost track of who was who in the process. I stand by the point made here, but it's somewhat out of context; it relates to the kind of thing Melanie Mitchell said only by virtue of being made by people who make similar points. I'm not going to delete this comment, but I'd appreciate folks wiping out all the upvotes.
I think that the AI x-risk crowd is continuing to catastrophically misunderstand a key point in the AI Ethics researchers' view: their claim that there are, in fact, critical present-day harms from AI, that those harms should be acknowledged, and that they should in fact be solved very urgently. I happen to think that x-risk from AI is made of the same type of threat; but even if it weren't, I think that:
1. The AI Ethics crowd are being completely unreasonable in dismissing x-risk. You think that somehow capitalism is going to kill us all, and AI won't supercharge capitalism? What the hell are you smoking?
2. Even setting aside the threat from capitalism, AI will do the same sort of stuff that makes capitalism bad, but so much harder that even capitalism won't be able to take it.
We can’t have people going “no, capitalism is fine actually” to someone whose whole point is that capitalist oppression is a problem. They’ll just roll their eyes. Capitalism is unpopular actually!
Also, I don't really need to define the word for the type of argument one would have with Gebru and Bender; they'd know what it means. But I would define the problem behaviors as optimization toward a numeric goal (increase investor payout) without regard for the human individuals in the system (workers, customers; even investors don't really get a good deal besides the money number going up). That's exactly what we're worried about with AI—but now without humans in the loop. Their claims that it's just hype are nonsense; they believe LeCun's disinformation, and he's an agent of one of the nastiest capitalist orgs around!
Melanie Mitchell and Meg Mitchell are different people. Melanie was the participant in this debate, but you seem to be ascribing Meg’s opinions to her, including linking to video interviews with Meg in your comments.
Wait, whoops. Let me retrace identities here; sounds like a big mistake. Sorry 'bout that, Meg & Melanie, when you see this post someday, heh.
Edit: oops! The video I linked doesn't contain a Mitchell at all! It's Emily M. Bender and Timnit Gebru, both of whom I hold in high regard for their commentary on near-term AI harms, and both of whom I'm frustrated with for not recognizing how catastrophic those very harms could become if they keep getting worse.
a key point in the Mitchell view: that there are, in fact, critical present-day harms from AI, that they should be acknowledged, and that they should in fact be solved very urgently.
I didn't watch the debate, but it seems to me that the right approach would be to agree with Mitchell about the short-term harms, and then say something like "smaller AIs—smaller problems, larger AIs—larger problems". She agrees with the first part, and it would be difficult to claim that AIs will never get stronger, or that stronger AIs cannot create greater problems (at least the same kind of problems, at a greater scale).
Optionally (though this is a dirty move), ask Mitchell about her opinion on global warming. That, too, is just models and hypotheses, and there actually exist catastrophic sci-fi movies about it. Then try to map her response onto the AI x-risk case.
Gebru and Bender express their opinions on such things in more detail in the video I linked. Here's the overcompressed summary, which badly miscompresses the video but makes a reasonable pitch for watching the full thing, so you can respond to the points eloquently rather than to the facsimile. If you can put your annoyance at them missing the point about x-risk on hold, and just try to empathize with their position of having also been trying to ring alarm bells and being dismissed, and see how they're feeling like the x-risk crowd is just controlled opposition being used to dismiss their warnings, I think it could be quite instructive.
I also strongly recommend watching this video (the timestamp is about 30 seconds before the part I'm referencing), where Bengio and Tegmark have a discussion with, among others, Tawana Petty, and they also completely miss the point about present-day harms. Note that, as far as I can tell, she's not frustrated that they're speaking up; she's frustrated that in conversation they seem oblivious to what the present-day harms even are. When she brings them up, they defend themselves as having already done something, which in my view misses the point: she was looking for action on present-day harms to be woven into the action they're demanding from the start. "Why didn't they speak up when Timnit got fired?", or so. She's been pushing for people like them to speak up for years, and she appears frustrated that even when they do, they won't mention the things she sees as the core problems. Whether or not she's right that the present-day problems are the core, I agree enthusiastically that present-day problems are intensely terrible and a major issue we should acknowledge and integrate into plans for action as best we can. This will remain a point of tension, as some won't want to "dilute" the issue by bringing up "controversial" issues like racism.

But I'd like to at least zoom in on this core point of conflict, since it seems to get repeatedly missed. We need to be integrating here, not redirecting away. I don't know how to do that off the top of my head. Tegmark responds to this, but I feel it's a pretty weak response composed on the fly, and it'd be worth the time to ponder asynchronously how to respond more constructively.
“This has been killing people!”
“Yes, but it might kill all people!”
“Yes, but it’s killing people!”
“Of course, sure, whatever, it’s killing people, but it might kill all people!”
You can see how this is not a satisfying response. I don’t pretend to know what would be.
“Of course, sure, whatever, it’s killing people, but it might kill all people!”
But this isn't the actual back-and-forth; the third line should be "no it won't, you're distracting from the people currently being killed!" This is all a game to subtly beg the question. If AI is an existential threat, all current mundane threats like misinformation, job loss, AI bias, etc. are rounding errors to the total harm; the only situation where you'd talk about them is if you've already granted that the existential risks don't exist.
If a large asteroid is heading toward Earth, and some group thinks it won't actually hit Earth but merely pass harmlessly close by, and they start talking about the sun's reflections off the asteroid making life difficult for people with sensitive eyes… they are trying to get you to assume the conclusion.
Sure, I agree, the asteroid is going to kill us all. But it would be courteous to acknowledge that it's going to hit a poor area first, and they'll die a few minutes earlier. Also, uh, all of us are going to die; I think that's the core thing! We should save the poor area, and also all the other areas!
rounding errors to the total harm; the only situation where you'd talk about them is if you've already granted that the existential risks don't exist
It's possible to consider relatively irrelevant things, such as everything in ordinary human experience, even when there is an apocalypse on the horizon. The implied contextualizing norm demands an inability to consider them, or at least raises the cost of doing so.
Edit: here's a Poe bot which embeds an awkward summary of a video interview they did. I have hidden the prompt and directed Claude not to represent itself as an accurate summary; however, Claude is already inclined to express views similar to theirs (especially in contrast to ChatGPT, which is not), so I think Claude could be an interesting debate partner, especially for those who look down on their views. Here's an example conversation where I pasted an older version of this comment, made before I realized I'd gotten identities wrong. I also strongly recommend watching the video it's based on, probably at 2x speed.
Her, Mitchell is a woman
Thanks, fixed.