To what extent, if any, will this course acknowledge that some people disagree very vigorously with what I take to be the positions you’re generally advocating for?
(I ask not because I think those people are right and you’re wrong—I think those people are often wrong and sometimes very silly indeed and expect I would favour your position over theirs at least 80% of the time—but because I think it’s important that your students be able to distinguish “this is uncontroversial fact about which basically no one disagrees” from “this is something I am very confident of, but if you talked to some of the other faculty they might think I’m as crazy as I think they are” from “this is my best guess and I am not terribly sure it’s right”, and the fact that pretty much all the required reading is from an LW-ish EA-ish perspective makes me wonder whether you’re making those distinctions clearly. My apologies in advance if I turn out to be being too uncharitable, which I may well be.)
In addition to acknowledging uncertainty, I think the proper way to address this is to ‘teach the controversy.’ Have some articles and tweets by Yann LeCun peppered throughout, for example. Also that Nature article: “Stop Worrying About AI Doomsday.” Etc.
I’m not sure how much space to give the more unreasonable criticisms like the ones you point out. My call would be to prioritize the highest-quality considerations in all directions over how influential or authoritative the critics are. That these voices exist does deserve mention, of course, though perhaps more as an illustration of the social dimension of the debate than the factual one.
I agree those criticisms are pretty unreasonable. However, I think they are representative of the discourse: Yann LeCun is a very important and influential person, and also an AI expert, so he’s not cherry-picked.
Also see this recent review from someone who seems thoughtful and respected, Notes on Existential Risk from Artificial Superintelligence (michaelnotebook.com), which says:
I will say this: those pieces all make a case for extraordinary risks from AI (albeit in different ways); I am somewhat surprised that I have not been able to find a work of similar intellectual depth arguing that the risks posed by ASI are mostly of “ordinary” types which humanity knows how to deal with. This is often asserted as “obviously” true, and given a brief treatment; unfortunately-often the rebuttal is mere proof by ridicule, or by lack-of-imagination (often people whose main motivation appears to be that people they don’t like are worried about ASI xrisk). It’s perhaps not so surprising: “the sky is not falling” is not an obvious target for a serious book-length treatment. Still, I hope someone insightful and imaginative will fill the gap. Three brief-but-stimulating shorter treatments are: Anthony Zador and Yann LeCun, Don’t Fear the Terminator (2019); Katja Grace, Counterarguments to the basic AI x-risk case (2022); and David Krueger, A list of good heuristics that the case for AI x-risk fails (2019).
I.e. he thinks there just isn’t much genuinely good criticism out there, to the point where he counts LeCun among the top three! (And note that the other two aren’t exactly harsh critics; they’re more like AI safety people playing devil’s advocate...)
Completely agreed on the state of the discourse. I think the more interesting discussions start once you acknowledge at least the vague general possibility of serious risk (see e.g. the recent debate posts on the EA Forum). I still think those arguments are wrong, but they’re at least worth engaging with.
If I were giving a course, I just wouldn’t know what to do with actively bad opinions beyond noting “this person says XYZ” and maybe having the students reason about them as an exercise. But do that too often and it starts to feel like gloating.
Honestly, I think the strongest criticism will come from someone arguing that there isn’t enough leverage in our world for a superintelligence to be much more powerful than us, for good or ill. People who argue that ASI is absolutely necessary because it will make us immortal and let us colonise the stars, yet see no reason to worry that it might direct that same vast power toward less desirable goals, are just unserious. There is also, obviously, the possibility that AGI is still far off, but that says little about whether it is dangerous, only about whether the danger is imminent.