I’d love to hear your reasoning for why making a video is bad.
I didn’t say that making a video would always be bad! I agree that if the median person reading your comment would make a video, it would probably be good. I only disputed the claim that making a video would always be good.
Oh, cool. Do you have a clear example of a blunder someone should not make when making such a video?
Obviously you can’t forecast all the effects of making a video; there could be some probability mass on a negative outcome even while the mean and median are clearly positive.
Suppose Echo Example’s video says, “If ASI is developed, it’s going to be like in The Terminator—it wakes up to its existence, realizes it’s more intelligent than humans, and then does what more intelligent species do to weaker ones. Destroys and subjugates them, just like humans do to other species!”
Now Vee Viewer watches this and thinks “okay, the argument is that the ASIs would be a more intelligent ‘species’ than humans, and more intelligent species always want to destroy and subjugate weaker ones”.
Having gotten curious about the topic, Vee mentions this to their friends, and someone points them to Yann LeCun claiming that people imagine killer robots because they fail to imagine that we could just build an AI without the harmful human drives. Vee also runs into Steven Pinker arguing that history “does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems”.
So then Vee concludes that oh, that thing about ASI’s risks was just coming from a position of anthropomorphism and people not really understanding that AIs are different from humans. They put the thought out of their head.
Then some later time Vee runs into Denny Diligent’s carefully argued blog post about the dangers of ASI. The beginning reads: “In this post, I argue that we need a global ban on developing ASI. I draw on the notion of convergent instrumental goals, which holds that all sufficiently intelligent agents have goals such as self-preservation and acquiring resources...”
At this point, Vee goes “oh, this is again just another version of the Terminator argument, LeCun and Pinker have already disproven that”, closes the tab, and goes to do something else. Later, Vee happens to have a conversation with their friend, Ash Acquaintance.
Ash: “Hey Vee, I ran into some people worried about artificial superintelligence. They said we should have a global ban. Do you know anything about this?”
Vee: “Oh yeah! I looked into it some time back. It’s actually nothing to worry about; see, it’s based on this mistaken premise that intelligence and a desire to dominate would always go hand in hand, but actually, people who’ve spoken to AI researchers and evolutionary psychologists about this...”
Ash: (after listening to Vee explain this for half an hour) “Okay, that’s really interesting, you seem to understand the topic really well! Glad you’d already looked into this, now I don’t need to. So, what else have you been up to?”
So basically: making weak arguments that viewers find easy to refute, so that they will no longer listen to better arguments later (arguments that those viewers would otherwise have listened to).
Thanks for the reply. If you are making a video, I agree it’s not a good idea to put weaker arguments in it if you know stronger ones.
But I strongly disagree with the idea that you should therefore defer to EA / LW leadership (or generally, to anyone with more capital/attention/time), and either not publish your own argument or publish their argument instead of yours. If you think an argument is good and other people think it’s bad, I’d say post it.
I also strongly disagree with that idea.