> In other words, if you set up the allegory so as to force a particular conclusion, that proves that that’s the proper conclusion in real life, because we all know that the allegory must be correct.
I’m a teacher (in real life). I set up the allegory to communicate a set of my beliefs about AI. I think it is more useful as a piece that fleshes out the arguments: a philosophical dialogue.
> I don’t believe for one moment that using a Balrog analogy actually makes people understand the argument when they otherwise wouldn’t.

I disagree; I think there is value in analogies when they are used carefully.

> It is a fallacy to think of AI risk as like Balrogs because someone has written a plausible-sounding story comparing it to Balrogs. And that seems to be the main effect of the Balrog analogy.

Yes, I agree with this part: you have to be careful not to implicitly use fiction as evidence.