This seems like an incredibly biased presentation, with the author never realizing the depth of his own bias. Then again, he writes "My goal was to convince my students that all of us are going to be killed by an artificial intelligence," not to probe the validity of the point, so his bottom line was already written.
He says "I presented a neutral summary" after judiciously guiding the students through one-sided claims about AGI and their refutations (claims of the form "AI can never play such-and-such," each followed by the news that it now does), while skipping any of the claims that have not (yet) been refuted, then spicing it up with Terminator quotes.
He says "At all points in the discussion I did my best to appear neutral and to not reveal my views" right after scaring them with a bomb in a trash can.
He assigned no homework and gave the students no time outside class to come up with counter-arguments.
He writes:
In my classes, my primary goal was to teach students how to construct and assess arguments… Arguments can be assessed. If an argument has flaws, you can find those flaws. If you find flaws in an argument, the argument is refuted. If you cannot find flaws, you can take time to think about it more. If you still cannot find flaws, you should consider the possibility that the argument has no flaws. And if there are no flaws in an argument, then the conclusion of that argument has to be true, no matter what that conclusion might be.
The idea that an argument can sometimes be tested experimentally seems utterly foreign to him (even when the test is in his favor, as with the "AI can never be better at chess" claim). Must be something about philosophers in general, I suppose.
He primed his students in advance:
Up to this point, I had not presented any AI material to anyone in any of the classes. I had only remarked a couple of times that the AI arguments were “awesome” or “epic” or some such.
He did not attempt to provide a balanced context by inviting (or at least quoting) an expert in the area who does not share his views.
So his conclusion, that it is possible to convince a person who has never thought about a topic before of the dangers of AGI, was foregone. He could probably have convinced them that AGI is the second coming of Christ, if he bothered (it is a Catholic college, so the leap is not that large).
“My goal was to convince my students that all of us are going to be killed by an artificial intelligence,” not to probe the validity of the point, so his bottom line was already written.
Sort of. Assuming he was already convinced that "...all of us are going to be killed by an artificial intelligence," he knew he was trying to convince his students of that, but he did not know whether he would succeed with this method. He wasn't testing the dangers of AI; he was testing a method of persuasion.