Some of the posts I’ve written (http://lesswrong.com/lw/ksa/the_metaphormyth_of_general_intelligence/ and http://lesswrong.com/lw/hvo/against_easy_superintelligence_the_unforeseen/) could be used to build a good anti-MIRI steelman, but I haven’t seen them used that way.
The most convincing anti-MIRI argument? AI may not develop in the way you’re imagining. The most convincing rebuttal? We only need a decent probability of that happening to justify worrying about it.