On the contrary, my core thesis is that AI risk advocates are being irrational. It’s implied in the title of the post ;)
Specifically, I think they are arriving at their beliefs via philosophical arguments about the nature of intelligence that are severely lacking in empirical data, and then further shooting themselves in the foot by rationalizing reasons not to pursue empirical tests. Adopting a belief without evidence, and then refusing to test that belief empirically: I'm willing to call a spade a spade. That is most certainly irrational.
That’s a good summary of your post.
I largely agree, but to be fair we should consider that MIRI started working on AI safety theory long before the technology required for practical experimentation with human-level AGI existed; to run such experiments you need to be close to AGI in the first place.
Now that we are getting closer, the argument for prioritizing experiments over theory becomes stronger.