That’s a good summary of your post.
I largely agree, but to be fair we should note that MIRI started working on AI safety theory long before the technology needed for practical experimentation with human-level AGI existed; to run such experiments, you need to be close to AGI in the first place.
Now that we are getting closer, the argument for prioritizing experiments over theory becomes stronger.