My inside-view perspective: MIRI failed in part because they’re wrong and philosophically confused. They made incorrect assumptions about the problem, and so of course they failed.
I did my PhD in this field and have authored dozens of posts about my beliefs, critiques, and proposals. Many of those posts lay out my disagreements with MIRI/EY, such as Inner and Outer Alignment Decompose One Hard Problem Into Two Extremely Hard Problems (voted into the top 10 of the LessWrong review for that year), Many Arguments for AI X-Risk Are Wrong, and Some of My Disagreements with List of Lethalities. You might disagree with me, but I am not naïve in my experience, nor cavalier in coming to this conclusion.