To my understanding, since the events described in the OP took place, MIRI and CFAR have been very close and have only grown closer. As far as I can see, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong. Since you're one of the leaders of CFAR, that makes you one of the leading people behind the things the OP is critical of.
The OP even writes that she thought at the time, and still thinks, that CFAR was corrupt in 2017:
Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don’t think this judgment was in error, though many of the following actions were; …)
Here she mentions that Ziz also thought CFAR was corrupt, and I remember that on her blog, Ziz placed you at the center of said corruption.
So, how is all of this compatible with you agreeing with the OP?
I am (or was) an X% researcher, where X < Y. I wish I had given up on AI safety earlier. I suspect it would have been better for me if AI safety resources explicitly said things like "if you're less than Y, don't even try", although I'm not sure I would have believed them. Now I'm glad that I'm no longer trying to do AI safety and instead work at a well-paying, relaxed job doing practical machine learning. So I think pushing too many EAs into AI safety will lead to those EAs suffering much more, as happened to me. I don't want that to happen, and I don't want the AI Alignment community to stop saying "You should stay if and only if you're better than Y."
Actually, I wish there were more resources on AI Alignment careers oriented toward the reader's own interests. With normal universities and jobs, people analyze how to get in, have a fulfilling career, earn good money, avoid burnout, etc. As a result, people can read this material and properly assess whether pursuing those jobs or universities makes sense for their own good. But with a career in AI safety, this is not the case. All the resources look out not only for the reader, but also for the whole EA project. I think this can easily burn people out.