I feel pretty confident about "this is a line of thinking that's reasonable and healthy to be able to entertain, alongside lots of other complicated case-by-case factors that each actor needs to weigh", but I don't know how to translate that into concrete recommendations for arbitrary LW users.
No. This is maybe clearer given the parenthetical I edited in. Speaking for myself, Critch’s recommendations in https://www.lesswrong.com/posts/7uJnA3XDpTgemRH2c/critch-on-career-advice-for-junior-ai-x-risk-concerned seemed broadly reasonable to me, though I’m uncertain about those too and I don’t know of a ‘MIRI consensus view’ on Critch’s suggestions.