Frankly, it feels more rooted in savannah-brained tribalism & human interest than an even-keeled analysis of which factors are actually important, neglected, and tractable.
Um, I’m not attempting to do cause prioritization or action-planning in the above comment. More like sense-making. Before I move on to the question of what we should do, I want to have an accurate model of the social dynamics in the space.
(That said, it’s not a foregone conclusion that actionable takeaways will come out of this analysis. If the above story is true, I should make some kind of update about the strategies that EAs adopted with regard to OpenAI in the late 2010s. Insofar as those strategies were mistakes, I don’t want to repeat them.)
It might turn out that the above story is indeed “naive/misleading and ultimately maybe unhelpful”. I’m certainly no expert at understanding these dynamics. But just saying that it’s naive, or that it seems rooted in tribalism, doesn’t help me or others build a better model.
If it’s misleading, how is it misleading? (And is “misleading” different from “false”? Are you saying “yeah, this is technically correct, but it neglects key details”?)
Admittedly, you did label it as a tl;dr, and I did prompt you to elaborate on a react. So maybe it’s unfair of me to request even further elaboration.
Fair enough!
Sounds good.