Assuming this were the case, wouldn't it actually imply slightly more optimistic long-term odds for humanity? A world where AI development actually resembles something like natural evolution and (maybe) throws up red flags that generate interest in solving alignment would be good, no?
I worry that the strategies we might scrounge up to avoid those red-flag failures will be of the sort that are very unlikely to generalise once the superintelligence risks do eventually rear their heads.
Ok, sure, but extra resources and attention are still better than none.
Minor point here, but I think this has less to do with the potential commercial utility of LLMs and more to do with the reticence of large tech companies to publicly release an LLM that poses a significant risk of social harm. My intuition is that, compared with people on LW, the higher-ups at the likes of Google are relatively more worried about those risks and the associated potential PR disaster. Entirely safety-proofing an LLM in that way seems like it would be incredibly difficult as well as subjective, and may greatly slow the release of such models.