[Question] Are LLMs sufficient for AI takeoff?

I have an intuition, and I may be heterodox here, that LLMs on their own are not sufficient for takeoff, no matter how powerful and knowledgeable they get. Put differently, the reasons that powerful LLMs are profoundly unsafe are primarily social: e.g. they will be hooked up to the internet so they can make iterative refinements to themselves; or they will be run continuously, allowing their simulacra to act; etc. Someone will build a system with an LLM as a component, and it is that system that kicks things off.

I’m not making an argument for safety here; after all, the main reason nukes are dangerous is that people might use them, which is likewise a social reason.

I’m asking because I have not seen this view explicitly discussed and I would like to get people’s thoughts.
