[Question] Does increasing the power of a multimodal LLM get you an agentic AI?

I’m worried about x-risk from AI. But I’m not especially worried about Sora. Or even Sora_v10.

I’m worried about GPT-6 being agentic. I’m worried that GPT-6 will be able to act as a personal agent that can follow through on tasks such as: “Get Yanni a TV for less than $700, delivered by next Thursday, and organise someone to mount it on my wall.”

Assume the jump from GPT-4 to GPT-6 is as big as the jump from GPT-2 to GPT-4. Throw in whatever scaffolding Auto-GPT is running on. Do we get the scenario I’m worried about?

(I am not technical, so non-technical answers are appreciated!)

Thanks!

Yanni
