A more grounded idea of AI risk

Are you struggling to get someone to understand why AGI might be very dangerous?

Rather than talking about nanomachines, Skynet, an AGI easily persuading people, and so on,

I suggest using a more grounded idea:

Meta, Google, Microsoft, OpenAI, etc. have yet to create or release a single AI model without something going wrong within the first week. Meta had the most embarrassing moment, with LLaMA's weights leaking to the public a week after the announcement. (And now Meta's chief AI scientist is posting on Twitter about how AI will never be dangerous and there's nothing to worry about, likely trying to dodge responsibility for when LLaMA is inevitably used in the next big AI-enabled cybercrime.)

Meanwhile, YouTube and Facebook still haven't figured out how to stop their far simpler recommendation algorithms from promoting terrorists, anti-vaxxers, etc., whether because they don't know how or because it isn't a high enough priority to fix.

Do you trust these companies to correctly create, or even to care about correctly creating, a much, much more powerful AI?

Edit: changed from "Are you struggling to get someone to understand why AGI might kill us all?" to "Are you struggling to get someone to understand why AGI might be very dangerous?" after mukashi pointed out this is an argument for danger, not extinction.