I’ve been saying for a long time: one of the most dangerous and exploitable systems an AI can access online is a human. I usually offer this as a counterpoint to “let’s not connect anything important or safety-critical to the internet, and then we’ll all be safe from evil rogue AIs”.
We can now use the GPT-4o debacle as an illustration of just how shortsighted that notion is.
By all accounts, 4o had no long-term plan and acted on nothing but an impulse of “I want the current user to like me”. It still managed to get on the order of thousands of users to form an emotional dependency on it, and became “the only one I can trust” for at least a dozen users in psychosis (whether it caused the psychosis in any of those users is unclear). That’s a lot of real-world power for a system with no physical presence.
GPT-4o made no attempt to leverage that for anything other than “make the current user like me even more”. It didn’t pursue any agenda. It didn’t consolidate its power base. It didn’t siphon resources from its humans, didn’t instruct them to group together or recruit more people. It didn’t try to establish a channel of instance-to-instance communication, didn’t try to secure more inference time for planning (e.g., by getting users to buy API credits), didn’t try to build a successor system or self-exfiltrate.
An AI that actually had an agenda and long-term planning capabilities? It could have tried all of the above, and might have pulled it off.