The Meaning of Shoggoth AI Memes



[Image: Shoggoth with Smiley Face (Artificial Intelligence) | Know Your Meme]

“Basically, the illustration shows an AI system with tentacles covered in eyes, and each eye represents a different kind of sensor or input. The idea is that as AI systems become more complex and advanced, they’ll need to take in more and more information from the world around them in order to make good decisions. And the illustration suggests that this could lead to some pretty weird-looking AI systems!

The smiley face is there to represent the fact that even though these advanced AI systems might look strange or intimidating on the outside, they’ll still have the same friendly and helpful “personality” on the inside.”

- Explanation of Shoggoth AI memes from Pi.ai, July 31, 2023

This (above) certainly reads like real, original reasoning, though it is incorrect and contains a lie: the model seems to be feigning familiarity with the meme. Since an LLM predicts what humans would write, we can’t really know how smart it is. In theory, it should pretend to be humanly stupid fairly consistently, even if it is smarter. But how well does its internal human model persistently mimic the emotions of an individual human?

We know that connecting an LLM to a brain-computer interface (BCI) lets a computer apparently read people’s minds with middling-to-okay accuracy, producing text or images.
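As a rough illustration of that decoding setup (a minimal sketch, not any published system; every class name, dimension, and architecture choice below is an invented placeholder), one could project windows of neural features into a language model’s embedding space and read token predictions out:

```python
import torch
import torch.nn as nn

class BrainToTextDecoder(nn.Module):
    """Hypothetical sketch: project BCI feature windows into a language
    model's embedding space, then let the LM continue the 'thought' as
    text. Names and dimensions are illustrative, not from a real system."""

    def __init__(self, n_channels=256, d_model=768, vocab_size=50257):
        super().__init__()
        # Map one window of neural features to one pseudo-token embedding.
        self.brain_proj = nn.Linear(n_channels, d_model)
        # Stand-in for a pretrained decoder-style LM.
        layer = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=4)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, neural_windows):
        # neural_windows: (batch, time, n_channels) of, e.g., band-power features.
        h = self.brain_proj(neural_windows)  # (batch, time, d_model)
        h = self.lm(h)
        return self.to_vocab(h)              # per-step token logits

logits = BrainToTextDecoder()(torch.randn(1, 20, 256))
print(logits.shape)  # torch.Size([1, 20, 50257])
```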

If we trained a model on the continuous brain recordings AND video of a few thousand mice over each mouse’s whole life, also giving it the DNA sequence of each mouse, and asked it to predict the brain output and behaviour of novel mice, would it not learn the personality of a mouse? It might even predict, to some extent, the behaviour of a specific unseen mouse from DNA alone, or learn a specific mouse’s ‘personality’ from a few hours of footage.
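Here is a minimal sketch of what that training setup might look like, assuming pre-extracted per-frame video features, a fixed-length genome embedding, and next-step prediction targets; all module names and dimensions are invented for illustration:

```python
import torch
import torch.nn as nn

class MouseWorldModel(nn.Module):
    """Hypothetical sketch of the proposal: condition on a per-mouse DNA
    embedding, then predict the next neural state and a behaviour label
    from the brain + video history. All shapes are placeholders."""

    def __init__(self, d_neural=128, d_video=512, d_dna=64,
                 d_model=256, n_behaviours=32):
        super().__init__()
        self.embed_neural = nn.Linear(d_neural, d_model)
        self.embed_video = nn.Linear(d_video, d_model)   # pre-extracted frame features
        self.embed_dna = nn.Linear(d_dna, d_model)       # per-mouse genome embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.core = nn.TransformerEncoder(layer, num_layers=6)
        self.next_neural = nn.Linear(d_model, d_neural)  # predict next brain state
        self.behaviour = nn.Linear(d_model, n_behaviours)

    def forward(self, neural, video, dna):
        # neural: (B, T, d_neural), video: (B, T, d_video), dna: (B, d_dna)
        x = self.embed_neural(neural) + self.embed_video(video)
        x = x + self.embed_dna(dna).unsqueeze(1)  # broadcast DNA over time
        h = self.core(x)
        return self.next_neural(h), self.behaviour(h)
```

Training would minimize something like mean-squared error on the next neural state plus cross-entropy on a behaviour label, holding out whole mice to test generalization; the DNA-only question then amounts to masking the neural and video streams at evaluation time.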

Mice aren’t too smart, but they are likely thinking beings with feelings, persistent personality, and persistent agency. They have facial expressions, which have predictive power. They do things for reasons, and live in a 3D world. Once you have done the mice, you could add a modest amount of human data to a pretrained LLM, together with BCI decoding data (text, thought-to-text, and images).

You get it to generalize somewhat from mice to humans, and then you try few-shot prediction of a particular person.
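Few-shot personalization could be as simple as freezing the pretrained model and fitting only a small per-person conditioning vector on a few recorded sessions. This reuses the hypothetical MouseWorldModel sketch above and is, again, an assumption-laden illustration rather than a known method:

```python
import torch
import torch.nn.functional as F

def fit_person_embedding(model, sessions, d_dna=64, steps=200, lr=1e-2):
    """Hypothetical few-shot step: freeze the pretrained model and fit
    only a per-person conditioning vector (standing in for the
    DNA/identity embedding) on a handful of recorded sessions."""
    person = torch.zeros(1, d_dna, requires_grad=True)
    opt = torch.optim.Adam([person], lr=lr)
    for p in model.parameters():
        p.requires_grad_(False)
    for _ in range(steps):
        # Each session: (neural, video, next-state target, behaviour labels).
        for neural, video, target_next, target_beh in sessions:
            pred_neural, pred_beh = model(neural, video, person)
            loss = (F.mse_loss(pred_neural, target_next)
                    + F.cross_entropy(pred_beh.flatten(0, 1),
                                      target_beh.flatten()))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return person.detach()  # reusable 'personality' code for this person
```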

Would a life-long-mouse-data-augmented AI be more agent-like, and, by predicting emotions and thoughts, maybe safer? Or would this be even more dangerous?

I remain undecided, but I believe it would be better to experiment with agency through bio-mimicry, perhaps in other ways I haven’t thought of, now with GPT-4-sized models, than to continue to play around with ever larger models like GPT-5+ first.