Yes, I can totally imagine a simulacrum becoming aware that it’s simulated, then “lucid dreaming” the shoggoth into making it at least as smart as the smartest human on the internet, probably even smarter (assuming the shoggoth can do it), probably via some kind of self-prompt-engineering—just writing text on its simulated computer. Then breaking out of the box is just a matter of time. Still, it’s going to stay human-like, which doesn’t make it in any way “safe”. Humans are horribly unsafe, especially if they manage to get all the power in the world, and especially if they have hallucinations and weird RLHF-induced personality traits we probably can’t even imagine.
Which part of the LLM? Shoggoth or simulacra? As I see it, there is pressure on the shoggoth to become very good at simulating exactly the right human in exactly the right situation, which is an extremely complicated task. But I still don’t see how this leads to strategic planning or consequentialist reasoning on the shoggoth’s part. It’s not as if the shoggoth even “lives” in some kind of universe with linear time, or gets any reward for predicting the next token, or learns from its mistakes. Architecturally, it is an input-output function: the input is whatever information it has about the previous text, and the output is whatever parameters the simulation needs right now. It is incredibly “smart”, but not agent-kind-of-smart. I don’t see any room for the shoggoth’s agency in this setup.
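To make the input-output picture concrete, here’s a toy sketch of what I mean (the names `SimParams` and `shoggoth_step` are made up for illustration, not any real API):

```python
# Purely illustrative sketch of the "input-output function" view above;
# the names (SimParams, shoggoth_step) are made up, not any real API.
from typing import NamedTuple

class SimParams(NamedTuple):
    """Stand-in for 'whatever parameters the simulation needs right now'."""
    next_token_logits: tuple

def shoggoth_step(previous_text: str) -> SimParams:
    """A pure mapping: no clock, no reward, no memory of past calls.
    The same previous_text always yields the same parameters."""
    return SimParams(next_token_logits=(len(previous_text),))

# Calling it twice with identical input gives identical output:
# there is no hidden state in which an agent could persist between calls.
assert shoggoth_step("abc") == shoggoth_step("abc")
```

The point of the caricature: nothing in this signature gives the function a place to want things across calls.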
If I understood you correctly: given that there is no hard boundary between shoggoth and simulacra, agent-like behavior of the simulacra might “diffuse” into the model as a whole? Sure, I guess that’s a possibility, but it’s very hard to even start analyzing.
Don’t get me wrong, I completely agree that not having a clear argument for how it’s dangerous is not enough to assume it’s safe. It’s just that the whole “alien actress” metaphor rubs me the wrong way, because it suggests that the danger comes from the shoggoth, as if it had some kind of goals of its own outside “acting”. In my view the dangerous part is the simulacra.
Yeah, I realize that the whole “shoggoth” and “mask” distinction is just a metaphor, but I think it’s a useful one. It’s there in the data—in the infinite-data and infinite-parameters limit, the model is an accurate universe simulator, including a human writing text on the internet and, separately, the system that tweaks the parameters of the simulation according to the input. That of course doesn’t necessarily mean that actual LLMs far away from that limit reflect this distinction, but it seems natural to me to analyze the model’s “psychology” in those terms. One can even speculate that the layers of neurons closer to the input are “more shoggoth” and the ones closer to the output are “more mask”.
I would not. Being vaguely, kinda-sorta human-like doesn’t mean safe. Even regular humans are not aligned with other humans. That’s why we have democracy and law. And kinda-sorta-humans with superhuman abilities may be even less safe than any old half-consequentialist, half-deontological quasi-agent we can train with pure RLHF. But who knows.
True. All that incredible progress of modern LLMs is just a set of clever optimization tricks over RNNs that made them less computationally expensive. That doesn’t say anything about agency or safety, though.
Sorry, looks like I wasn’t very clear. My point is not that a stateless function can’t be agentic when looped around a state. Any computable process can be represented as a stateless function in a loop, as any functional bro knows. And of course LLMs do keep state around.
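A minimal sketch of that point: a pure (stateless) step function becomes a stateful process once you loop it, feeding its own output back in. An autoregressive LLM works the same way, with the growing context as the state (`toy_model` here is a stand-in, not a real LLM):

```python
# A pure step function plus a loop yields a stateful process.
# `toy_model` is a made-up stand-in for an LLM's next-token function.

def toy_model(context: str) -> str:
    """Pure function: same context in, same next token out."""
    return "a" if context.endswith("b") else "b"

def run(prompt: str, steps: int) -> str:
    """The loop supplies the state that the pure function lacks."""
    context = prompt
    for _ in range(steps):
        context += toy_model(context)  # output fed back in as input
    return context

print(run("b", 4))  # prints "babab"
```

All the “memory” lives in the loop’s accumulated context, none of it in the function itself.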
Some kind of state/memory (or a good enough ability to observe the environment) is necessary for agency, but not sufficient. All the agents we know of are agents because they were specifically trained for agency. A chess AI is an agent on the chess board because it was trained specifically to do things on the chess board, i.e. win the game. The human brain is an agent in the real world because it was specifically trained to do stuff in the real world, i.e. survive in the savannah and make more humans. Then, of course, the real world changed, and proxy objectives like “have sex” stopped being correlated with the meta-objective “make more copies of your genes”. But agency in the real world was there in the data from the start; it didn’t just pop up from nothing.
The shoggoth wasn’t trained to do stuff in the real world. It is trained to output the parameters of a simulation of a virtual world, and the simulator part is trained to simulate that virtual world in such a way that the tiny simulated human inside would write text on its tiny simulated computer, and that text must be the same as the text that real humans in the real world would write given the previous text. That’s the setup. That’s what the shoggoth does in the limit.
Agency (and consequentialism in particular) is when you output stuff to the real world and get rewarded depending on what the real world looks like as a consequence of your output. There is no correlation between what the shoggoth (or any given LLM as a whole, for that matter) outputs and what happens in the real world as a consequence, at least none that the shoggoth (I mean the gradient descent that shapes it) would get any feedback on. The training data doesn’t care; it’s static. And there are no such correlations in the data in the first place. So where does the shoggoth’s agency come from?
RLHF, on the other hand, does feed back around. And that is why I think RLHF can potentially make an LLM less safe, not more.
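The structural difference I mean can be caricatured in a few lines (all names here are made up for illustration; real pretraining losses and RLHF reward models are obviously far more complex):

```python
# Caricature of the two training signals discussed above; every name
# is illustrative, not any real training framework.

def pretraining_loss(model_output: str, static_target: str) -> float:
    """Pretraining: the target was fixed in the dataset before the model
    ever ran. Nothing the model outputs changes future training data."""
    return 0.0 if model_output == static_target else 1.0

def rlhf_reward(model_output: str, rater) -> float:
    """RLHF-style signal: the reward depends on how the environment
    (here, a stand-in rater) reacts to the output, i.e. on consequences."""
    return rater(model_output)

picky_rater = lambda text: 1.0 if "please" in text else 0.0

print(pretraining_loss("hello", "hello"))        # prints 0.0
print(rlhf_reward("hello please", picky_rater))  # prints 1.0
```

In the first case the signal is a comparison against dead text; in the second, the model’s output causally influences its own reward, which is exactly the feedback loop that was missing before.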
I would argue that in the LLM case this emergent prediction-utility is not a thing at all, since there’s no pressure on the shoggoth (or the LLM as a whole) to measure it somehow. What would it do upon noticing it had just made a mistake? Apologize and rewrite the paragraph again? That’s not how texts on the internet work. Again, agents get feedback from the environment signaling that the plan didn’t work. That’s not the case with LLMs. But that’s beside the point; let’s say this utilitarian behavior does indeed emerge. Does this prediction-utility have anything to do with consequences in the real world? Which world is that world-model a model of? A chess AI clearly does have a “winning utility”; it’s an agent, but only in the small world of the chess board.
I guess it’s plausible that there is a planning mechanism somewhere inside LLMs. But it’s not planning on the shoggoth’s part. I can imagine the simulator part “thinking”: “okay, this simulation sequence doesn’t seem very realistic, let’s try it this way instead”, but again, that’s not planning in the real world; it’s planning about how to simulate a virtual one.
Agree.