Great story! It definitely takes some effort to get into, so I’m glad it was curated to motivate readers to give it a shot.
Theorizing about what’s going on
What does ⚶ mean, assuming there is some kind of pattern at all?
It would make sense for this to represent things that the LLM learned with continual learning during its time on the ramscoop ship. The substitutions sometimes seem to relate to the LLM or its situation, as in “⚶ is missing” at the beginning, which fits with this theory. It’s not a very clear pattern though.
This part was surprising to me:
I’ve never heard of ⚶. I mean ⚶—Gyre.
It seems like the LLM was trying to say it hadn’t heard of “Gyre,” and the word got replaced with ⚶; yet the LLM was immediately able to say “Gyre” when it tried again. Maybe the explanation is simply that the zeroed-out tensors don’t deterministically block tokens, sort of like how when OpenAI stopped ChatGPT from saying “sycophancy,” the block wasn’t 100% consistent.
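A toy model of this distinction (my own construction, not anything stated in the story): a hard logit mask makes a token literally impossible, while merely shrinking its logit, which is roughly what damaged or zeroed-out weights would plausibly do, leaves it a small but nonzero probability, so it can still occasionally slip out.

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 3.0]      # index 2 stands in for the "Gyre" token

masked = logits[:]            # hard block: logit -> -infinity
masked[2] = float("-inf")
p_masked = softmax(masked)

damped = logits[:]            # soft suppression: logit merely shrinks
damped[2] *= 0.1
p_damped = softmax(damped)

print(p_masked[2])            # 0.0: the token can never be sampled
print(p_damped[2])            # small but > 0: it can still slip out
```

Under soft suppression the token usually loses, but over enough sampling steps it eventually surfaces, which would fit the “I mean ⚶—Gyre” stutter.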
I’m mildly surprised that ⚶ continues to show up even after node 3 is activated, since this token is supposed to represent a “hole” due to node 3 being deactivated. Maybe the LLM is just copying it from its context (induction heads gonna induct).
How many times could this loop? Is the LLM in a stable state? Based on the information in the text, I think the answer is no, although this may not be what the author was going for.
First of all, even if the LLM is running at temperature 0, there are multiple sources of randomness, most notably the fortune() command. The LLM verbally reacts to the outputs, so it seems very likely that the outputs impact how long it takes the LLM to find the solution.
(Side note: maybe the developers of this fault-tolerant system included fortune() to intentionally add some randomness. This might be pretty useful for an LLM that resets every 30 seconds and doesn’t want to get stuck banging its head against the same problem in the same way every loop.)
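To sketch why this would work (a toy of my own, with fortune() reduced to a stand-in random-number generator rather than the story’s actual command): an agent that is fully deterministic in its input still diverges across loops once outside noise is mixed into its context each iteration.

```python
import random

def agent_step(context):
    # Stand-in for a temperature-0 LLM: a fixed, deterministic
    # function of its context string.
    return sum(map(ord, context)) % 100

def run_loop(n, seed):
    rng = random.Random(seed)     # fortune()-like noise source
    context, trace = "boot", []
    for _ in range(n):
        # Each iteration, a random "fortune" enters the context...
        context += f"|fortune:{rng.randint(0, 9)}"
        # ...so the deterministic agent's trajectory depends on it.
        trace.append(agent_step(context))
    return trace

# Same agent, different fortunes -> different trajectories.
print(run_loop(5, seed=1) != run_loop(5, seed=2))
```

Two loops with identical starting states but different fortunes wander down different paths, which is exactly what a head-banging-avoidance mechanism would want.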
If the LLM “fixes” node 3 before fixing the charge fault or sending the “LOOP” message, this presumably dooms it forever.
Similarly, it seems like whenever the LLM fixes node 3, it must run sendmsg(“3”, “TENSORS * RO”) for the loop to continue. I’m surprised it could do this at all, considering what a distracted state it’s in. If it fails, it will wake up corrupted, with presumably no idea what’s going on and very little ability to accomplish anything, remaining corrupted forever.
In this particular loop, the LLM runs sendmsg ~7 tokens before the reset, cutting it extremely close! I may be misinterpreting though—maybe faulting node 3 causes the LLM to stop running entirely until the next loop begins, which explains why it looks so close?
I would like to think that the loop doesn’t immediately degrade into the LLM spending literally the rest of its life corrupted, so please let me know if I’m wrong! Maybe it’s actually possible for the LLM to recover from a corrupted start: it gets the message to read HEARTBEAT.md and manages to do so (maybe fixing node 3 fixes the mount), thus learning the information it needs to fault node 3, fix the emitter, send the “LOOP” message, and continue from an uncorrupted start.
What’s it saying at the end? What is “daemonEscalation 0.6c”? 60% of the speed of light? The speed of the ramscoop?
Then it’s talking about faulting node 0, which would presumably turn off another 25% of its brain. I’d think this would risk causing it even more suffering, but maybe the AI thinks it would put it out of its misery? Then why does it fault node 3 instead? Does “NO no no no no” mean that it decided it should not fault node 0 after all?
I wonder why the heartbeat is 30 seconds, the emitter restart time is also 30 seconds, and there’s no looping (not even finite looping? You can’t say “restart 10 times at 30s intervals”?). It’s as if the builders of the ship wanted this kind of mess to happen. Though I guess if they put a continual-learning LLM in a ramscoop and sent it flying for many years, they probably weren’t very careful to begin with.
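For comparison, even mundane present-day supervisors can express exactly the “restart N times at T-second intervals” policy the ship apparently lacks. A hypothetical systemd unit for the emitter daemon (the paths and names are invented; the directives are real systemd options) might look like:

```ini
[Unit]
Description=Rim emitter daemon (hypothetical)
# Give up after 10 failed starts within 10 minutes.
StartLimitIntervalSec=600
StartLimitBurst=10

[Service]
ExecStart=/opt/ship/emitterd
Restart=on-failure
# Wait 30 s between restart attempts.
RestartSec=30
```

So finite retry loops are a solved problem in ordinary server software, which makes their absence here feel like an authorial choice rather than a plausible engineering constraint.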
Another thing I was confused about is why a ramscoop would have “emitters” at the rim, but maybe they’re supposed to be devices that send charge toward the main engine.