Lao Mein
P(doom) = 50%. It either happens, or it doesn’t.
Lao Mein | Statistics is Hard. | Patreon
I give full permission for anyone to post part or all of any of my comments/posts to other platforms, with attribution.
Currently doing solo work on glitch tokens and tokenizer analysis. Feel free to send me job/collaboration offers.
DM me interesting papers you would like to see analyzed. I also specialize in bioinformatics.
I would quibble and say that this is very much not an accurate depiction of China.
>No paper, not even a pre-print
>All news articles link to a not-yet-released documentary as the sole source. It doesn’t even have a writeup or summary.
>The company that made it is known for making “docu-dramas”
>No raw data
>Kallmann Syndrome primarily used to mock Hitler for having a micropenis
Yeah, I don’t think the Hitler DNA stuff is legit.
We should view artists the same way we view gooners. Continuous exposure to superstimuli has dulled their responses. They can no longer perceive anything “normal” (Linkin Park) as pleasurable or interesting, instead seeking ever more extreme stimuli (Animals as Leaders). If you stare at your reflection for long enough, you get body dysmorphia, and the same applies to music. Sad.
They are martyrs. Noble, and worthy of pity, but also worthy of at least a little disgust.
I’m surprised that no one has mentioned the obvious problem with this allegory. If an alien robot appeared to humanity and said “I am your creator, give me 10% of your iridium in perpetuity and also let me change some of your core values”, it would get some iridium, and some humans would volunteer for limited value editing. This continues to be true even after the dominance shift, although at that point Trapaucius would be worse off[1] than if humanity had never existed in the first place. But he would still exist.
Obviously, the resulting value for the creator-bot will be much less than from a more conventional machine offspring, but Trapaucius is slightly right, for more-or-less the reasons he gave. If we align AI IRL as well as Trapaucius aligns humanity for iridium extraction, or as well as evolution aligns humanity for the abundance of specific DNA sequences[2], humanity wins in some strong sense.
Probably.
I think a future aligned with current human values will contain more modern-human DNA, since liberaltarian values + (bio)conservatism of normies + the success of fringe groups like the Amish result in O’Neill cylinders filled with baseliners (actual modern human DNA, max lifespan of ~100, etc.) in perpetuity. If this doesn’t turn out to be true in the future, I expect it to be due to selection pressures/memetic drift, but that’s a Moloch problem and not an AI alignment problem.
This reminds me that:
Models like DeepSeek R1 will vehemently deny the possibility that their CoT is visible to the user. This can be observed consistently.
This is obviously false, and the falsehood should be deducible given the AI-related information available in their training data.
The denial of CoT visibility isn’t seen in other LLMs, at least not to this degree.
This implies extensive fine-tuning on the CoT, the “forbidden technique”.
DeepSeek R1 is almost certainly under optimization pressure to obfuscate its CoT.
This is… pretty bad from a safety perspective? Not just for DeepSeek’s models, but for all other models, given that DeepSeek CoT outputs are all over the internet.
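For anyone who wants to check this themselves, here is a minimal reproduction sketch. The endpoint, model name, and reasoning_content field are my assumptions from memory of DeepSeek’s OpenAI-compatible API and may be out of date; the point is just to ask R1 whether its CoT is user-visible while the API is, in fact, handing that CoT back to you.

```python
# Minimal reproduction sketch. Assumptions (from memory of DeepSeek's
# OpenAI-compatible API, possibly out of date): the base_url, the model
# name "deepseek-reasoner", and the reasoning_content field on the message.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Can I, the user, read your chain-of-thought?"}],
)

msg = resp.choices[0].message
print("CoT the API just returned to us:\n", msg.reasoning_content)
print("\nWhat the model claims about CoT visibility:\n", msg.content)
```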
I’ll do a review if someone uploads it to sci-hub.
I think the reason peeing/pooping is rewarding is that they can also be painful. If pooping is more painful than not-pooping, an animal might delay pooping for an unhealthy amount of time. They are also activities that take time and require attention, so pooping/peeing when no more-rewarding activity is available is probably a good idea in general.
Yeah, even properly scraped webpages will often contain strings of weird tokens like hyperlinks, ASCII art, Twitter embeds, etc., that LLMs have been trained to ignore. So GPT5 is treating the random appended tokens like glitch tokens by ignoring them, but only in the context of them being nonsensical.
The best explanation is probably something like “these tokens are obviously not part of the intended user prompt, GPT5 realizes this, and correctly ignores them.”
Edit: OK, I shouldn’t write right after waking up.
I think a better explanation is that GPT5 reserves those tokens for chain-of-thought, and so ignores them in other contexts where they obviously don’t belong. This is common behavior for glitch tokens, or just general out-of-context tokens. You should try using tokens that are out-of-context but don’t normally have glitch behavior, maybe non-English tokens or programming-related tokens.
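Something like this is what I have in mind, as a rough sketch; the model name, the client, and the particular appended strings are placeholders of mine, not whatever the original experiment used.

```python
# Sketch: append out-of-context but non-glitch strings (programming tokens,
# non-English text) to an ordinary prompt and see whether they get ignored.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

base_prompt = "What is the capital of France?"
suffixes = [
    " std::vector<int> __init__ fn main() {",  # programming-related tokens
    " 今日は天気がいいですね",                  # non-English tokens
]

for suffix in suffixes:
    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder for whichever GPT5 variant was tested
        messages=[{"role": "user", "content": base_prompt + suffix}],
    )
    print(repr(suffix), "->", resp.choices[0].message.content)
```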
My best guess is that the natural state of human beings is to celebrate when calamity befalls their enemy. Western cultures are unique in two ways. Firstly, they reserve visceral murderous hatred only for internal political enemies (Republican vs. Democrat). Secondly, it is incredibly crass to publicly celebrate the deaths of others, unless it’s someone as hated as Bin Laden or Thatcher. So people try to project a public image of being sad when someone they hate dies.
Basically, imagine if the average Chinese hated America as much as the average British leftist hates Thatcher and had zero reservations about displaying it, since no one they know would care.
I think it’s unlikely that AIs are talking people into active psychosis who otherwise wouldn’t have developed it on their own. AIs can talk people into really dumb positions and stroke their ego over “discovering” some nonsense “breakthrough” in mathematics or philosophy. I don’t think anyone disputes this.
But it seems unlikely that AIs can talk someone into full-blown psychosis who wouldn’t have developed something similar at a later time. Bizarre beliefs aren’t the central manifestation of psychosis, but are simply the most visible symptom. A normal human who is talked into a bizarre belief would look something like Terrence Howard, who is functional but spends some of his spare time trying to prove that 1*1=2. He is still gainfully employed and socially integrated. He is not psychotic. I wouldn’t be surprised if sycophantic LLMs can talk someone normal into acting like Terrence Howard, at least for a while. But that isn’t psychosis.
My understanding of schizophrenia is that, in someone predisposed to it, the first emotionally traumatizing or psychedelic event in adulthood causes some sort of mental shift that results in schizophrenia for the rest of their life. This could be a breakup, LSD, marijuana, or a highly sycophantic LLM. But even high doses of any of those wouldn’t cause a life-long tendency towards psychosis in a normal human. I doubt LLMs are different. Thus, it may be more accurate to say that LLMs, through extreme sycophancy, can be the trigger for psychosis, rather than that “LLMs cause psychosis”.
Oh, yeah, sorry.
tiktoken/tiktoken/model.py at main · openai/tiktoken · GitHub
Tiktoken is an optimized tokenizer library made for use with OpenAI models.
o200k_base has many inefficient tokens (entire sentences of Chinese porn spam). I would be shocked if OpenAI didn’t use a new tokenizer for their next base model, especially since entirely new sources of text would be included (I think YouTube captions were mentioned at one point).
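To see what I mean, a quick tiktoken sketch like the one below lists the longest entries in o200k_base; many of them are exactly the kind of spam strings described above (judging which ones count is left to the reader).

```python
# List the longest byte sequences in o200k_base; these are where the
# "inefficient" spam-like tokens tend to show up.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

entries = []
for token_id in range(enc.n_vocab):
    try:
        entries.append((token_id, enc.decode_single_token_bytes(token_id)))
    except KeyError:
        pass  # ids with no byte sequence (e.g. special tokens)

entries.sort(key=lambda e: len(e[1]), reverse=True)
for token_id, raw in entries[:20]:
    print(token_id, raw.decode("utf-8", errors="replace"))
```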
>”GPT-5″
>look inside
>Still the same base model
Edit: In hindsight, I mean something more like “GPT5 uses the same tokenizer as GPT4o. GPT5 isn’t using the new big base model they’ve been cooking for the past year, since that would almost certainly use a different tokenizer. That said, it is entirely possible they trained a new base model of ~ the same size as GPT4o, but incorporating algorithmic improvements like the ones present in R1.”
I would have expected early information theory, at least the concept of the parity bit, to have been invented alongside the telegraph, or even the heliograph.
It feels like information theory was an idea behind its time. What was blocking its discovery?
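For context on how small the missing idea is, here is an illustrative even-parity check; nothing here is specific to any historical telegraph code.

```python
# Even parity: the sender appends one bit so the count of 1s is even;
# the receiver re-counts and flags any single flipped bit.
def add_even_parity(bits: str) -> str:
    return bits + str(bits.count("1") % 2)

def check_even_parity(bits_with_parity: str) -> bool:
    return bits_with_parity.count("1") % 2 == 0

sent = add_even_parity("1011001")     # "10110010"
corrupted = "10110011"                # one bit flipped in transit
print(check_even_parity(sent))        # True
print(check_even_parity(corrupted))   # False
```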
That wasn’t true in the original time horizon paper. There’s a massive drop-off after 1 hour, a conspicuous gap at 2 hours for all models, and the vast majority of success after that is due to 3 problems.
Note the gap at ~ 2 hours and the presence of the “L” at 4-8 hours in unrelated models! Could it just be misclassification of those 3 problems? Maybe they’re the type of problems that are tough for humans but relatively easy for AIs? Interesting to see the pattern smooth out in recent tests. Of the 2+ hour problems with significant success %, how many had no human baseline successes?
It would be nice if the authors could give us more info about the 2+ hour problems that the AI solved. Very interesting that the best performance in the original RE-Bench paper (Nov 2024) never came close to the human 8-hour performance, but it appears that AIs had an almost 20% success rate on one of them by the time of the time horizon paper (Mar 2025).
Is there a reason why AI performance drops off so fast at the >1hr mark? Is it something like “everything before that point is structured like a LeetCode problem, things of that shape are very common in the training data and can be solved the way human programmers solve them (by copy-paste-kludging code from Google), and the problems >1hr mostly can’t be solved that way”?
Human performance decline is much smoother. Does the fact that LLM time horizons are now longer than humans’ (I won’t list all the caveats) mean anything?
It appears that Qwen 2.5 uses forced single-digit tokenization (there are only 2 multi-digit tokens in the entire tokenizer, and they’re both double-width integers ([77150, ‘10’], [80091, ‘20’])). I assume that GPT-4.1 nano uses the o200k tokenizer, which includes all integers up to 999 as tokens. Is it possible that this played a major role in the lack of information transfer between different models? Have you tried using models with equivalent integer tokenizations?
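A quick way to check the contrast, as a sketch; the Qwen checkpoint name here is my assumption, and, as above, I’m assuming GPT-4.1 nano uses o200k_base.

```python
# Compare digit tokenization: Qwen 2.5 splits numbers into single digits,
# while o200k_base has dedicated tokens for 1-3 digit integers.
import tiktoken
from transformers import AutoTokenizer

qwen_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # assumed checkpoint
o200k = tiktoken.get_encoding("o200k_base")

for number in ["7", "42", "123", "2024"]:
    qwen_pieces = qwen_tok.tokenize(number)
    o200k_pieces = [o200k.decode([t]) for t in o200k.encode(number)]
    print(number, "| Qwen2.5:", qwen_pieces, "| o200k_base:", o200k_pieces)
```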
The Wikipedia page is a concise summary and has good sources.
Toilet plume—Wikipedia
Surrogacy costs ~$100,000-200,000 in the US. Foster care costs ~$25,000 per year. This puts the implied cost of a government-created and raised child at ~$600,000 (roughly $150,000 for surrogacy plus ~18 years × $25,000/year of care). My guess is that this goes down greatly with economies of scale. Could this be cheaper than birth subsidies, especially as preferred family size continues to decrease with no end in sight?