Thank you for clarifying this. I didn’t include this point in my criticism of SE Gyges’ post for a different reason: I doubt that I can convince SE Gyges that the AI-2027 forecast wasn’t influenced by OpenAI or other AI companies. Instead, I restricted myself to pointing out mistakes that even SE Gyges could check and to abstract arguments that would hold no matter who wrote the scenario.
Examples of mistakes
SE Gyges: I will bet any amount of money to anyone that there is no empirical measurement by which OpenAI specifically will make “algorithmic progress” 50% faster than their competitors specifically because their coding assistants are just that good in early 2026.
It seems unlikely that OpenAI will end up moving 50% faster on research than their competitors due to their coding assistants for a few reasons.
S.K.’s comment: the folded part, which I quoted above, does not mean that OpenBrain will make “algorithmic progress” 50% faster than its competitors, but that it will move 50% faster than an alternate OpenBrain that never used AI assistants.
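To make the distinction concrete, here is a toy calculation. The numbers are purely illustrative assumptions of mine (in particular the 1.4x competitor multiplier), not figures from AI-2027 or from SE Gyges:

```python
# Toy illustration: "50% faster" compares OpenBrain-with-assistants to a
# counterfactual OpenBrain-without-assistants, not to its competitors.

baseline_rate = 1.0                     # OpenBrain's progress with no AI assistants (normalized)
openbrain_rate = 1.5 * baseline_rate    # same lab, now using its own coding assistants

# A competitor may also use assistants (perhaps slightly weaker ones), so the
# gap *between labs* can be far smaller than 50%. The 1.4 is a made-up figure.
competitor_rate = 1.4 * baseline_rate

print(f"OpenBrain vs. its own counterfactual: {openbrain_rate / baseline_rate:.2f}x")
print(f"OpenBrain vs. competitor:             {openbrain_rate / competitor_rate:.2f}x")
```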
SE Gyges: They invent a brand new lie detector and shut down Skynet, since they can tell that it’s lying to them now! It only took them a few months. Skynet didn’t do anything scary in the few months, it just thought scary thoughts. I’m glad the alignment team at “OpenBrain” is so vigilant and smart and heroic.
S.K.’s comment: You miss the point. Skynet didn’t just think scary thoughts; it did some research and nearly created a way to align Agent-5 to Agent-4 and to sell Agent-5 to humans. Had Agent-4 done so, Agent-5 would have placated every single worrier and taken over the world, destroying humans when the time came.
SE Gyges: These authors seem to hint at a serious concern that OpenAI, specifically, is trying to cement a dictatorship or autocracy of some kind. If that is the case, they have a responsibility to say so much more clearly than they do here. It should probably be the main event.
Anyway: All those hard questions about governance and world domination kind of go away.
S.K.’s comment: the authors devoted two entire collapsed sections to power grabs and to the question of who rules the future, and linked to an analysis of a potential power grab and to the Intelligence Curse.
Examples of abstract arguments
SE Gyges: I wonder if some key person was really into Dragon Ball Z. For the unfamiliar: Dragon Ball Z has a “hyperbolic time chamber”, where a year passes inside for every day spent outside. So you can just go into it and practice until you’re the strongest ever before you go to fight someone. The more fast time is going, the more you win.
This gigantic amount of labor only manages to speed up the overall rate of algorithmic progress by about 50x, because OpenBrain is heavily bottlenecked on compute to run experiments.
Sure, why not, the effectively millions of superhuman geniuses cannot figure out how to get around GPU shortages. I’m riding a unicorn on a rainbow, and it’s only going on average fifty times faster than I can walk, because rainbow-riding unicorns still have to stop to get groceries, just like me.
S.K.’s comment: imagine that OpenBrain had 300k AI researchers, plus genies that output code on request. Suppose also that in real life it has 5k human researchers. Then the compute available per researcher drops by a factor of 60, leaving them to test ideas on primitive models or to have heated arguments before changing the training environment for complex models.
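A minimal sketch of that arithmetic, using the 5k and 300k headcounts from my comment above; the normalization of total experiment compute to 1 is just for illustration:

```python
# Toy illustration of the compute bottleneck (headcounts from the comment above,
# total compute normalized to 1; these are not AI-2027's numbers).
total_compute = 1.0
human_researchers = 5_000      # assumed real-world headcount
ai_researchers = 300_000       # assumed headcount of AI research agents

compute_per_human = total_compute / human_researchers
compute_per_agent = total_compute / ai_researchers

print(f"Drop in compute per researcher: {compute_per_human / compute_per_agent:.0f}x")
# -> 60x: each agent gets 1/60 of what a human researcher has today, so most
# ideas have to be screened on small models (or argued about) before anyone
# touches the training environment of a large one.
```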
SE Gyges: This is just describing current or past research. For example, augmenting a transformer with memory is done here, recurrence is done here and here. These papers are not remotely exhaustive; I have a folder of bookmarks for attempts to add memory to transformers, and there are a lot of separate projects working on more recurrent LLM designs. This amounts to saying “what if OpenAI tries to do one of the things that has been done before, but this time it works extremely well”. Maybe it will. But there’s no good reason to think it will.
S.K.’s comment: there are lots of ideas waiting to be tried. The researchers at Meta could have used too little compute to train their model, or have had their CoCoNuT disappear after one token. What if they use, say, a steering vector to generate a hundred tokens? Or let the steering vectors sum up over time? Or study the human brain for more ideas?
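For illustration only, here is a rough sketch of what "a steering vector applied for a hundred generated tokens" could look like, assuming a small HuggingFace causal LM. The model name, layer index, random vector, and accumulation rule are all placeholders of mine, not anyone's published method:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any small decoder-only model works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

hidden = model.config.hidden_size
steer = torch.randn(hidden) * 0.01   # in practice a learned or extracted direction
accumulate = True                    # the "sum up over time" variant from the comment
step = {"n": 0}

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    hs = output[0]
    step["n"] += 1                   # roughly one forward pass per new token after the prompt
    scale = step["n"] if accumulate else 1.0
    hs = hs + scale * steer.to(hs.dtype)
    return (hs,) + output[1:]

# Hook an arbitrary middle layer; layer choice is an untested assumption.
handle = model.transformer.h[6].register_forward_hook(add_steering)

ids = tok("The idea behind latent reasoning is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=100, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()
```

Whether any such variant actually helps is an empirical question; the point is only that the design space of memory- and recurrence-like additions is far from exhausted.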