How is any of that wrong, or related to the question of AI being aligned? Do doomers seriously think you can indefinitely stop automation? It’s been happening for centuries.
They’re ignoring alignment, but so are most labs. I still don’t get how this reaction is anything but irrational. If it were worded as “AI will inevitably become smarter,” no one here would care.
The reaction to Mechanize seems pretty deranged. As far as I can tell, they don’t deny or hasten existential risk any more than other labs; they just don’t sugarcoat it. It’s quite obvious that the economic value of AI lies in labor automation, and that the only way to stop this is to stop AI progress itself. The forces of capitalism are quite strong: labor unions in the US tried to slow automation, and (among other reasons) it just moved to China as a result. There is a reason Yudkowsky always implies measures like GPU bans.
It just seems like they hit a nerve, since apparently a lot of doomerism is fueled by insecurity about job replacement.
In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
I mean, if we’re being completely candid here, there is almost no chance the first group wins this race, right?
“the world had more centralization, such that the Industrial Revolution never started in an uncontrolled way”
What motive does a centralized dominant power have to allow any progress? The entire world would likely look more like North Korea.
War is not the only potential response. I don’t know why it’s being framed as the normal one when a normal treaty would have something like sanctions as the enforcement mechanism.
Keep in mind that propagandizing it is also an easy way to create political polarization.
How has nuclear non-proliferation been a success?
Short of something that would stop us from even pondering this question, we’ve gotten dangerously close to nuclear exchanges multiple times, and several rogue states have nukes or use their proximity to one as a bargaining chip.
AI 2027’s timelines got more pushback than was warranted. The superhuman-coder milestone at least vaguely seems on track: most code at the frontier of usage (e.g., gpt-5-codex) is generated by AI agents.
There is more to coding than just writing the code itself, but the AI 2027 website only has AI coding at the level of human professionals by Dec 2025. It seems like we’re well on the way to that.
AI progress can be rapid, but the pathway to it may involve different capability unlocks. For example, you might automate work more broadly and then reinvest that into more compute (or automate chipmaking itself). Or you can get the same unlocks without rapid progress: for example, you get a superhuman coder but run into different bottlenecks.
I think it’s pretty obvious AI progress won’t completely stall out, so I don’t think that’s the prediction you’re making? It’s one thing to say AI progress won’t be rapid and then give a specific story as to why. Later, if you hit most of your marks, it’ll look like a much more valuable prediction than simply saying it won’t be rapid. (The same applies to AI 2027.)
The authors of AI 2027 wrote a pretty specific story before the release of ChatGPT and looked really prescient after the fact, since it turned out to be mostly accurate.
I just don’t think there is much to this prediction.
It takes a set of specific predictions, says none of them will happen, and, by the nature of conjunctive predictions, most will not happen. It would be more interesting to hear how AI will and will not progress rather than simply denying an already unlikely-to-be-perfect prediction.
Inevitably they’ll be wrong on some of these, but they’ll look more right on a surface level because they’ll be right on most of them.
It seems like basically everything in this is already true today. Not sure what you’re predicting here.
The author also seems not to realize that OpenAI’s costs are mostly unrelated to its inference costs?
I think the extra effort required to go from algorithmically correct to holistically qualifying scales linearly with task difficulty. Dense reward-model scaling on hard-to-verify tasks seems to have cracked this. DeepMind’s polished, holistically passing IMO solutions probably required the same order of magnitude of compute/effort as OpenAI’s technically correct but less polished IMO solutions. (They used similar levels of models, compute, and time to get their respective results.)
So while this will shift timelines, it is something that will fall to scale and thus shouldn’t shift them too much.
I predict this will go away once these methods make their way into commercial models, which I’d put at roughly one year out. I’ll check back in 2026 to see if I’m wrong.
I think AI doomers as a whole lose some amount of credibility if timelines end up being longer than they project. Even if doomers technically hedge a lot, the most attention-grabbing part to outsiders is the short timelines plus the intentionally alarmist narrative, so they’re ultimately associated with them.
It seems Gemini was ahead of OpenAI on the IMO gold. Its output was more polished, so presumably it achieved a gold-worthy model earlier. I thus expect Gemini’s SWE-bench score to be at least ahead of OpenAI’s 75%.
AFAICT this case has been generally good for the industry but especially bad for Anthropic.
Edit: overall a win; you can use books in training, you just can’t use pirated books.
Progress-wise this seems accurate, but the usefulness gap is probably larger than the one it paints.
The right has always been vaguely antisemitic. What’s new is that the left is now also vaguely antisemitic, making it more normalized overall.
Yes, I think protectionist viewpoints are very naive. The Industrial Revolution flipped the game board for which countries stood at the top. The most economically powerful country back then, Qing-dynasty China, took many of these protectionist measures, and what actually happened was that tiny backwater nations dominated it decades to centuries later. AGI compresses this to months or years.
Seems like a highly speculative post built on complete guesses, with little evidence or outright faulty claims.