A quick side note: in the 17 years since the post you cite was written, the historiography of connectionism has moved on, and we now know that modern backpropagation was invented as early as 1970 and first applied to neural nets in 1982 (technology transfer was much harder before web search!); see https://en.wikipedia.org/wiki/Backpropagation#Modern_backpropagation and the references therein
Petropolitan
I think it does, among other things, actually investigate cross-border crime, just on a small scale due to limited resources; check this: https://www.frontex.europa.eu/what-we-do/operations/operations
police force
Actually, since 2016 the EU has had a relatively small border police force called Frontex (~3,700 officers as of this writing, about 1/6 larger than the police of Luxembourg)! The European Commission president would like to increase it by an order of magnitude within a few years, but member states are not very enthusiastic
Why would anyone want to pay a fortune for a system that is expected to let ~40 warheads through (assuming a ~99% overall interception rate, which would itself require an average per-engagement success rate of 99.99+%), about the same as the number of ICBMs the Soviet Union had in service during the Cuban Missile Crisis? Unacceptable damage is the cornerstone of nuclear deterrence, MAD or not (there is no MAD between India and Pakistan, for example).
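The leakage arithmetic above can be sketched in a few lines (the ~4,000-warhead figure is my own illustrative assumption, roughly matching published deployed-arsenal estimates):

```python
# Expected leakers through a hypothetical missile-defense system.
# Assumptions (mine, for illustration): ~4,000 incoming warheads,
# 99% overall interception rate.
warheads = 4000
overall_intercept_rate = 0.99
leakers = warheads * (1 - overall_intercept_rate)
print(round(leakers))  # 40 warheads expected to get through
```

Even a 1% leak rate against a large arsenal means dozens of cities at risk, which is the whole point about unacceptable damage.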
The RV separation distance is normally around ~100 km (even up to 300 km in some cases), not 10 km, and decoy dispersal can be expected to be on the same order of magnitude. It would be easy to ramp this up further with a cheap modernization, by the way.
None of the US adversaries really practices counterforce targeting, so silo protection is moot.
lower EQ
I don’t think it’s relevant here: judging by the EQ-Bench leaderboard, GPT-5 is on par with GPT-4o and has a far higher EQ than any of the Anthropic models!
Even if it has some influence, it should matter much less than emoji usage (remember the scandal around Llama 4 on LMSys) and is certainly incomparable to the sycophancy
I like to imagine the whole GPT-5 launch from the perspective of a cigarette company.
OpenAI is Philip Morris over here. Realized they make a product that addicts and hurts people. Instead of feeding it, they cut it off. The addicts went insane and OpenAI unfortunately caved.
— u/ohwut at https://www.reddit.com/r/OpenAI/comments/1mlzo12/comment/n7uko9n
one or more warheads are blown up at limits of interceptor range
Not range but height. Blow up a warhead high enough that the drones can’t intercept it, and all the drones below fall out of the air
You seem to believe that radars and infrared cameras can somehow distinguish between the decoys and the warheads, but they can’t. In space, no radar and no IR camera can differentiate between a conical foil balloon with a small heater inside and a reentry vehicle with a nuke.
Another problem with ballistic missile defense is that once you are dealing with nukes rather than conventional warheads, you can’t afford, say, a 97% average interception rate; it must be 99.999+%[1]. To put this in context, Israel, which currently has the best BMD system in the world, couldn’t reliably achieve even 90% against Iranian MRBMs (and those are pretty unsophisticated, e.g. they lack MIRVs and decoys).
Now calculate how many interceptors your plan requires given a plausible probability of interception by a single drone, and you will see it’s entirely unworkable. Note that both arguments are based on simple physics and math, so they don’t depend on progress in technology at all.
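To put rough numbers on this: under an independence assumption (my simplification; real salvo shots are correlated, which only makes things worse), the number of drones needed per warhead to reach a target interception probability is:

```python
import math

# Sketch (my assumptions): a single drone kills with probability p;
# with n independent shots, P(intercept) = 1 - (1 - p)**n.
# Solve for the smallest n reaching the required per-warhead probability.
def drones_needed(p, p_target):
    return math.ceil(math.log(1 - p_target) / math.log(1 - p))

print(drones_needed(0.5, 0.99999))  # 17 drones per warhead at 50% single-shot kill
print(drones_needed(0.1, 0.99999))  # 110 drones per warhead at 10%
```

Multiply by thousands of warheads plus decoys, and the interceptor count explodes; correlated failure modes (weather, saturation) make the real requirement even larger.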
If you are interested in the topic, I strongly recommend reading about the Soviet response to SDI for the more expensive anti-ABM options that were considered but ultimately not pursued: https://russianforces.org/podvig/2013/03/did_star_wars_help_end_the_col.html
- ^
When this seemingly watertight probability is raised to the power of the Russian warhead count, it still leaves a ~4% chance (roughly 1e-5 times ~4,000) of at least one RV not being intercepted, and in reality hundreds of warheads will be harder to intercept than the average one you accounted for when calculating that probability. E.g., drones work poorly in bad weather, and it’s almost always bad weather above at least some American cities
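A quick check of this arithmetic (the ~4,000-warhead count is the same illustrative assumption as above):

```python
# Probability that at least one of n warheads leaks through, assuming
# an (optimistic) 99.999% interception probability per warhead.
p_intercept = 0.99999
n = 4000
p_at_least_one_leaker = 1 - p_intercept ** n
print(round(p_at_least_one_leaker, 3))  # 0.039, i.e. ~4%
```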
Normal timescales for building new mines using mature, well-established technologies are over a decade from exploration to feasibility, 1.8 years of construction planning and environmental permitting (unlike a robot factory, which you can build almost wherever you like, a mine has to be built on top of the actual minerals), and 2.6 years of construction and production ramp-up: https://www.statista.com/statistics/1297832/global-average-lead-times-for-mineral-resources-from-discovery-to-production These are the timescales on which commodity cycles boom and bust.
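Summing the stages quoted above (taking "over a decade" as a 10-year lower bound, my simplification):

```python
# Lower-bound lead time for a new mine, per the stage lengths cited above.
exploration_to_feasibility = 10.0  # years, "over a decade" lower bound
planning_and_permitting = 1.8      # years
construction_and_rampup = 2.6      # years
total = exploration_to_feasibility + planning_and_permitting + construction_and_rampup
print(round(total, 1))  # 14.4 years minimum from discovery to production
```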
I do not assume that “innovation may lose”; I actually do not consider new mining technologies in my reasoning at all, only the application of existing technologies to deposits formerly uneconomical to extract. My point is that all these processes are simply much slower than the timescales you are discussing.
And you seem to have missed my argument about capital: if interest rates skyrocket due to transformative AI (see, e.g., https://www.lesswrong.com/posts/k6rkFMM2x5gqJyfmJ/on-ai-and-interest-rates), how would you finance all these mines?
Downvoted the post (which I do very rarely) because it considers neither Amdahl’s Law nor the factors of production, which are Economics 101.
Fully automated robot factories can’t make robot factories out of thin air; they need energy and raw materials, which are considered secondary factors of production in economics. As soon as a large demand for them appears, their prices will skyrocket.
They are called secondary because they are obtained from the primary factors of production, which in classical economics consist of land, labor and capital. Sure, labor is cheap with robots, but land and capital will become very costly because they are complements. These primary and secondary factors will become bottlenecks, making the discussion of theoretical doubling rates moot.
Note that during most of the European 2nd millennium, including the times of Adam Smith and Karl Marx, labor was the most abundant and cheapest primary factor, so a reversal away from now-expensive labor would not be anything extraordinary.
P. S.
The following might not apply to the “post-AGI” world, but this post gives a hint of how hard automating manufacturing actually is: https://www.lesswrong.com/posts/r3NeiHAEWyToers4F/frontier-ai-models-still-fail-at-basic-physical-tasks-a
Does this count? A new mathematical result, specifically a proof of something a professional mathematician was intuitively sure of, but only a small part of an actual paper (literally just one proposition), and it also took some prompt engineering: https://x.com/bremen79/status/1927768299271496008
Poland and Estonia, ended up much better off than those that chose a gradual approach, like Bulgaria and Ukraine
The main reason the Ukrainian crash was so hard is the large share of advanced defense industry (which almost ceased to exist) in its 1990 GDP, as well as advanced civilian industries that relied on partners in other Soviet republics. Belarus, which had a similar economic structure but a smaller share of defense industry in particular, maintained economic ties to Russia, and implemented even fewer reforms even more gradually than Ukraine, weathered the 1990s better and might provide a counterexample to this thesis (even if its long-term end point is awful).
Also, both Bulgaria and Ukraine are further from the rich Western/Northern European markets. Poland in particular borders Germany, which provides all kinds of benefits. Even so, Polish and Bulgarian GDP per capita were similar at the moment of their EU accession (2004 and 2007 respectively). I do not rule out that Bulgaria had its own self-inflicted problems, but you would have to compare against Romania and Hungary to demonstrate that!
P. P. S.
In the month since writing the previous comment, I have read the following article by @Abhishaike Mahajan and believe it illustrates well why the non-tech world is so difficult for AI; I can recommend it: https://www.owlposting.com/p/what-happened-to-pathology-ai-companies
I guess it was usually not worth bothering to prosecute disobedience as long as it was rare. If ~50% of soldiers had refused to follow these orders, surely the Nazi repression machine would have set up a process to deal with them effectively and solved the problem
Continuing the analogy to the Manhattan Project: They succeeded in keeping it secret from Congress, but failed at keeping it secret from the USSR.
To develop this (quite apt, in my opinion) analogy: the reason this happened is simple. Some scientists and engineers wanted to do something so that no single country could dictate its will to everyone else. Whistleblowing project secrets to Congress couldn’t have solved this problem, but spying for a geopolitical opponent did exactly that
In my experience, this is a common kind of failure with LLMs: if asked directly how best to solve a problem, they do know the answer; but if they aren’t given that slight scaffolding, they totally fail to apply it.
The recent release of o3 and o4-mini seems to indicate that diminishing returns from scaling are pushing OpenAI toward innovating with scaffolding and tool use. As an example, they demonstrated o3 parsing an image of a maze with image-processing code and then finding the solution programmatically with graph search: https://openai.com/index/thinking-with-images
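For readers unfamiliar with the technique: once the maze image has been parsed into a grid, the graph-search step is textbook. A minimal sketch (my own toy example, not OpenAI's actual code), using BFS on a small hard-coded maze:

```python
from collections import deque

# Toy maze: '#' = wall, 'S' = start, 'E' = exit, ' ' = open cell.
MAZE = [
    "#########",
    "#S    # #",
    "# ### # #",
    "#   #   #",
    "### ### #",
    "#     #E#",
    "#########",
]

def solve(maze):
    """Return the shortest path length from S to E via breadth-first search."""
    grid = [list(row) for row in maze]
    start = next((r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == "S")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == "E":
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no path exists

print(solve(MAZE))  # 10 (shortest path length from S to E)
```

BFS on an unweighted grid is guaranteed to return a shortest path, so a model that can write and run a script like this doesn’t need to “see” its way through the maze at all.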
I believe it won’t be hard to give reasoning models the scaffolding you discuss and RL them to first think about which tools are most suitable, if any, before actually tackling the problem. Afterwards, any task that is easily solvable with a quick Python script usually won’t be a problem, unless there’s some kind of adversarialness or trickiness involved.
P. S.
And on the topic of reliability, I would recommend exploring PlatinumBench, a selection of hundreds of manually verified, reasonably easy problems on which SOTA LLMs still don’t achieve 100% accuracy. The number of mistakes correlates very well with a model’s actual performance on real-world tasks. I personally find the commonsense-reasoning benchmark Winograd WSC the most insightful; here’s an example of the puzzling mistakes SOTA LLMs (in this case Gemini 2.5 Pro) sometimes make on it:
**Step 6:** Determine what logically needs to be moved first given the spatial arrangement. If object A (potatoes) is below object B (flour), and you need to move things, object A must typically be moved first to get to object B or simply to clear the way.
Almost all machinists I’ve talked to have (completely valid) complaints about engineers that understand textbook formulas and CAD but don’t understand real world manufacturing constraints.
Telling a recent graduate to “forget what you were taught in college” might happen in many industries but seems especially common in the manufacturing sector, AFAIK
As Elon Musk likes to say, manufacturing efficiently is 10-100x more challenging than making a prototype. It involves proposing and evaluating multiple feasible approaches, designing effective workholding, selecting appropriate machines, and balancing complex trade-offs between cost, time, simplicity, and quality. This is the part of the job that’s actually challenging.
And setting up quality control!
Swedish inventor and vlogger Simone Giertz recently published the following video elaborating on this topic in a funny and enjoyable way:
Since this seems to be obscure knowledge in modern post-industrial societies[1], many forecasters have assumed that you could easily “multiply” robots designed by an AGI (which presumably overcomes the first three challenges in your list) using those same robots. I don’t believe that’s accurate!
- ^
Personal anecdote: I won a wager with a school friend who had gotten a job at an EV start-up after a decent career in IT and disagreed with me
I think regionalisms are better approached systematically, as there is a ton of scientific literature on this and even a Wikipedia article with an overview: https://en.wikipedia.org/wiki/American_English_regional_vocabulary (same for accents: https://en.wikipedia.org/wiki/North_American_English_regional_phonology, though that might require a fundamental study of English phonology)
To make this a little more substantial: web-browsing agents with some OSINT skills (and multimodal models already geolocate photos taken in Western urban areas comparably to human experts) offer prospects of automating, or at least significantly speeding up and making much cheaper, targeted attacks like spearphishing