No problem dynomight. The history of technology as an academic subject has exploded in size and depth since the mid-20th century; as a rule of thumb, I'd say recency matters more for it than for many other fields of history. And yeah, I think “few explosives” would be perfectly accurate.
Lost Futures
Gunpowder was invented many times in history. By the Chinese, yes, but also by the Greeks, the Hindus, the Arabs, the English and the Germans. Humanity had no other explosives until 1847
I may be mistaken, but I don’t think these claims are supported by historical scholarship. The Byzantines did develop Greek fire but it likely contained petroleum rather than gunpowder.
I’m also not aware of any modern historians suggesting that Arabs or Indians independently invented gunpowder. Earlier European writers sometimes associated gunpowder with the broader “East,” which seems to have led to misunderstandings about its origins. In a similar vein, medieval Europeans attributed “Arabic numerals” to the Arab world even though they originated in India.
As for the Western Europeans, the historical consensus is that gunpowder was a diffused technology from China (perhaps via the Mongols, especially given the timing of its appearance in the West), though I have heard occasional (and fringe) claims of independent development.
Finally, mercury fulminate is another explosive besides gunpowder that predates nitroglycerin.
The Devin mishap is a reminder of how tricky it often is for the general public to gauge what’s currently possible for AI and what isn’t. A lot of people, myself included, assumed the claimed performance was legitimate. No doubt many AI startups like Devin’s makers are waiting for the rising tide of improving foundation models to make their ideas feasible. I wonder how many are engaging in similar deceptive marketing tactics or will do so in the future.
How many horses were there?
Well over a million in England by 1850. However, they were used primarily for agriculture and, later, transport, not industry. As such, they played at most a supporting role in industrialization. Also, my original question, “Why England?”, still stands, given that the Dutch Golden Age had similar conditions.
Also, development of those 3 technologies wasn’t limited by available power.
No, but they were limited by technological advancement and production getting cheaper, which by the mid-1800s were very much tied to steam power. They were also limited by the availability of capital for development, capital that would be much harder to come by with less energy to begin with. And of course, the steam turbine was developed directly from the steam engine.
All three of the technologies you’ve listed were not ready for broad practical use until well over 150 years after Newcomen’s steam engine. By that time, steam power had long since dethroned wind and water as the primary source of energy for industrial production.
https://histecon.fas.harvard.edu/energyhistory/data/Warde_Energy%20Consumption%20England.pdf
By the mid-1800s, steam was producing as much power for England and Wales as all other sources of fixed motive power combined. That’s not even mentioning the world-changing impact of inventions such as the train and the steamship. Now consider a world without this technology. What leads you to believe that a practical ICE, large steam turbine, and/or hydroelectric power would develop even remotely on schedule in a world with no trains, far lower steel production, and half the motive power? The steam engine’s impact on early industrialization is often overstated, but its impact by 1850 can hardly be exaggerated. It was the diffusion and improvement of the steam engine that bridged the economic gap between the first and second industrial revolutions.
One popular conception of the Industrial Revolution is that steam engines were invented, and then an increase in available power led to economic growth.
This doesn’t make sense, because water power and horses were much more significant than steam power until well after technological development and economic growth became fast.
While it is true that the first industrial revolution was largely propelled by water, wind, and horsepower rather than the steam engine, the steam engine was instrumental in continuing that momentum into the latter half of the 19th century. The Dutch Golden Age is sometimes characterized as a kind of proto-industrial revolution and likely saw the highest productivity in history prior to the 1800s. (The Dutch by this time also ticked most of the boxes you listed as causes of industrialization.) This economic revolution, like early British industrialization, relied on wind and water power (along with peat) but eventually hit a wall. Without the steam engine, once the rivers are dammed, the countryside is dotted with windmills, and the easily accessible biomass is depleted, energy availability becomes a major constraint for further growth.
Had a practical steam engine somehow failed to materialize during early industrialization, the first industrial revolution may very well have gone down in the annals of history as just another lost “golden age,” like so many economic efflorescences before it: a period of high mechanization, like the Dutch Golden Age, that generated new technologies and vast wealth for a short period before sputtering out.
I’m skeptical that spelling reform moved the needle much. I’m admittedly not super familiar with the subject, but the notion that vast swathes of information were lost due to phonetic spelling seems unlikely to me. Intellectuals could always fall back on Latin as a lingua franca until shortly before industrialization. Striking that entry from your industrialization checklist, the obvious next question becomes “Why Britain?”, as many other European states met the other requirements, yet not only failed to industrialize before Britain but even struggled to follow Britain’s progression. Industrialization in the Netherlands would not take off in earnest until nearly a century after the first textile factories opened on the other side of the North Sea.
Interested in any of the roles. I haven’t played chess competitively in close to a decade, and my USCF rating was in the 1500s when I stopped. So long as I’m given a heads-up in advance, I’m free almost all day on Wednesdays, Fridays, and Sundays.
This line leaves me wondering about human isolation on our little planet and what maladaptations humanity is stuck with because we lack neighbors to learn from.
Failing to adopt cheap and plentiful nuclear power comes to mind as a potential example.
I largely agree with the sentiment of your post. However, one nitpick:
The world’s largest protest-riot ever, when measured by estimated damage to property.
This claim is questionable. The consensus is that the economic cost of the George Floyd protests was between one and several billion dollars. Perhaps it was the most expensive riot in US history (though, adjusted for inflation, the LA riots may give it a run for its money) and the most cleanly accounted for economically, but intuitively I would imagine many of the most violent riots in history, such as the partition riots in India and Pakistan, caused more economic damage.
Sam’s comments a few months ago would also make sense given this context:
https://www.lesswrong.com/posts/ndzqjR8z8X99TEa4E/?commentId=XNucY4a3wuynPPywb
further progress will not come from making models bigger. “I think we’re at the end of the era where it’s going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We’ll make them better in other ways.” [...] Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them. [...] At MIT last week, Altman confirmed that his company is not currently developing GPT-5. “An earlier version of the letter claimed OpenAI is training GPT-5 right now,” he said. “We are not, and won’t for some time.”
This new rumor about GPT-4’s architecture is just that and should be taken with a massive grain of salt...
That said, it would explain OpenAI’s recent comments about the difficulty of training a model better than GPT-3. IIRC, OA spent a full year unable to substantially improve on GPT-3. Perhaps the scaling laws don’t hold? Or they ran out of usable data? And thus this new architecture was deployed as a workaround. If true, this supports my suspicion that AI progress is slowing and that a lot of the low-hanging fruit has been picked.
Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
This seems to hint at a cool-down in scaling compute, and Sam previously acknowledged that the data bottleneck was a real roadblock.
Yep, just as developing countries don’t bother with landlines, so too will companies, as they overcome inertia and embrace AI, choose to skip older, outdated models and jump to the frontier, wherever that may lie. No company embracing LLMs in 2024 is gonna start by integrating GPT-2, then 3, then 4 in an orderly and gradual manner.
Pretty sure that’s just an inside joke about Lex being a robot that stems from his somewhat stiff personality and unwillingness to take a strong stance on most topics.
You’re likely correct, but I’m not sure that’s relevant. For one, Chinchilla wasn’t announced until 2022, nearly two years after the release of GPT-3. So the slowdown is still apparent even if we assume OpenAI was nearly done training an undertrained GPT-4 (which I have seen no evidence of).
Moreover, the focus on efficiency is itself evidence of an approaching wall. Taking an example from the 20th century, machines got much more energy efficient after the 1970s, which is also when energy stopped getting cheaper. Why didn’t OpenAI pivot to fine-tuning and efficiency after the release of GPT-2? Because GPT-2 was cheap to train and relied on a tiny fraction of all available data, which made those concerns less pressing. Efficiency is typically a reaction to scarcity.
AFAIK, no information regarding this has been publicly released. If my assumption that Bing’s AI is somehow worse than GPT-4 is correct, then I suspect some combination of three possible explanations is at play:
1. To save on inference costs, Bing’s AI uses less compute.
2. Bing’s AI simply isn’t well trained at searching the web and thus isn’t using the tool as effectively as it could with better training.
3. Bing’s AI is trained to be sparing with searches to save on search costs. For multi-part questions, Bing seems too conservative about searching; a willingness to make more queries would probably improve its answers, but at a higher cost to Microsoft.
I’m also quite sympathetic to the idea that another AI winter is plausible, mostly based on compute and data limits. One trivial but frequently overlooked data point is that GPT-4 was released nearly three years after GPT-3. In contrast, GPT-3 was released around a year after GPT-2, which in turn was released less than a year after GPT-1. Despite hype around AI being larger than ever, there has already been a progress slowdown relative to 2017–2020.
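Those widening gaps are easy to sanity-check from the announcement dates. A minimal sketch, assuming the approximate dates below (the exact days are my recollection and may be slightly off):

```python
from datetime import date

# Approximate announcement dates; the exact days are assumptions on my part.
releases = {
    "GPT-1": date(2018, 6, 11),
    "GPT-2": date(2019, 2, 14),
    "GPT-3": date(2020, 5, 28),
    "GPT-4": date(2023, 3, 14),
}

names = list(releases)
for prev, nxt in zip(names, names[1:]):
    # Subtracting dates yields a timedelta; convert days to rough months.
    months = (releases[nxt] - releases[prev]).days / 30.44
    print(f"{prev} -> {nxt}: ~{months:.0f} months")
```

Under these dates, the gap between generations roughly doubles each time, which is the slowdown I'm pointing at.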
That said, a big unknown is to what extent specialized hardware dedicated to AI can outperform Moore’s Law. Jensen Huang sure thinks it can:
So obviously, computing has advanced tremendously and the way that’s happened, of course, is a complete reinvention of how computers write software, the computer architecture of it, and the computer runs software. Every single layer from the chip to the system to the interconnect to the algorithms, all completely redesigned and so this way of doing full-stack computing as you projected out ten years, there’s no question in my mind, large language models and these very large language models will have an opportunity to improve by another factor of a million. It just it has to be full stack.
That said, the economy is absorbing AI much more slowly than AI is progressing, and even if frontier progress halts tomorrow, investment may still be buoyed by the diffusion of current models. It’s hard to argue that current models aren’t powerful enough to have economic value, or that they won’t get less expensive as time goes on, regardless of how the frontier moves.
Does GPT-4 seem better than Bing’s AI (which also uses some form of GPT-4) to anyone else? This is hard to quantify, but I notice Bing misunderstanding complicated prompts or making mistakes in ways GPT-4 seems better at avoiding.
The search requests it makes are sometimes too simple for an in-depth question and because of this, its answers miss the crux of what I’m asking. Am I off base or has anyone else noticed this?
Many premodern societies actually spent a lot of time and effort pursuing anti-aging technology. Perhaps not to the extent of organizing their whole society around it, but their efforts were not trivial in scale. In medieval and early modern Europe, it was a primary goal of alchemy alongside transmuting base metals into gold. For Christians, the Bible already hinted at the possibility of radical life extension (biblical figures such as Methuselah were said to have lived for hundreds of years), and prominent intellectuals like Roger Bacon believed that human lifespans had been artificially shortened. Searching for a means of reversing this “corruption” to extend human lifespans was a mainstream, even clichéd, intellectual pursuit for centuries. It only became fringe with the rise of modernity.