In the future, there will be millions, and then billions, and then trillions of broadly superhuman AIs thinking and acting at 100x human speed (or faster). If all goes well, what might it feel like to live in the world as it undergoes this transformation?
Analogy: Imagine being a typical person living in England from 1520 to 2020 (500 years) but experiencing time 100x slower than everyone else, so to you it feels like only five years have passed:
Year 1 (1520–1620). A year of political turmoil. In February, Henry VIII breaks with Rome. By March, the monasteries are dissolved. In May, Mary burns Protestants; by the end of May, Elizabeth reverses everything again. Three religions of state in the span of a season. In September, the Spanish Armada sails and fails. Jamestown is founded around November. The East India Company is chartered. But the texture of life is identical in December to what it was in January. You still read by candlelight, travel by horse, communicate by letter. Your religious opinions may have flip-flopped a bit but you are still Christian. The New World is interesting news but nothing more.
Year 2 (1620–1720). In March, civil war breaks out. By April, the king is beheaded — a man who ruled by divine right, executed by his own Parliament! In June, the Great Plague sweeps London, killing a quarter of its population. Weeks later, the Great Fire burns it to the ground. In September, Newton publishes the Principia, recasting the universe as a mechanism of mathematical laws. The Glorious Revolution replaces one king with another, this time by Parliament’s invitation, with a Bill of Rights attached. In the moment, the political event feels bigger. Later you’ll realize Newton mattered more. Newcomen builds a steam engine in November. It pumps water out of mines. You don’t see what the hype is about.
Year 3 (1720–1820). The last year in which you will feel at home in the world. In May, the Seven Years’ War makes Britain the dominant global power; the New World is actually most of the world, and your country is conquering it. In June, Watt dramatically improves the steam engine. You visit a factory and find it unpleasant but not alarming. In July, the American colonies break away. In September, France explodes — revolution, regicide, the Terror. By October, Napoleon has seized control and is conquering Europe. It ends at Waterloo in December. You enter year 4 rattled but intact. You still travel by horse, communicate by letter, go to Church on Sunday.
Year 4 (1820–1920). The world breaks. In January, railways appear — steam-powered carriages on iron tracks. By February they’re everywhere. Slavery is abolished. The telegraph arrives in March: messages transmitted instantaneously by electrical signal. In May, Darwin publishes On the Origin of Species. Now people are saying maybe we’re all descended from monkeys instead of Adam and Eve. You don’t believe it.
You move to a city and work in a factory; you are still poor, but now your job is somewhat better and differently dirty. In July, you pick up a telephone and hear a human voice from another city through a wire. In August, electric light banishes the darkness that has structured every human evening since the beginning of the species. That same month, you see an automobile. People say it will make horses obsolete, but that doesn’t happen; months later you still see plenty of horses.
In November, the Wright Brothers fly. Up until now you thought that was impossible. The next month, the Great War happens. Machine guns, poison gas, tanks, aircraft. Several of your friends die.
Reflecting at the end of the year, you are struck by how visibly different everything is. You live in a city and work in a factory instead of on a farm. You ride around in horseless carriages. You aren’t as poor; numerous inventions and contraptions have improved your quality of life. New ideas have swept your social circles — atheism, communism, universal suffrage. It feels like a different world.
Year 5 (1920–2020).
The changes this year are crazier and harder to understand. People are saying the universe is billions of years old, and apparently there are things called galaxies in it that are very big and very far away. You still go to church, sometimes, but you don’t really believe anymore.
In February, the global economy collapses. Hitler rises; his ideology cites Darwin from last year. In March, the war starts again, worse in every dimension — cities bombed nightly, and it ends in April with a weapon that destroys an entire city in a single flash. Seventy million dead. But by May the economy is doing better than ever. You don’t see horses anymore.
The empire dissolves — India, Africa, gone in weeks. People are talking about the nuclear arms race, and the end of the human species. You take a flight for the first time. In June, humans walk on the moon, and you watch it happen through your new television.
You leave your factory job and get a desk job. Your new job title didn’t even exist at the start of the year. You are rich now, by the standards you are used to: Big clean house, plenty of good food, many fancy new appliances. Personal computers appear in August. In October, something called the internet connects them. In November, everyone carries small glass rectangles containing a telephone, a camera, a library, and a map. You pick one up and can’t figure out how to make it work. A child shows you.
You hear about climate change, gene editing, cryptocurrency. Something called “artificial intelligence” beats any human at chess; experts say it’s not actually intelligent though. Then in December a new version beats top Go players; experts say it’s scientifically interesting but still not truly intelligent. The next week, there’s a new version that can write sloppy essays and hold conversations. Now the experts are divided.
...
I suspect that this analogy might understate the pace of change and vertigo induced by the AI transition, for several reasons:
1. In the analogy, the non-slowed-down human population grows from about 400 million to about 7 billion, a bit more than 1 OOM. Whereas the AI population will grow by many OOMs, starting as a small fraction of the human population and coming to dwarf it.
2. In the analogy, the non-slowed-down human population operates at a flat 100x speed compared to the slowed-down narrator. But in the AI case, the AIs will probably get faster over time.
3. More importantly, in the AI case the AIs will get qualitatively smarter, probably by quite a lot, over time. Whereas in the historical analogy, the humans of 1900 may be more educated and a bit smarter than the humans of 1500, but the difference isn’t huge.
This is a wonderful essay — really interesting. I have one question. I do acknowledge the possibility of an intelligence explosion, but I’d like to understand in more detail the scenario you describe, like in AI 2027, where several centuries of technological progress could occur within just 1–2 years. I’m not skeptical about a technological explosion driven by superintelligence — I simply want to better understand your reasoning.
What I want to understand is how much of an “industrial explosion” — that is, an explosion in research capital — is required for a “technology explosion.” In your AI 2027 report, it seemed to me that you climb several centuries’ worth of the technological tree even without a very large industrial expansion.
Footnote 68 of the Forethought paper (https://www.forethought.org/research/preparing-for-the-intelligence-explosion#the-technology-explosion) employs a Cobb-Douglas R&D production function (σ = 1) in its quantitative analysis of a technology explosion, with cognitive labor exponent γ = 0.7 derived from NSF R&D expenditure data. Under this assumption, an explosive increase in cognitive capability can produce centuries of technological progress even with limited physical R&D capital.
However, Growiec, McAdam and Mućk (2023, Kansas City Fed) directly estimated the elasticity of substitution between R&D labor and R&D capital in the idea production function, finding σ = 0.7–0.8 using U.S. data from 1968–2019. Their conclusion is that “rather than ideas getting harder to find, the R&D capital needed to find them has become scarce.”
Replacing the Cobb-Douglas assumption in footnote 68 with a CES production function using this empirically estimated σ significantly alters the conclusions.
Key findings: assuming C = 10^{10} (an explosive increase in cognitive effort), the conclusions vary dramatically with σ:
σ = 1.0 (footnote 68’s assumption): ~3x R&D capital expansion is sufficient for 300 years of progress.
σ = 0.75 (midpoint of Growiec et al.): 100 years of progress requires ~17x R&D capital expansion; 300 years requires several hundred thousand times expansion, equivalent to hundreds of times current world GDP.
σ = 0.7 (lower bound of Growiec et al.): 100 years requires ~45x R&D capital; 300 years requires ~650,000x.
The Cobb-Douglas assumption in footnote 68 is therefore decisive for the conclusion. Within the empirically supported range of σ = 0.7–0.8, material bottlenecks are far more severe than footnote 68 suggests.
The AI 2027 scenario envisions AGI/ASI rapidly climbing the technology tree within 1–2 years, achieving centuries’ worth of technological progress. In light of this analysis, several questions arise:
1. Does the AI 2027 scenario implicitly assume σ ≈ 1 for the relationship between cognitive effort and physical R&D capital? If so, how do you evaluate the empirical findings of Growiec et al. (σ = 0.7–0.8)?
2. In a world where σ = 0.75, achievable technological progress within 1–2 years may be limited to roughly 100 years’ worth. While 100 years of progress would still be revolutionary (curing most diseases, substantially slowing aging, universal robotics, etc.), it may not reach the most ambitious technologies mentioned in AI 2027, such as mind uploading or atomic-precision nanoscale manufacturing. To what extent would the AI 2027 scenario need to be revised in this case?
3. Do you believe ASI could endogenously raise the effective σ toward 1 by increasing the efficiency of existing physical capital (extracting more information from the same experimental apparatus, substituting simulation for physical experimentation, and so on)? If so, how much could σ plausibly rise from its historical level of 0.7–0.8 within a 1–2 year timeframe?
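The σ-sensitivity claimed above can be reproduced with a short script. This is my own sketch, not the commenter’s actual model: I assume a CES idea-production function with cognitive share a = 0.7 (mirroring the γ = 0.7 exponent), C = 10^10, and I take the research-output target to be whatever Cobb-Douglas delivers with the ~3x capital expansion cited for σ = 1. The exact figures depend on those assumptions, but the blow-up in required capital as σ falls below 1 is robust:

```python
# Sketch: how much R&D capital K does a CES idea-production function need to
# match a given research-output target, as the elasticity sigma varies?
# Assumptions (mine, for illustration): cognitive share a = 0.7, C = 1e10,
# target output = Cobb-Douglas (sigma = 1) output with a ~3x capital expansion.

def required_capital(target, C, sigma, a=0.7):
    """Invert R = (a*C^rho + (1-a)*K^rho)^(1/rho) for K, with rho = (sigma-1)/sigma."""
    rho = (sigma - 1.0) / sigma
    rhs = target**rho - a * C**rho      # equals (1-a) * K^rho
    if rhs <= 0:                        # target exceeds the capital-bottleneck asymptote
        return float("inf")
    return (rhs / (1.0 - a)) ** (1.0 / rho)

C = 1e10
target = C**0.7 * 3**0.3  # Cobb-Douglas output with a ~3x capital expansion

for sigma in (0.99, 0.9, 0.8, 0.75, 0.7):
    print(f"sigma={sigma}: K must grow ~{required_capital(target, C, sigma):,.0f}x")
```

Under these assumed parameters, σ = 0.75 demands capital expansion in the hundreds of thousands, the same order of magnitude as the figures quoted above; the precise values shift with the assumed share and target.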
CES is almost as much of an oversimplification as Cobb-Douglas, and any value under σ=1 means labor and capital can each bottleneck output to some (fairly small) finite value if the other goes to infinity. E.g. if σ=0.8 and labor and capital are equally important, then output will only 16x if labor goes to infinity and capital is unchanged.
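The 16x figure can be checked directly. A minimal sketch, where the equal-weights CES form is my reading of “equally important”:

```python
# CES output Y = (0.5 * L^rho + 0.5 * K^rho)^(1/rho), with rho = (sigma - 1) / sigma.
# For sigma = 0.8 (rho = -0.25), send labor to infinity while capital stays at 1:
# the labor term vanishes, leaving Y -> (0.5)^(1/rho) = 0.5^(-4) = 16.

def ces(L, K, sigma):
    rho = (sigma - 1.0) / sigma
    return (0.5 * L**rho + 0.5 * K**rho) ** (1.0 / rho)

print(ces(1.0, 1.0, 0.8))              # baseline: 1.0
print(ces(1e12, 1.0, 0.8))             # ~15.9, approaching the ceiling
print(0.5 ** (1 / ((0.8 - 1) / 0.8)))  # closed-form asymptote: 16.0
```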
For physical capital in the form of computers it seems reasonable to me that AIs much better at coding than current AIs will get basically unlimited value from existing computers, just with diminishing marginal returns. For other physical capital, probably we need an increase in quality, though maybe not an increase in quantity. E.g. a new type of AFM capable of serving as a first-stage nanofactory could be designed, which would be 10,000x more valuable for nanoscale manufacturing research than current models, and therefore represent 10,000x the capital, but is the same size and so would not visibly result in an industrial explosion.
This is great. Both as a literary condensed history as well as communicating the felt acceleration. There is so much insight in there—some plain and some I feel a bit hidden. There is also a distinction I’m not sure you intend that I want to highlight.
In Year 2, you sketch a world where the political drama is loud and immediate, while Newton lands like a curiosity:
“Later you’ll realize Newton mattered more.”
There are insights that will matter because they change what is explainable, but they don’t force themselves on the average person.
In Year 4, the opposite happens:
“In January, railways appear [...] By February they’re everywhere.”
This isn’t just “better explanations.” It’s an implementation shock that forces adaptation via new schedules, wider logistics, and new expectations about distance and time. You can resist Newton’s treatise without penalty; you can’t resist a railway that transports you and your goods.
A lot of “AI at 100x speed” remains Newton-like unless it crosses the threshold to railway-like change that pushes on the world.
You even gesture at this: Newcomen’s engine is “hype you don’t see.” Watt’s improvement is “unpleasant but not alarming.” Those are capability jumps that remain optional until they become embedded in institutions and capital stock.
As long as ideas are only conceptual and their technical and social consequences are unknown or still being explored rather than implemented, the average person doesn’t see or hear about them. They circulate only in a smaller research community and are then productized by companies, often at significant risk. Only when the ideas are implemented and brought into a form that works for the market in consumer goods, production technology, or social change does the average person see them. This makes the pioneers, not the inventors, of an idea the buffer between the idea and its implementation.
In your piece, you make the effect of the buffers a recurring theme:
“But the texture of life is identical in December to what it was in January.”
“You still read by candlelight, travel by horse, communicate by letter.”
“You still travel by horse, communicate by letter, go to Church on Sunday.”
And you also describe what happens when the buffer gets saturated and the ideas spread wider. Only then do culture and people’s beliefs and habits update, but slowly and incrementally:
“Now people are saying maybe we’re all descended from monkeys instead of Adam and Eve. You don’t believe it.”
“You still go to church, sometimes, but you don’t really believe anymore.”
But the process doesn’t only happen in time. It also happens in space. We see such buffers at work today in the rural-to-urban difference. Cities run closer to the frontier because they concentrate infrastructure, capital, service networks, etc. Rural areas often lag not because people are behind, but because everything is thinner: fewer institutions per square kilometer, fewer investments, and generally slower cycles. Slowness means delay, and many of the innovations haven’t diffused there (yet).
“Electric light banishes the darkness.”
We can see right now how people felt in Year 4. In many regions of many developing countries, this is not only a lived memory but still quite common: if not every night, then at least during frequent blackouts. My mother-in-law in rural Kenya has power now, but no TV, no dishwasher, no microwave; in fact, food is prepared over a fire. My wife grew up with stories told around the hearth, the only light source. And while India has amazingly managed to connect everyone in a short time (a railroad-like shock), in South Sudan only 5% have electricity.
Even within one country, you can live a digital job life an hour away from a world that is more like “letter and horse.”
So I think your essay nails the psychology. But the element that predicts the experience is not “faster minds.” It’s whether the results of the minds remain Newton-like, i.e., ideas guarded by elites, tried in isolated experiments, and hidden behind mediated interfaces, or if they become railway-like where they get embedded in and reconfigure the environment faster than people can renegotiate norms.
The question becomes: Will buffers remain or not? And that depends on whether humans and human institutions remain in control. Thus my question to you is:
Who do you think will hold the controls?
Not “who has the smartest models,” but who gets end-to-end control over the channels that move material, money, permissions, and enforcement? Do you imagine AI mostly as a datacenter advisory layer inside existing institutions (the geniuses in a datacenter), with humans in the loop? Or do you imagine AI as embodied systems, whether autonomous robots or AI-controlled actuators of many kinds, that directly substitute for human labor and coordination in factories, care, construction, or even security and regulation?
I think this story might be a useful bit of propaganda for convincing people who are not already feeling future shock to start feeling it, which may be useful for getting political support.
Looking at the actual object level, and setting aside the massive complicated assumption “If all goes well”: I think this is an unfair perspective, because if all goes well, then AIs care about the wellbeing of humans and humanity, in which case there will be an incomprehensible collective of incomprehensible intelligence devoted to solving the problem of making humans feel comfortable adjusting to the future environment they now find themselves in.
It’s the marginal worlds between “things go well” and “things go poorly” where future shock is a concern.
If you haven’t already, maybe look at Bostrom’s Deep Utopia. I think his exploration of the “things go well” idea is quite good, although the format of the book seems optimized to amuse rather than to inform in an organized and efficient manner. I’m not sure I would have made the same decision.
I think “autopotency” is a relevant concept here. Moving from a “post-instrumental utopia” to a “plastic utopia”, we would expect to see people beginning to modify themselves in deep, repeatable ways that solve the issues of future shock.
I would be strongly in favor of you expanding this into a main page post; this is really good.