One thing I notice when reading 20th century history is that people in the 1900s-1970s had much higher priors than modern people do that the future might be radically different, in either great or terrible ways. For example:
They talked about how WW1 was the war to end all wars. They seriously talked about the prospect of banning war after WW1. Such things now sound hopelessly naive.
Serious people talked very seriously about the possibility of transformative technological change and social change following from it (e.g. Keynes/Russell speculating that people would work way fewer hours in the future).
As a minor example, between 1905-1915 Churchill spent a bunch of time trying to persuade the British government that on current trends, oil-powered ships would soon be way better than coal-powered ships, and the navy should be converted to oil power. I know of ~no recent examples where a major politician’s main schtick was being thoughtful about the future of technology and making policy based on it. More generally, it was obvious after WW1 that states needed to be doing futurism and technological development in order to understand the military implications of modern technology.
I really feel like the ambient cultural sense among educated Americans is: the future will be kind of like the present, treating it as if something radical will happen is naive. (They sort of say that they think climate change will be apocalyptic, but it feels to me like what they’re really imagining is that the world is “enshittified” further, in the same way that it sucks that DoorDash is now expensive, and maybe poor people elsewhere die.)
I think this is probably mostly because there’s an important sense in which the world has been changing more slowly (at least from the perspective of Americans), and the ways in which it’s changing feel somehow less real. Someone who was 50 in 1945 had seen the collapse of empires that had lasted centuries, unprecedented wars, the sudden shocking rise of Communism, the invention and mass adoption of cars, radio, tanks, etc. That’s just way way crazier than anything that 50 year old Americans have seen. And the main technological advances—phones, internet, social media, and recently AI—seem somehow subtler and easier to ignore, even though they have an objectively large effect on people’s experience of life and on how society functions.
I think that people of the past might have reacted with more credulity to some of our claims about transformative AI.
I often feel like people I’m talking to are demonstrating an embarrassing lack of historical context when they implicitly imagine that states will be stable and that technology won’t drastically change the world. (Or sometimes they say “usually it works better to trade with people than to overpower them”, and my response is “that is really not a historical universal!”)
Eliezer sometimes talks about how people are ruined by modern culture, in a way only fixable by reading 1950s sci-fi (or something like this, I don’t remember). I wonder how much of what he’s talking about is related to this.
As a datapoint, the more I learn about bio, especially recent-ish stuff (the past 1-5 decades), the more I’m like “the whole ‘The Great Stagnation’ thing was basically bullshit”:
DNA sequencing in any form has only existed for about half a century.
Before the 21st century, we hadn’t sequenced a single human genome.
IDK why this isn’t felt so intuitively. Maybe it’s just kinda opaque. People notice “hey the mRNA vaccines were developed really fast, that’s weird/cool” but don’t know about the vastness of the field. There’s plenty of popular “science news” but it is somehow assumed to be unreal / fictional. Maybe because clickbait.
This is going to be transformative on slower scales than other tech because medicine and reprogenetics are inherently slower (slow experiments, more caution, more regulation, more difficult problems) compared to, like, making vehicles. But it’s one of those overestimating short-term change / underestimating long-term change things, probably.
None of these advancements have direct impacts on most people’s day-to-day lives.
In contrast, the difference between “I’ve heard of cars, but they’re playthings for the rich” and “my family owns a car” is transformative for individuals and societies.
At least in the 21st century, new internal combustion engine technologies have been highly reproducible and cheap to verify. There is no large population of engine specialists using various means to generate false or selectively filtered test reports for personal gain. Consequently, no engine configuration used in automotive development has turned out to be fundamentally impossible.
Automobiles are not regulated by a group of accident experts with questionable ties to automotive giants and overly strict automotive ethicists. Consequently, a vehicle cannot be banned for violating some aspect of so-called automotive ethics. New cars also do not require decades of randomized controlled trials involving thousands of participants to gain market approval—costs that smaller automotive companies could never afford.
Driving a car is not regarded as a qualification requiring years of costly university education, but rather as a right enjoyed by all who undergo basic training. The thousands who die annually in car accidents are not perceived as a catastrophic failure of automobiles, compelling society to pressure for their elimination.
Society does not view automobiles as solely for transporting patients. Not every attempt to use cars for faster mobility faces resistance, suspicion from licensed drivers well-versed in automotive ethics, or sparks conspiracy-tinged debates about social equity and the value of life. On the contrary, people have the right to drive to most places they wish to go—provided roads exist and traffic restrictions do not apply.
Of course, there are also virtually no automotive conspiracy theories claiming that only divinely granted legs are suitable for transportation, advocating water as a fuel substitute, or declaring that adding trace amounts of explosives to fuel tanks can achieve any desired speed.
Really? Maybe, I’m not sure. Did you check? If you add up the vaccines developed in the last 50 years, multiplied by the amount of illness and damage each has prevented, what do you get? What about other medical treatments? What about food production downstream of GMOs? Etc.
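The “add it up” check being asked for here can be sketched as a Fermi estimate. Every number below is a made-up placeholder, not sourced data; the point is the structure of the estimate, not the total:

```python
# Back-of-envelope Fermi sketch of "vaccines (and other bio advances)
# times the damage they've prevented". All figures are illustrative
# placeholders, NOT sourced estimates.

# (intervention, hypothetical annual deaths averted, hypothetical years deployed)
placeholder_interventions = [
    ("childhood vaccines (post-1975 additions)", 1_000_000, 40),
    ("statins", 100_000, 30),
    ("antiretrovirals", 500_000, 25),
    ("GMO-driven yield gains", 200_000, 25),
]

# Sum annual-rate * years-deployed across the list.
total_deaths_averted = sum(rate * years for _, rate, years in placeholder_interventions)
print(f"{total_deaths_averted:,} deaths averted under these assumptions")
```

Even if each placeholder is off by an order of magnitude, a sum of this shape lands in the tens of millions: large in absolute terms, yet diffuse enough that no single person’s daily life visibly changes, which fits the thread’s point about removals of bad things being hard to notice.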
Speculatively introducing a hypothesis: It’s easier to notice a difference like
N years ago, we didn’t have X. Now that we have X, our life has been completely restructured. (X ∈ {car, PC, etc.})
than
N years ago, people sometimes died of some disease that is very rare / easily preventable now, but mostly everyone lived their lives mostly the same way.
I.e., introducing some X that causes ripples restructuring a big aspect of human life, vs introducing some X that removes an undesirable thing.
I wonder if it’s a thing where it’s taking a while for those things to hit. Like, mRNA vaccines are only a couple of years old, GLP-1RAs are in a gradual process of being rolled out, etc. If I think of the category of “awesome newish bio stuff I’d like to use”, it seems like most of it becomes widely available to consumers in the near future or last 5 years, with the exception of statins.
Right, I think bio stuff is slower and involves somewhat deeper science. (Compared to car manufacturing, in some sense, maybe. Though it’s hard to say / unclear what the question is, because you have deep stuff with chip manufacturing, and special alloys, and so on; but I think bio has much more prerequisite scientific richness to its big advances). Because it’s slower and deeper, it’s more opaque (i.e. harder to do credit assignment).
I think it’s not just that it’s slower/deeper: my personal sense (which might be just a thing of not requiring much medical care between the ages of 5 and 30) is that the pace at which awesome new stuff is happening in medicines I can buy got much faster in the last few years. If my perception is right, it seems like that requires some explanation of “bio is slower/deeper and also 40 years ago there was a massive breakthrough that took 40 years to percolate”, and not just “bio is slower/deeper”.
Well, to really evaluate this I’d want to see some sort of thorough-ish investigation, that tries to think of most of the main ways that bio would have been impacting people’s lives, and checking the timescales for the prerequisite research. It’s not something I’d update very much about, based on anecdata, because it’s too big of a question.
Drug approvals have gone up in recent years: https://pmc.ncbi.nlm.nih.gov/articles/PMC10856271/ (figure 1). Of course, most of those are not ones that you’ll encounter in day-to-day life. Meanwhile, some of the most commonly used over-the-counter drugs from previous decades have been pulled from the market or made harder to get (cold medicine particularly: phenylpropanolamine due to rare side effects in 2000, oral phenylephrine due to lack of efficacy last year, and pseudoephedrine restricted to behind the counter due to its use in meth production a decade or so ago).
My guess is that the big difference in the speed of biotech compared to early-20th-century-advancements is the relative conservatism of the medical field, and the money & time-consuming certifications you need to get before releasing anything to market. This, in my view, is much less a function of the science, and much more a function of the sociology around the science.
I imagine that’s one relevant thing going on, but also I think the actual science has a lot more depth. The progress I listed doesn’t seem like it’s going slower due to medical regulation.
The progress I listed doesn’t seem like it’s going slower due to medical regulation.
I mean the basic research aspect sure (except for stem cells), but applications of each of the progress areas you listed basically involve either clinical applications or selling GMOs. Both of which have very bad regulatory bottlenecks, especially from a world-wide perspective.
There has been, as you mention, enormous progress in bio-tech and our broader understanding of biology in the past 50 years, but comparatively little application of that knowledge. This is not what you would expect if the science is “deep” but applications easy. How exactly does the progress you listed support this conclusion?
Yeah, as I mentioned in my earlier comment bio stuff is:
inherently slower (slow experiments, more caution, more regulation, more difficult problems)
So yeah I agree applications are also difficult. One thing I’m trying to say is “the progress of bio feels slower in significant part because the science itself is difficult, and is actually slower in a sense, but this is a confusing way to view it because there has also been a large amount of scientific progress; so it’s slower in some sense of being less progress per time relative to the total difficulty of the field, i.e. we’re still mostly confused and mostly powerless in the domain of bio; but the absolute quantity of knowledge and power we’ve gained is large; but people don’t appreciate that; partly that’s because the applications are separately harder and slower, and maybe partly that’s because it’s harder / less legible to attribute the applications to the font of deep progress”.
I agree there’s been a lot of scientific progress, and real GDP per capita, which is maybe the most canonical single metric, continues to rise steadily.
But yeah, I think that this feels underwhelming to people compared to earlier qualitative changes. I think this is some combination of them noting that tech advances affect their lives less, and the tech advances feeling more opaque.
Very, very few people care at all about scientific accomplishments unless they’re directly affecting somebody they personally care about, particularly themselves or their kids. The technical accomplishments you list are in fundamental bio or medical innovations that have so far only affected a small number of individuals, so for the time being, virtually nobody will care about them. The reality is that turning the technical accomplishments you list into safe and effective medicines ready for doctors to give to patients has been extremely time-consuming, expensive, and limited in scope, or has not yet resulted in bona fide medical breakthroughs in humans.
In biology, innovations in earlier times (vaccines and antibiotics) were cheap, saved far more young people, and prevented more common and deadly illnesses. mRNA vaccines and gene editing are applicable to much smaller numbers of people, often in more distant countries, often primarily benefitting older people with comorbidities. You can see this in the tapering off of lifespan gains, the growing gap between lifespan and healthspan, and the rising cost of medical care.
You can also see it in the exceptions: GLP-1 agonists like semaglutide (Ozempic), which benefitted numerous young people in the first world in a very tangible way, provoked much more news coverage, popular awareness, and grassroots optimism about progress in this domain, as well as populist anger at high prices, limited availability, worries about side effects, and so on.
I think we will see similar excitement if substantial strides are made in bringing down IVF costs, improving fertility for women in their late 30s and beyond, and the ability to predict and prevent or terminate pregnancies when the fetus is expected to show profound autism or other serious cognitive impairments; in widespread deployment of xenotransplantation (pig kidneys transplanted into humans); in drastically improved antipsychotics or treatments for substance abuse; and in continued improvements to GLP-1 agonists and potentially other drugs performing related functions, based on insights gained from initial success.
We will also see improvements in the US health situation if Congress increases the number of funded residency positions so that we can expand the doctor workforce, as well as through expansion of telehealth. But given everything that’s going on, I think this is unlikely in the near-term future.
Idk, it’s unclear to me that computers and the Internet are more subtle than cars or radios. Also, 50 year old Americans today have seen the fall of the Soviet Union, the creation of the European Union, enormous advances in civil rights, 9/11, the 2008 crash, Covid, the invasion of Ukraine, etc. This isn’t exactly WWII level, but it’s also nowhere near a static, stable world.
Seems a lot less subtle than radios at least! Cars are a different story, they are big and loud and everywhere. But phones are small and loud and everywhere...
I think they are, because in practice they just didn’t produce the same amount of economic growth. And for most people, the direct impact of these things is entertainment applications, or using them at work (where sometimes they feel like they make things worse). Meanwhile, I remember hearing a story of a woman (someone’s grandma) who was in awe of the washing machine they had just bought, because it had saved her hours of daily gruelling work. And that’s more impactful to one’s life than almost anything computers or the internet have done.
I have heard Peter Thiel make the point that almost all the recent significant advances are concentrated in the digital world, whereas change in the analog world has been very marginal.
Serious people talked very seriously about the possibility of transformative technological change and social change following from it (e.g. Keynes/Russell speculating that people would work way fewer hours in the future).
Don’t we have things like that today? E.g. Bengio and Hinton speculating that ASI will arrive and maybe kill everyone. Also, I’d argue that people like Bostrom and Yudkowsky will be viewed more favorably 50 years from now than they are today, and will generally be thought of as “serious people” to a much greater degree. When Keynes/Russell were speculating about the future, they probably weren’t as renowned as they are now.
Re: Politicians: Andrew Yang isn’t a major politician I guess, but his main schtick was “AI is coming” basically right?
Also Dominic Cummings has similar vibes, possibly even more extreme, than Churchill’s schtick about coal vs. oil.
Re: Politicians: Andrew Yang isn’t a major politician I guess, but his main schtick was “AI is coming” basically right?
Not really, from my memory and checking wikipedia, his campaign was mainly focused on advocating for UBI, and used whatever arguments it could to defend that policy position, including but certainly not limited to an argument that automation was coming, but mainly for menial tasks like truck driving.
I think this is probably mostly because there’s an important sense in which the world has been changing more slowly (at least from the perspective of Americans), and the ways in which it’s changing feel somehow less real.
Maybe another factor is that a lot of the unbounded, grand, and imaginative thinking of the early 20th and the 19th century ended up being either unfounded or quite harmful. So maybe the narrower margins of today are partly a reaction to that, in addition to being a reaction to fewer wild things happening.
For example, many of the catastrophes of the 20th century (Nazism, Maoism, Stalinism) were founded in a kind of utopian mode of thinking that probably made those believers more susceptible to mugging. In the 20th century, postmodernists started (quite rightly, imo) rejecting grand narratives in history, like those by Hegel, Marx, and Spengler, and instead historians started offering more nuanced (and imo accurate) historical studies. And several of the most catastrophic fears, like those of 19th-century millenarianism and nuclear war, didn’t actually happen.
I think you’re probably right about that historical difference. But I don’t agree with the implication that people won’t believe AGI is coming until too late. (I realize this isn’t the main claim you’re making here, but I think you’d agree that’s the most important implication.)
It’s like January 2020 now, when those concerned with Covid were laughed off. That doesn’t mean AGI concerns will be dismissed when more evidence hits. The public could easily go from not nearly concerned enough to making panicked demands for mass action like shutting down half the economy as a precautionary measure.
Yes, the modern assumption that nothing really changes will slow down recognition of AI’s dangers. But not for long if we’re fortunate enough to get a slowish takeoff and public deployments of useful (and therefore creepy) LLM agents. Of course, that might not happen until we’re too close to internal deployment of a misaligned takeover-capable system like Agent-4 from AI 2027. But it’s looking pretty likely we’ll get such deployments and job replacements before the point of no return, so I think we should at least have some contingency plans in case of dramatic public concern.
AI is in far-mode thinking for most people now, but I predict it’s going to be near-mode for a lot of people as soon as we’ve got inarguable job replacement and more common experience with agentic AI.
I’m the first to talk about how foolish people are compared to our idealized self-conception. People are terrible with abstract ideas. But I think the main reason is that they don’t spend time thinking seriously about them until they’re personally relevant. Humans take a long time to figure out new things. It takes a lot of thought. But it’s also a collective process. As it becomes a bigger part of public conversation, basic logic like “oh yeah they’re probably going to build a new species, and that sounds pretty dangerous” will become common.
Note that most of the people talking about AI now are entrepreneurs and AI developers—the small slice of humanity most prone to be pro-AI biased. Most other people intuitively fear it, arguably for good reasons.
I can think of several prominent predictions in the present of similar magnitude.
Every election is proclaimed as the death of American democracy.
Race war precipitated by Whites becoming a racial minority.
The recognition of “same-sex marriages” was to herald a collapse of all public morality.
Restrictions on abortion access reducing women to sex-slaves, à la The Handmaid’s Tale.
I think you’re understating the apocalypticism of climate-change activism.
Smartphones/social media/pornography corrupting the youth, leading to … okay, admittedly this one’s vaguer, but the consequences, whatever they might be, are still expected to be dire.
If overpopulation has ceased to be a major concern, that’s a very recent development.
Similarly, running out of oil was forecast to return technology to horse-drawn carriages and beeswax candles. They’ve definitely stopped saying this, but I heard it in the ’00s.
The difference you’re talking about might be simply due to you discounting these as insane (or maybe just disingenuous) while hailing analogous predictions in the past as wise/prescient.
the future will be kind of like the present, treating it as if something radical will happen is naive
Neglectedness must be observed in resource allocation, and anything worth doing is worth doing seriously. So if criteria such as “naive” determine what gets done, some things that are unusually “naive” will get neglected, and so would be worth doing. And if things are not taken seriously when they are characterized in some way, such as being based on “naive” motivations, they get done inefficiently even when they do get done, and so it would be worth fixing the inefficiency.
That’s just way way crazier than anything that 50 year old Americans have seen. And the main technological advances—phones, internet, social media, and recently AI—seem somehow subtler and easier to ignore
You forgot computers more generally, which only became widely used in the 1980s and 1990s.
It depends where you look. In the 2010s the World Economic Forum was predicting a fourth industrial revolution that would transform every aspect of life. In the 1990s you had Fukuyama saying that the end of the Cold War meant a new worldwide consensus on political ideology. Around the same time, the Internet was also seen as something transformative, and the ideas of nanotechnology haunted the parts of the culture attuned to technological futurism. For that matter, AI utopianism and apocalypticism has been everywhere for the past three years and has never really gone away. The war on terror, the rise of progressivism, the rise of populism, the rise of BRICS, these all have futurisms associated with them. MAGA and the Green New Deal are both intended as utopian visions. So I’d say that the idea that the future will be different from the present, and that we have some capacity to shape it, has never really gone away.
I wonder if it’s less about the rate of change (but I don’t really take any exception to that claim) and more about the divergence of change from expectations. The expectations of the 1950s or ’60s (at least in pop culture) were flying cars and smart robot house servants—think The Jetsons here.
People of the early 20th century had the direct experience of living through some very significant events which they probably had not really expected. The future became much more uncertain, so receptivity to more possible outcomes probably increased. The situation up to now is a bit different, so I wonder if that doesn’t place greater weight on a view of the future as some trend path with variation but mean reversion.
-- Median age in the U.S. was ≤ 30 throughout the 20th century, until, roughly, the start of the 1980s. Today it’s 39. The median age of white Americans is now 44.5! Insofar as more music, advertising, and fiction is written with an older audience in mind, and insofar as people’s preferences tend to shift towards quiescence as they age, I think this would contribute to the “ambient cultural sense” of stagnation that you describe. (Of course, it’s also possible that a widespread belief that the future will be just like the present has made people have fewer kids, thus causing the median age to rise, but that seems likely to be a less sizeable causal channel than the other way around.)
Some pretty important stuff has happened since the 1980s. The collapse of the Soviet Union, the Arab Spring, and the political rise of China were arguably bigger geopolitical deals than most 20th century events pre-WWI, in the interwar period, and during the Cold War. I’d argue it’s a change in the audience, not in the things happening, that’s made the changes these have had in our world seem relatively “subtler and easier to ignore”.
-- Somewhat relatedly, more financial power is in the hands of the geriatric. This paper has some fascinating data. In 1983, the mean net worth of Americans 75 and older was 5% greater than the mean for all Americans; in 2022 it was reportedly 58% greater… meanwhile, the relative amount of wealth belonging to young and middle-aged Americans fell precipitously; people aged 45-54 went from having an average net worth 53% above the mean in 1983 to 9% below the mean in 2022! (To be clear, the paper points to the elderly having higher-valued stock portfolios as a key cause of this wealth composition shift, and it’s possible there’s some incumbency advantage behind this which exists as part of the thing that’s made the world feel like it’s moving more slowly...)
An outsized amount of wealth in people well past the age of child-rearing might mean that political and financial power now caters more towards risk-averse, pro-status-quo initiatives. (The elderly also participate far more in elections than young people [a truism of American politics that’s only slightly less true since the 2020 election], and the AARP has played a considerable role in sinking the political viability of reforms such as the implementation of a nationwide V.A.T.) There is already significant evidence to suggest that an increase in the fraction of elderly residents in a region is associated with less per-child education spending. It wouldn’t surprise me if a bunch of political incentives which originate from the center of power skewing older have additive effects, such that it ends up being outside the Overton window to believe that something like transformative AI is worth paying serious attention or effort to… (the undertaking of infrastructure investment and the meticulous setup of regulatory regimes are actions that tend to disproportionately favor the young… also, by virtue of understanding technology better, the young are far more likely to recognize the need for these things).
To be clear, these intergenerational wealth dynamics may be particular to the United States (and perhaps Western Europe), but given the country’s outsized cultural power, I imagine the world is feeling the effects of this generational competition for cultural power in the U.S., in that the window for transformative technological change might seem as if it were in the past. (Even if the cultural ‘vibe’ feels much more youthful in parts of Eastern Europe and South Asia with still healthy economies, it’s probably the case those countries still look to the U.S. for leadership on matters of technology.)
Ironically, it may be that the amassing of cultural power in an older population, compelled by the greater financial weight in the hands of the 80-somethings who were probably the original audience for 1950s sci-fi, has been the anergy that’s dulled our cultural receptors from detecting the winds of change.
A central pillar of the Democratic Party has been that Republicans will destroy democracy and take the country down with it (somewhat ditto the Republican line on immigration). Both parties are obsessed with the end of American greatness, and motivate their voters through that narrative. To a lesser extent, they’re also nebulously united on “beating China”.
Where I agree is that there’s an absence of a positive vision for the future (something that isn’t just the world of today + better healthcare). I think this is especially true on the American left, which has basically mired itself into an anti-progress position through its natural distrust of billionaires and its reaction to the tech-right rising in political prominence. It’s hard to accept radical change is possible (except through the existing lens of concentration of wealth or environmental impact) when accepting that change means elevating the importance of people in your cultural outgroup. ASI is a silly concern for fringe thinkers in San Francisco; real writers ask the pressing questions about electricity costs, copyright, and corporate influence on the Trump administration.
Compare what the writers of places like the Atlantic, the NYT, or the Times have to say about AI with what people like Steve Bannon say. It’s incredibly near-term and sanded down, while the right has generally been more willing to engage with superintelligence being possible.
As a datapoint: the more I learn about bio, especially recent-ish stuff (the past 1-5 decades), the more I think the whole “The Great Stagnation” thing was basically bullshit:
DNA sequencing in any form has only existed for about half a century.
Before the 21st century, we hadn’t sequenced a single human genome.
Only in the past 5ish years do we have millions of whole genomes (or 10ish years if you count SNP arrays; see https://berkeleygenomics.org/articles/How_many_human_genomes_have_been_sequenced_.html), and the resulting polygenic scores (now including thousands of alleles for dozens of traits).
Epigenomic sequencing (RNA sequencing, methylation sequencing, chromatin accessibility sequencing, spatial sequencing) is a decade old.
Embryonic stem cells? Isolated <50 years ago.
Turning non-stem cells into stem cells? 21st century.
Serious de novo DNA synthesis (more than a few base pairs)? <50 years old.
Megabase synthetic chromosome (stitched together): 2010ish (https://www.csmonitor.com/Science/2010/0521/J.-Craig-Venter-Institute-creates-first-synthetic-life-form).
Mouse gametogenesis? Past decade-ish.
CRISPR-Cas9 gene editing? Past 2 decades.
CRISPR epigenetic editing? Past decade.
Etc.
IDK why this isn’t felt so intuitively. Maybe it’s just kinda opaque. People notice “hey the mRNA vaccines were developed really fast, that’s weird/cool” but don’t know about the vastness of the field. There’s plenty of popular “science news” but it is somehow assumed to be unreal / fictional. Maybe because clickbait.
This is going to be transformative on slower scales than other tech because medicine and reprogenetics are inherently slower (slow experiments, more caution, more regulation, more difficult problems) compared to, like, making vehicles. But it’s one of those overestimating short-term change / underestimating long-term change things, probably.
None of these advancements have direct impacts on most people’s day-to-day lives.
In contrast, the difference between “I’ve heard of cars, but they’re play things for the rich” and “my family owns a car”, is transformative for individuals and societies.
At least in the 21st century, new internal combustion engine technologies exhibit high reproducibility and low verification costs. There is no large population of internal combustion engine specialists generating false or selectively filtered test reports for personal gain. Consequently, no engine configuration used in automotive development has turned out to be fundamentally irreproducible.
Automobiles are not regulated by a group of accident experts with questionable ties to automotive giants and overly strict automotive ethicists. Consequently, a vehicle cannot be banned for violating some aspect of so-called automotive ethics. New cars also do not require decades of randomized controlled trials involving thousands of participants to gain market approval—costs that smaller automotive companies could never afford.
Driving a car is not regarded as a qualification requiring years of costly university education, but rather as a right enjoyed by all who undergo basic training. The thousands who die annually in car accidents are not perceived as a catastrophic failure of automobiles, compelling society to pressure for their elimination.
Society does not view automobiles as solely for transporting patients. Not every attempt to use cars for faster mobility faces resistance, suspicion from licensed drivers well-versed in automotive ethics, or sparks conspiracy-tinged debates about social equity and the value of life. On the contrary, people have the right to drive to most places they wish to go—provided roads exist and traffic restrictions do not apply.
Of course, there are also virtually no automotive conspiracy theories claiming that only divinely granted legs are suitable for transportation, advocating water as a fuel substitute, or declaring that adding trace amounts of explosives to fuel tanks can achieve any desired speed.
Really? Maybe, I’m not sure. Did you check? If you add up vaccines developed in the last 50 years, times the number of illness / damage they’ve prevented, what do you get? What about other medical treatments? What about food production downstream of GMOs? Etc.
Speculatively introducing a hypothesis: it’s easier to notice one kind of difference than the other. I.e., introducing some X that causes ripples restructuring a big aspect of human life is much more salient than introducing some X that removes an undesirable thing.
Relatedly, people systematically overlook subtractive changes.
I wonder if it’s a thing where it’s taking a while for those things to hit. Like, mRNA vaccines are only a couple of years old, GLP-1RAs are in a gradual process of being rolled out, etc. If I think of the category of “awesome newish bio stuff I’d like to use”, it seems like most of it becomes widely available to consumers in the near future or last 5 years, with the exception of statins.
Right, I think bio stuff is slower and involves somewhat deeper science. (Compared to car manufacturing, in some sense, maybe. Though it’s hard to say / unclear what the question is, because you have deep stuff with chip manufacturing, and special alloys, and so on; but I think bio has much more prerequisite scientific richness to its big advances). Because it’s slower and deeper, it’s more opaque (i.e. harder to do credit assignment).
I think it’s not just that it’s slower/deeper: my personal sense (which might be just a thing of not requiring much medical care between the ages of 5 and 30) is that the pace at which awesome new stuff is happening in medicines I can buy got much faster in the last few years. If my perception is right, it seems like that requires some explanation of “bio is slower/deeper and also 40 years ago there was a massive breakthrough that took 40 years to percolate”, and not just “bio is slower/deeper”.
Well, to really evaluate this I’d want to see some sort of thorough-ish investigation, that tries to think of most of the main ways that bio would have been impacting people’s lives, and checking the timescales for the prerequisite research. It’s not something I’d update very much about, based on anecdata, because it’s too big of a question.
Drug approvals have gone up in recent years: https://pmc.ncbi.nlm.nih.gov/articles/PMC10856271/ (figure 1). Of course most of those are not ones that you’ll encounter in day-to-day life. Meanwhile, some of the most commonly used over-the-counter drugs from previous decades have been pulled from the market or made harder to get (cold medicine particularly: phenylpropanolamine due to rare side effects in 2000, oral phenylephrine due to lack of effect last year, and pseudoephedrine restricted to behind the counter due to use in meth a decade ago or so).
My guess is that the big difference in the speed of biotech compared to early-20th-century advancements is the relative conservatism of the medical field, and the money and time-consuming certifications you need to get before releasing anything to market. This, in my view, is much less a function of the science, and much more a function of the sociology around the science.
I imagine that’s one relevant thing going on, but also I think the actual science has a lot more depth. The progress I listed doesn’t seem like it’s going slower due to medical regulation.
I mean the basic research aspect sure (except for stem cells), but applications of each of the progress areas you listed basically involve either clinical applications or selling GMOs. Both of which have very bad regulatory bottlenecks, especially from a world-wide perspective.
There has been, as you mention, enormous progress in bio-tech and our broader understanding of biology in the past 50 years, but comparatively little application of that knowledge. This is not what you would expect if the science is “deep” but applications easy. How exactly does the progress you listed support this conclusion?
Yeah, as I mentioned in my earlier comment, bio stuff is:
So yeah I agree applications are also difficult. One thing I’m trying to say is “the progress of bio feels slower in significant part because the science itself is difficult, and is actually slower in a sense, but this is a confusing way to view it because there has also been a large amount of scientific progress; so it’s slower in some sense of being less progress per time relative to the total difficulty of the field, i.e. we’re still mostly confused and mostly powerless in the domain of bio; but the absolute quantity of knowledge and power we’ve gained is large; but people don’t appreciate that; partly that’s because the applications are separately harder and slower, and maybe partly that’s because it’s harder / less legible to attribute the applications to the font of deep progress”.
I agree there’s been a lot of scientific progress, and real GDP per capita, which is maybe the most canonical single metric, continues to rise steadily.
But yeah, I think that this feels underwhelming to people compared to earlier qualitative changes. I think this is some combination of them noting that tech advances affect their lives less, and the tech advances feeling more opaque.
Very, very few people care at all about scientific accomplishments unless they’re directly affecting somebody they personally care about, particularly themselves or their kids. The technical accomplishments you list are fundamental bio or medical innovations that have so far only affected a small number of individuals, so for the time being, virtually nobody will care about them. The reality is that turning the technical accomplishments you list into safe and effective medicines ready for doctors to give to patients has been extremely time-consuming, expensive, and limited in scope, or has not yet resulted in bona fide medical breakthroughs in humans.
In biology, innovations in earlier times (vaccines and antibiotics) were cheap, saved far more young people, and prevented more common and deadly illnesses. mRNA vaccines and gene editing are applicable to much smaller numbers of people, often in more distant countries, and often primarily benefit older people with comorbidities. You can see this in the tapering off of lifespan gains, the growing gap between lifespan and healthspan, and the rising cost of medical care.
You can also see it in the exceptions: GLP-1 agonists like semaglutide (Ozempic), which benefited numerous young people in the first world in a very tangible way, provoked much more news coverage, popular awareness, and grassroots optimism about progress in this domain, as well as populist anger at high prices, limited availability, worries about side effects, and so on.
I think we will see similar excitement if substantial strides are made in bringing down IVF costs, improving fertility for women in their late 30s and beyond, and the ability to predict and prevent or terminate pregnancies when the fetus is expected to show profound autism or other serious cognitive impairments; in widespread deployment of xenotransplantation (pig kidneys transplanted into humans); in drastically improved antipsychotics or treatments for substance abuse; and in continued improvements to GLP-1 agonists and potentially other drugs performing related functions based on insights gained from their initial success.
We will also see improvements in the US health situation if Congress increases the number of funded residency positions so that we can expand the doctor workforce, as well as through expansion of telehealth. But given everything that’s going on, I think this is unlikely in the near-term future.
idk, it’s unclear to me that computers and the Internet are more subtle than cars or radios. it’s also that 50-year-old americans today have seen the fall of the soviet union, the creation of the european union, enormous advances in civil rights, 9/11, the 2008 crash, covid, the invasion of ukraine, etc. this isn’t exactly WWII level but also nowhere near a static stable world.
Seems a lot less subtle than radios at least! Cars are a different story, they are big and loud and everywhere. But phones are small and loud and everywhere...
I think they are, because in practice they just didn’t produce the same amount of economic growth. And for most people, the direct impact of these things is entertainment applications, or using them at work (where sometimes they feel like they make things worse). Meanwhile I remember hearing a story of a woman (someone’s grandma) who was in awe of the washing machine they had just bought because, well, it had saved her hours of daily gruelling work. And that’s more impactful to one’s life than almost anything computers or the internet have done.
I have heard Peter Thiel make the point that almost all the recent significant advances are concentrated in the digital world, whereas change in the analog world has been very marginal.
Serious people talked very seriously about the possibility of transformative technological change and social change following from it (e.g. Keynes/Russell speculating that people would work way fewer hours in the future).
Don’t we have things like that today? E.g. Bengio and Hinton speculating that ASI will arrive and maybe kill everyone. Also, I’d argue that people like Bostrom and Yudkowsky will be viewed more favorably 50 years from now than they are today, and will generally be thought of as “serious people” to a much greater degree. When Keynes/Russell were speculating about the future, they probably weren’t as renowned as they are now.
Re: Politicians: Andrew Yang isn’t a major politician I guess, but his main schtick was “AI is coming” basically right?
Also Dominic Cummings has similar vibes, possibly even more extreme, than Churchill’s schtick about coal vs. oil.
Not really, from my memory and checking wikipedia, his campaign was mainly focused on advocating for UBI, and used whatever arguments it could to defend that policy position, including but certainly not limited to an argument that automation was coming, but mainly for menial tasks like truck driving.
Nice post!
Maybe another factor is that a lot of the unbounded, grand, and imaginative thinking of the early 20th and the 19th century ended up being either unfounded or quite harmful. So maybe the narrower margins of today are in part a reaction to that, in addition to being a reaction to fewer wild things happening.
For example, many of the catastrophes of the 20th century (Nazism, Maoism, Stalinism) were founded in a kind of utopian mode of thinking that probably made those believers more susceptible to mugging. In the 20th century, postmodernists started (quite rightly, imo) rejecting grand narratives in history, like those by Hegel, Marx, and Spengler, and instead historians started offering more nuanced (and imo accurate) historical studies. And several of the most catastrophic fears, like those of 19th-century millenarianism and nuclear war, didn’t actually happen.
I think you’re probably right about that historical difference. But I don’t agree with the implication that people won’t believe AGI is coming until too late. (I realize this isn’t the main claim you’re making here, but I think you’d agree that’s the most important implication.)
It’s like January 2020 now, when those concerned with Covid were laughed off. That doesn’t mean AGI concerns will be dismissed when more evidence hits. The public could easily go from not nearly concerned enough to making panicked demands for mass action like shutting down half the economy as a precautionary measure.
Yes, the modern assumption that nothing really changes will slow down recognition of AI’s dangers. But not for long if we’re fortunate enough to get a slowish takeoff and public deployments of useful (and therefore creepy) LLM agents. Of course, that might not happen until we’re too close to internal deployment of a misaligned takeover-capable system like Agent-4 from AI 2027. But it’s looking pretty likely we’ll get such deployments and job replacements before the point of no return, so I think we should at least have some contingency plans in case of dramatic public concern.
AI is in far-mode thinking for most people now, but I predict it’s going to be near-mode for a lot of people as soon as we’ve got inarguable job replacement and more common experience with agentic AI.
I’m the first to talk about how foolish people are compared to our idealized self-conception. People are terrible with abstract ideas. But I think the main reason is that they don’t spend time thinking seriously about them until they’re personally relevant. Humans take a long time to figure out new things. It takes a lot of thought. But it’s also a collective process. As it becomes a bigger part of public conversation, basic logic like “oh yeah they’re probably going to build a new species, and that sounds pretty dangerous” will become common.
Note that most of the people talking about AI now are entrepreneurs and AI developers—the small slice of humanity most prone to be pro-AI biased. Most other people intuitively fear it, arguably for good reasons.
My logic and an attempt to convey my intuitions on this are in A country of alien idiots in a datacenter.
I can think of several prominent predictions in the present of similar magnitude.
Every election is proclaimed as the death of American democracy.
Race war precipitated by Whites becoming a racial minority.
The recognition of “same-sex marriages” was to harbinger a collapse of all public morality.
Restrictions on abortion access reducing women to sex-slaves, à la The Handmaid’s Tale.
I think you’re understating the apocalypticism of climate-change activism.
Smartphones/social media/pornography corrupting the youth, leading to … okay, admittedly this one’s vaguer, but the consequences, whatever they might be, are still expected to be dire.
If overpopulation has ceased to be a major concern, that’s a very recent development.
Similarly, running out of oil was forecast to return technology to horse-drawn carriages and beeswax candles. They’ve definitely stopped saying this, but I heard it in the ’00s.
The difference you’re talking about might be simply due to you discounting these as insane (or maybe just disingenuous) while hailing analogous predictions in the past as wise/prescient.
Neglectedness must be observed in resource allocation, and anything worth doing is worth doing seriously. So if criteria such as “naive” determine what gets done, some things that are unusually “naive” will get neglected, and so would be worth doing. And if things are not taken seriously when they are characterized in some way, such as being based on “naive” motivations, they get done inefficiently even when they do get done, and so it would be worth fixing the inefficiency.
You forgot computers more generally, which only became widely used in the 1980s and 1990s.
It depends where you look. In the 2010s the World Economic Forum was predicting a fourth industrial revolution that would transform every aspect of life. In the 1990s you had Fukuyama saying that the end of the Cold War meant a new worldwide consensus on political ideology. Around the same time, the Internet was also seen as something transformative, and the ideas of nanotechnology haunted the parts of the culture attuned to technological futurism. For that matter, AI utopianism and apocalypticism has been everywhere for the past three years and has never really gone away. The war on terror, the rise of progressivism, the rise of populism, the rise of BRICS, these all have futurisms associated with them. MAGA and the Green New Deal are both intended as utopian visions. So I’d say that the idea that the future will be different from the present, and that we have some capacity to shape it, has never really gone away.
I wonder if it’s less about rate of change (but don’t really take any exception to that claim) and more about divergence of change from expectations. 1950s or ’60s expectations (at least in pop culture) were flying cars and smart robot house servants—think The Jetsons here.
People of the early 20th century had the direct experience of living through some very significant events which they probably had not really expected. The future became much more uncertain, so receptivity to more possible outcomes probably increased. The situation has been a bit different up to now, so I wonder if that doesn’t place greater weight on a view of the future as some trend path with variation but mean reversion.
A couple of hypotheses as to why:
-- Median age in the U.S. was ≤ 30 throughout the 20th century, until, roughly, the start of the 1980s. Today it’s 39. The median age of white Americans is now 44.5! Insofar as more music, advertising, and fiction is written with an older audience in mind, and insofar as people’s preferences tend to shift towards quiescence as they age, I think this would contribute to the “ambient cultural sense” of stagnation that you describe. (Of course, it’s also possible that a widespread belief that the future will be just like the present has made people have fewer kids, thus causing the median age to rise, but that seems likely to be a less sizeable causal channel than the other way around.)
Some pretty important stuff has happened since the 1980s. The collapse of the Soviet Union, the Arab Spring, and the political rise of China were arguably bigger geopolitical deals than most 20th century events pre-WWI, in the interwar period, and during the Cold War. I’d argue it’s a change in the audience, not in the things happening, that’s made the changes these have made in our world seem relatively “subtler and easier to ignore”.
-- Somewhat relatedly, more financial power is in the hands of the geriatric. This paper has some fascinating data. In 1983, the mean net worth of Americans 75 and older was 5% greater than the mean for all Americans; in 2022 it was reportedly 58% greater… meanwhile, the relative amount of wealth belonging to young and middle-aged Americans fell precipitously; people aged 45-54 went from having an average net worth 53% above mean in 1983, to 9% below mean in 2022! (To be clear, the paper points to the elderly having higher-valued stock portfolios as a key cause of this wealth composition shift, and it’s possible there’s some incumbency advantage behind this which exists as part of the thing that’s made the world feel like it’s moving more slowly...)
An outsized amount of wealth in people well past the age of child-rearing might mean that political and financial power now caters more towards risk-averse, pro-status-quo initiatives. (The elderly also participate far more in elections than young people [a truism of American politics that’s only slightly less true since the 2020 election], and the AARP has played a considerable role in sinking the political viability of reforms such as the implementation of a nationwide V.A.T.) There is already significant evidence to suggest that an increase in the fraction of elderly residents in a region is associated with less per-child education spending. It wouldn’t surprise me if a bunch of political incentives which originate from the center of power skewing older have additive effects, such that it ends up being outside the Overton window to believe something like transformative AI is worth paying serious attention or effort… (the undertaking of infrastructure investment and the meticulous setup of regulatory regimes are actions that tend to disproportionately favor the young… also, by virtue of understanding technology better, the young are far more likely to recognize the need for these things).
To be clear, these intergenerational wealth dynamics may be particular to the United States (and perhaps Western Europe), but given the country’s outsized cultural power, I imagine the world is feeling the effects of this generational competition for cultural power in the U.S., in that the window for transformative technological change might seem as if it were in the past. (Even if the cultural ‘vibe’ feels much more youthful in parts of Eastern Europe and South Asia with still healthy economies, it’s probably the case those countries still look to the U.S. for leadership on matters of technology.)
Ironically, it may be that the amassing of cultural power in an older population, backed by the greater financial weight in the hands of the 80-somethings who were probably the original audience for 1950s sci-fi, has been the anergy that’s dulled our cultural receptors from detecting the winds of change.
A central pillar of the Democratic Party has been that Republicans will destroy democracy and take the country down with it (somewhat ditto the Republican line on immigration). Both parties are obsessed with the end of American greatness, and motivate their voters through that narrative. To a lesser extent, they’re also nebulously united on “beating China”.
Where I agree is that there’s an absence of a positive vision for the future (something beyond just the world today + better healthcare). I think this is especially true on the American left, which has basically mired itself in an anti-progress position through its natural distrust of billionaires and its reaction to the tech-right rising in political prominence. It’s hard to accept radical change is possible (except through the existing lens of concentration of wealth or environmental impact) when accepting that change means elevating the importance of people in your cultural outgroup. ASI is a silly concern for fringe thinkers in San Francisco; real writers ask the pressing questions about electricity costs, copyright, and corporate influence on the Trump administration.
Compare what the writers of places like the Atlantic, the NYT, or the Times have to say about AI to what people like Steve Bannon say. It’s incredibly near-term and sanded down, while the right has generally been more willing to engage with superintelligence being possible.