Concrete positive visions for a future without AGI

“There was a threshold crossed somewhere,” said the Confessor, “without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to me, in my time, and rescued me. Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it. Humanity finally got its act together.”

— Eliezer Yudkowsky, Three Worlds Collide


A common sentiment among people worried about AI x-risk is that our world is on track to stagnate, collapse, or otherwise come to a bad end without (aligned) AGI to save the day.

Scott Alexander:

[I]f we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality.

@disturbance, in a recent LW post that got lots of comments:

Statement: I want to deliberately balance the caution and the recklessness in developing AGI, such that it gets created in the last possible moment so that I and my close ones do not die.


A seemingly straightforward implication of this view is that we should therefore be willing to take on some amount of risk in order to build towards AGI faster than we would in a world where we had the luxury to take our time.

I think some of these sentiments and their implications are based on a mistaken view of the relative difficulty of particular technical and social challenges, but here I want to focus on a totally different point: there are lots of ways that things could go well without AGI (at least for a while).

Even if positive scenarios without AGI are unlikely or unrealistic given our current circumstances and trajectory, it’s useful to have a concrete vision of what a good medium-term future without AGI could look like. I think it’s especially important to take a moment to reflect on these possible good futures because recent preliminary governance wins, even if they succeed without qualification, are mainly focused on restriction and avoidance of bad outcomes rather than on building towards particular positive outcomes.

The rest of this post is a collection of examples of technologies, ideas, projects, and trends unrelated to AGI that give me hope and joy when I see them being worked on or talked about. It’s not meant to be exhaustive in any sense—mostly it is just a list of areas that I personally enjoy reading about, and would consider professional opportunities related to them.

Most of them involve solving hard technological and social problems. Some are quite speculative, and likely to be intractable or extremely unlikely to come to pass in isolation. But making incremental progress on any one of them is probably robustly positive for the world, as well as lucrative and fulfilling for the people working on it[1]. And progress tends to snowball, as long as there’s no catastrophe to stop it.

As you read through the list, try to set aside your own views and probabilities on AGI, other x-risks, and fizzle or stagnation scenarios. Imagine a world where it is simply a given that humanity has time and space to flourish unimpeded for a time. Visualize what such a world might look like, where solutions are permitted to snowball without the threat of everything being cut short or falling to pieces. The purpose of this post is not to argue that any such world is particularly likely to be actualized; it is intended to serve as a concrete reminder that there are things worth working towards, and concrete ways to do so.


Energy abundance; solar and nuclear energy

[Figure: global solar capacity added vs. forecasts; from AukeHoekstra on Twitter]

Energy abundance has the potential to revolutionize many aspects of civilization, and there are fewer and fewer technological barriers holding it back. Regulatory barriers (e.g. to nuclear power, and to all kinds of construction and growth generally) may remain, but in a hypothetical world free of unrelated obstacles, energy abundance is looking more and more like a case of “when” and “how,” rather than “if”.

Prediction markets

Robin Hanson has been talking about the benefits of prediction markets since 1988, but it feels like, with the recent growth of Manifold and the continued operation of real-money prediction markets (PredictIt, Kalshi, Polymarket), things are starting to snowball.

I’m most excited about the future of real-money policy prediction markets on all sorts of legal and governance issues. The possibility of such markets might seem distant given the current regulatory climate around prediction markets themselves, but that seems like the kind of thing that could change rapidly under the right circumstances. Imagine if, instead of a years-long slog through the FDA, the process for getting a new drug approved was a matter of simply convincing a prediction market that the efficacy and harms met certain objective risk thresholds, with private insurers competing to backstop the liability.
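To make the mechanism concrete, here is a minimal sketch of the kind of automated market maker Hanson proposed, a logarithmic market scoring rule (LMSR). The two-outcome drug-approval framing, the function names, and the liquidity parameter value are my own illustrative assumptions, not a description of any existing market:

```python
import math

def cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def prices(q, b):
    """Implied probability of each outcome at the current share quantities."""
    total = sum(math.exp(x / b) for x in q)
    return [math.exp(x / b) / total for x in q]

def cost_to_buy(q, outcome, shares, b):
    """What a trader pays to buy `shares` of `outcome`: C(q') - C(q)."""
    new_q = list(q)
    new_q[outcome] += shares
    return cost(new_q, b) - cost(q, b)

# Toy two-outcome market: "does the drug meet a pre-registered efficacy threshold?"
b = 100.0        # liquidity parameter (assumed value, purely illustrative)
q = [0.0, 0.0]   # shares outstanding for YES and NO

print(prices(q, b))              # [0.5, 0.5] before any trades
print(cost_to_buy(q, 0, 50, b))  # what it costs to push the market toward YES
q[0] += 50
print(prices(q, b))              # YES probability rises above 0.5
```

The appeal of the scoring-rule design is that the market always quotes a coherent probability, and traders only profit by moving that probability toward what actually happens, which is exactly the property a regulator would want to lean on.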


Space travel and space tech

Kinda self-explanatory. Starlink and SpaceX seem to be leading the way at the moment, but I’m excited to follow other private and national space efforts.

Life extension

Bryan Johnson is all over Twitter these days, documenting his self-experimentation in a quest to extend his own lifespan, ideally indefinitely. This is also a favorite topic of OG transhumanists on LessWrong and elsewhere, and it’s nice to see it get some attention from someone with serious resources.

I haven’t looked too closely into the specifics of the “Blueprint protocol”, but from a distance it strikes me as about the right mix: combining and synthesizing a bunch of existing science in an evidence-based way, plus mad experimentation of his own design.

Occupational licensing reform

A lot of occupational licensing in the U.S. and around the world is mostly about rent-seeking. Reform efforts have enjoyed some success in recent years, and it’s nice to see some feel-good / commonsense wins in U.S. politics, even if they’re probably relatively minor in the grand scheme of things. Shoshana Weissmann is a great Twitter follow on this issue. More generally, efforts to reduce credentialism seem like a promising avenue towards incremental reclamation of territory ceded to Moloch.

Automation of all kinds

I think efforts to combine software, robotics, logistics, mechanical engineering, and the AI we already have to reduce the amount of labor that humans do, especially grunt work, are very cool and likely to lead to large, real productivity gains.

Self-driving cars and trucks are perhaps the flashiest example of this kind of automation, but I expect more gradual automation in restaurants, factories, cleaning, maintenance, and other areas of the economy that currently rely on a lot of relatively low-skill, low-paying human labor; that automation should unlock a lot of human capital. Avoiding squandering that unlocked potential, or ceding it right back to Moloch, will be a challenge, but it’s a challenge that looks a bit more solvable with a lot more collective wealth.

Semi-related: AI Summer Harvest

Charter cities

Scott Alexander’s posts on this topic are always a treat. More so than any specific project or proposal, what excites me about charter cities is the abstract appeal of starting from scratch. In software, legacy systems often complicate and slow development of new features, and I think the same dynamic exists in many traditional cities. Beneath the streets of New York City are layers and layers of utilities and pipes, some over a century old. Charter cities offer an opportunity to start fresh: literal greenfield projects.

Genetic engineering and biohacking

In humans, plants, and animals. There’s too much cool stuff here to talk about or even list.

Rationality

The original sequences hold up amazingly well, and modern LessWrong is still pretty great. I think there is a lot more room to grow, whether by writing and re-writing key ideas with better (or just different) presentation, spreading those ideas more widely, or developing more CFAR-style curricula and teaching them to people of all ages.

Some of my own favorite recent rationality writing is buried in Eliezer’s million-word collaborative BDSM fiction, and could really use a more accessible presentation. (Though some of it, e.g. on Bayesian statistics and the practice of science, already stands pretty well on its own, despite the obscure format.)

Effective altruism

Also pretty great. I could criticize a lot of things about EA, but my overall feelings are pretty well-captured by adapting a classic Churchill quote: “EA is the worst form of altruism except for all those other forms that have been tried from time to time...”

I think an EA that was free from the threat of AGI x-risk for a while would have the potential to be even more amazing. In such a world, my guess is that EA would succeed at directing a sizable fraction of the wealth and productivity gains unlocked by other areas in this list towards tackling a lot of big problems (global poverty, global health, animal welfare, non-AGI x-risks, etc.) much faster than those problems would get addressed in a world without EA.

Cryonics

A common view is that current cryonics tech is speculative, and a hard sell to friends and family. Another worry is that civilization hardly seems stable enough to keep you well-preserved and on track to advance to a point where it can reliably bring you back as yourself, into a world you’re happy to be brought back into.

Those concerns might be valid, but they look like problems on which it is possible to make incremental progress. If you mainly care about immortality for yourself and your loved ones, my guess is your best bet is to push hard for cryonics research and adoption, as well as overall civilizational stability and adequacy. Incremental progress in any of those areas seems far more likely to increase the likelihood you are eventually brought back, compared to pushing towards AGI as fast as possible.[2]

Georgism

Georgism and land value taxes are an elegant idea for aligning incentives, funding public goods, and combating rent-seeking. I think land value taxes are simple enough that there is some hope of incremental adoption in some form or jurisdiction in the relatively near future. A decent introduction and source for further reading is this winner from Scott’s book review contest a couple of years ago.
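As a back-of-the-envelope illustration of the incentive argument (every number below is a made-up assumption), a land value tax falls only on the unimproved value of a parcel, so building more on the same land doesn’t raise the owner’s tax bill:

```python
# Toy comparison of a conventional property tax vs. a land value tax (LVT).
# All values and rates are illustrative assumptions, not real policy numbers.
land_value = 300_000        # unimproved value of the parcel
building_value = 500_000    # value of the structure on it

property_tax_rate = 0.010   # 1.0% of land + improvements
lvt_rate = 0.027            # 2.7% of land value only (picked to be roughly revenue-neutral here)

property_tax = property_tax_rate * (land_value + building_value)  # 8,000
land_value_tax = lvt_rate * land_value                            # 8,100

# Now suppose the owner doubles the size of the building:
bigger_building = 2 * building_value
property_tax_after = property_tax_rate * (land_value + bigger_building)  # 13,000 -- building is penalized
lvt_after = lvt_rate * land_value                                        # 8,100  -- unchanged

print(property_tax, property_tax_after)
print(land_value_tax, lvt_after)
```

Because the supply of land is fixed, taxing its unimproved value raises revenue without discouraging construction, which is the incentive-alignment point above.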

Balsa Research

Zvi’s thing, focused on addressing low-hanging fruit and finding achievable, shovel-ready wins through public policy. Whether Balsa in particular succeeds at its mission or not, I think it is an exciting model for how rationalists can enter the space of public policy-making and politics in an effective and robustly positive way.

dath ilan

dath ilan is Eliezer’s fictional medianworld. The defining characteristic is that the real Eliezer’s most important personality traits are the median among the entire planet’s population. Because Eliezer himself is unusually smart, kind, and honorable compared to the average earthling, the population of dath ilan is much smarter, kinder, and better coordinated as a whole compared to Earth.

Many details and features of dath ilan would probably look pretty strange to an average earthling. Some of this strangeness is due to Eliezer’s other unusual-for-Earth traits and personal preferences, but the deeper lesson of dath ilan is that many of their systems of governance, coordination mechanisms, methods of doing science, and modes of thought are actually derived from a combination of reasoning from first principles and relatively few assumptions about human nature and behavior.

It’s not that there’s a particular “dath ilani” way of aggregating information or resolving conflicts or running an economy. Rather, the desire to flourish, cooperate, build, and enjoy the benefits of technological civilization and other nice things is common among a very wide distribution of possible humans. It might take a certain baseline of population-level intelligence, coordination ability, and historical luck to actually achieve all those nice things, but civilizations which do manage the feat will probably share some commonalities with each other, and with dath ilan, that Earth currently lacks.

dath ilan is currently getting along pretty well without AGI, making it an inspiring picture of what one possible future of Earth without AGI could look like. In worlds where Earth succeeds in the long-term, I expect that it will share many features with dath ilan. But I think a more interesting and exciting question involves thinking about ways that the best versions of Earth and dath ilan will still be different.


When I imagine a world that manages to solve problems or make significant progress in even a few of the areas above, I feel a lot more optimistic about the possibility of such a world confronting the challenge of building an AGI safely, eventually. And even if we still fail at that ultimate challenge, such a world would be a nicer place to live in the meantime.

If you’re feeling despair about recent AI progress, consider turning some of your attention or professional efforts elsewhere. There are lots of opportunities in research, startups, and established companies that have nothing to do with pushing the frontier of AI capabilities[3], and lots of ways to have fun, build meaning, and enjoy life that don’t involve thinking about AGI at all.

  1. ^

    Also, none of them come with the moral hazards, uncertainties, and
    x-risks associated with working directly towards AGI.

  2. ^

    My guess is that reconstructing a well-preserved brain and developing indefinite life extension technology for biological humans are probably about equally hard (i.e. both trivial) for a superintelligence.

  3. ^

    Even startups that are focused on applying current AI in service of automating and improving existing business processes in diverse industries are probably net-positive for the world under a wide variety of progress models, and likely to be personally lucrative if you choose wisely.

    Though for both short-term viability and moral hazard reasons, you probably want to be careful to avoid pure “wrapper” startups whose business model depends on continued access to current and future third-party frontier models. But automating business processes or building integrations using current AI at established companies seems positive or at least non-harmful, as long as you’re confident in your ability not to depend on or contribute to “hype” around future advances.