FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. “The Precipice” thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.

Topics discussed in this episode include:

-An overview of Toby’s new book
-What it means to be standing at the precipice and how we got here
-Useful arguments for why existential risk matters
-The risks themselves and their likelihoods
-What we can do to safeguard humanity’s potential

You can find the page for this podcast here: https://futureoflife.org/2020/03/31/the-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/

Transcript:

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. This episode is with Toby Ord and covers his new book "The Precipice: Existential Risk and the Future of Humanity." This is a new cornerstone piece in the field of existential risk and I highly recommend this book for all persons of our day and age. I feel this work is absolutely critical reading for living an informed, reflective, and engaged life in our time. And I think even those well acquainted with this topic area will find much that is both useful and new in this book. Toby offers a plethora of historical and academic context to the problem, tons of citations and endnotes, useful definitions, central arguments for why existential risk matters that can be really helpful for speaking to new people about this issue, and also novel quantitative analysis and risk estimations, as well as what we can actually do to help mitigate these risks. So, if you're a regular listener to this podcast, I'd say this is a must-add to your science, technology, and existential risk bookshelf.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. If you support any other content creators via services like Patreon, consider viewing a regular subscription to FLI in the same light. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

Toby Ord is a Senior Research Fellow in Philosophy at Oxford University. His work focuses on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

Toby’s earlier work explored the ethics of global health and global poverty, demonstrating that aid has been highly successful on average and has the potential to be even more successful if we were to improve our priority setting. This led him to create an international society called Giving What We Can, whose members have pledged over $1.5 billion to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement, encouraging thousands of people to use reason and evidence to help others as much as possible.

His current research is on the long-term future of humanity, and the risks which threaten to destroy our entire potential.

Finally, the Future of Life Institute podcasts have never had a central place for conversation and discussion about the episodes and related content. In order to facilitate such conversation, I'll be posting the episodes to the LessWrong forum at LessWrong.com, where you'll be able to comment and discuss the episodes if you so wish. The episodes more relevant to AI alignment will also be crossposted from LessWrong to the Alignment Forum at alignmentforum.org.

And so with that, I’m happy to present Toby Ord on his new book “The Precipice.”

We’re here today to discuss your new book, The Precipice: Existential Risk and the Future of Humanity. Tell us a little bit about what the book is about.

Toby Ord: The future of humanity, that's the guiding idea, and I try to think about how good our future could be. That's what really motivates me. I'm really optimistic about the future we could have if only we survive the risks that we face. There have been various natural risks that we have faced for as long as humanity's been around: 200,000 years of Homo sapiens, or even longer if you take a broader definition of humanity. That's 2,000 centuries, and we know that those natural risks can't be that high, or else we wouldn't have been able to survive so long. It's quite easy to show that the risk should be lower than about 1 in 1,000 per century.
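A rough back-of-the-envelope sketch of this survival-bound argument, with illustrative numbers rather than Toby's exact model: if the per-century natural extinction risk were constant, surviving 2,000 centuries would be very surprising unless that risk were around 1 in 1,000 or lower.

```python
# Back-of-the-envelope check (illustrative, not Toby's exact model): if natural
# extinction risk were a constant r per century, the chance of surviving 2,000
# centuries of human history would be (1 - r) ** 2000.
def survival_probability(risk_per_century: float, centuries: int = 2000) -> float:
    return (1 - risk_per_century) ** centuries

for n in (100, 1_000, 10_000):
    p = survival_probability(1 / n)
    print(f"risk of 1 in {n:>6} per century -> P(survive 2,000 centuries) = {p:.4f}")

# risk of 1 in    100 per century -> P(survive 2,000 centuries) = 0.0000  (~2e-9)
# risk of 1 in   1000 per century -> P(survive 2,000 centuries) = 0.1352
# risk of 1 in  10000 per century -> P(survive 2,000 centuries) = 0.8187
```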

But over that time humanity's power has been increasing, with exponential increases in technological power. We reached a point last century, with the development of nuclear weapons, where we pose a risk to our own survival, and I think that the risks have only increased since then. We're in this new period where the risk is substantially higher than these background risks, and I call this time the precipice. I think that this is a really crucial time in the history and the future of humanity, perhaps the most crucial time, these few centuries around now. And I think that if we survive, and people in the future look back on the history of humanity, schoolchildren will be taught about this time. I think it will be more important than other times that you've heard of, such as the industrial revolution or even the agricultural revolution. I think this is a major turning point for humanity, and what we do now will define the whole future.

Lucas Perry: In the title of your book, and also in its contents, you develop this image of humanity standing at the precipice. Could you unpack this a little bit more? What does it mean for us to be standing at the precipice?

Toby Ord: I sometimes think of humanity as on this grand journey through the wilderness, with dark times at various points, but also moments of sudden progress and heady views of the path ahead and what the future might hold. And I think that this point in time is the most dangerous time that we've ever encountered, and perhaps the most dangerous time that there will ever be. So I see it in this central metaphor of the book: humanity coming through this high mountain pass, where the only path onwards is this narrow ledge along a cliff side, with this steep and deep precipice at the side, and we're inching our way along. But we can see that if we can get past this point, there are ultimately almost no limits to what we could achieve. Even if we can't precisely estimate the risks that we face, we know that this is the most dangerous time so far. There's every chance that we don't make it through.

Lucas Perry: Let’s talk a little bit then about how we got to this precipice and our part in this path. Can you provide some examples or a story of global catastrophic risks that have happened and near misses of possible existential risks that have occurred so far?

Toby Ord: It depends on your definition of global catastrophe. One of the definitions on offer is 10% or more of all people on the Earth at that time being killed in a single disaster. There is at least one time where it looks like we may have reached that threshold, which was the Black Death, which killed between a quarter and a half of the people in Europe and may have killed many people in South Asia, East Asia, and the Middle East as well. It may have killed one in ten people across the whole world, although because our world was less connected than it is today, it didn't reach every continent. In contrast, the Spanish Flu of 1918 reached almost everywhere across the globe and killed a few percent of people.

But in terms of existential risk, none of those really posed an existential risk. We saw, for example, that despite something like a third of the people in Europe dying, there wasn't a collapse of civilization. It seems like we're more robust than some give us credit for. But there have also been times when there wasn't an actual catastrophe, yet there were near misses in terms of the chances.

There are many cases actually connected to the Cuban Missile Crisis, a time of immensely high tensions during the Cold War in 1962. I think that the closest we have come is perhaps the events on a submarine that, unknown to the U.S., was carrying a nuclear weapon. U.S. patrol boats tried to force it to surface by dropping what they called practice depth charges, but the crew of the submarine thought they were real explosives aimed at hurting them. The submarine was made for the Arctic, and so it was overheating in the Caribbean. People were dropping unconscious from the heat and the lack of oxygen as they tried to hide deep down in the water. And during that time the captain, Captain Savitsky, ordered that this nuclear weapon be fired, and the political officer gave his consent as well.

On any of the other submarines in this flotilla, this would have been enough to launch the torpedo, which would then have been a tactical nuclear weapon exploding and destroying the fleet that was pursuing them. But on this one, it was lucky that the flotilla commander, Captain Vasili Arkhipov, was also on board, and he overruled this and talked Savitsky down. So this was a situation at the height of this tension where a nuclear weapon would have been used. And we're not quite sure, maybe Savitsky would have decided on his own not to do it, maybe he would have backed down. There's a lot that's not known about this particular case. It's very dramatic.

But Kennedy had made it very clear that any use of nuclear weapons against U.S. armed forces would lead to an all-out, full-scale attack on the Soviet Union. They hadn't anticipated that tactical weapons might be used; they assumed it would be a strategic weapon, but it was their policy to respond with full-scale nuclear retaliation, and it looks likely that that would have happened. So that's a case where ultimately zero people were killed in the event. The submarine eventually surfaced and surrendered and then returned to Moscow, where people were disciplined, but it brought us very close to full-scale nuclear war.

I don’t mean to imply that that would have been the end of humanity. We don’t know whether humanity would survive the full scale nuclear war. My guess is that we would survive, but that’s its own story and it’s not clear.

Lucas Perry: Yeah. The story to me has always felt a little bit unreal. It’s hard to believe we came so close to something so bad. For listeners who are not aware, the Future of Life Institute gives out a $50,000 award each year, called the Future of Life Award to unsung heroes who have contributed greatly to the existential security of humanity. We actually have awarded Vasili Arkhipov’s family with the Future of Life Award, as well as Stanislav Petrov and Matthew Meselson. So if you’re interested, you can check those out on our website and see their particular contributions.

And related to nuclear weapons risk, we also have a webpage on nuclear close calls and near misses where there were accidents with nuclear weapons which could have led to escalation or some sort of catastrophe. Is there anything else here you’d like to add in terms of the relevant historical context and this story about the development of our wisdom and power over time?

Toby Ord: Yeah, that framing, which I used in the book, comes from Carl Sagan in the '80s. He was one of the people who developed the understanding of nuclear winter, and he realized that this could pose a risk to humanity as a whole. The way he thought about it is that we've had this massive development over the hundred billion human lives that have come before us: this succession of innovations that has accumulated, building up the modern world around us.

If I look around me, I can see almost nothing that wasn't created by human hands. This, as we all know, has been accelerating, and when you try to measure it you often find exponential improvements in technology over time, leading to the situation where we have the power to radically reshape the Earth's surface, both, say, through our agriculture, but also perhaps in a moment through nuclear war. This increasing power has put us in a situation where we hold our entire future in the balance. A few people's actions over a few minutes could actually potentially threaten that entire future.

In contrast, humanity's wisdom has grown only falteringly, if at all. Many people would suggest that it's not even growing. And by wisdom here, I mean our ability to make wise decisions for humanity's future. I talk about this in the book under the idea of civilizational virtues. So if you think of humanity as a group agent, in the same way that we think of nation states as group agents (we talk about whether it's in America's interest to promote this trade policy or something like that), we can think of what's in humanity's interests, and we find that if we think about it this way, humanity is crazily impatient and imprudent.

If you think about the expected lifespan of humanity, a typical species lives for about a million years. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we play our cards right and don't cause our own destruction. The analogy would be that we're about 20% of the way through our life: like an adolescent who's just coming into his or her own power, but doesn't have the wisdom or the patience to pay any real attention to the whole future ahead of them, and so is just powerful enough to get into trouble, but not yet wise enough to avoid it.

If you continue this analogy: it's often hard for humanity at the moment to think more than a couple of election cycles ahead at best, but eight years would correspond to just the next eight hours of this person's life. For short-term interests during the rest of the day, they put the whole rest of their future at risk. And so I think that that helps to show what this lack of wisdom looks like. It's not just a highfalutin term of some sort; you can see that what's going on is that the person is incredibly imprudent and impatient. And I think that many other virtues or vices that we think of in an individual human's life can be applied in this context and are actually illuminating about where we're going wrong.
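As a rough sanity check on that scaling (using illustrative numbers of my own, not Toby's exact figures), eight years out of humanity's remaining species lifespan does map onto roughly the next several hours of an adolescent's remaining life:

```python
# Rough sanity check of the adolescent analogy (illustrative numbers only).
species_age = 200_000         # years Homo sapiens has existed
species_lifespan = 1_000_000  # typical species lifespan, in years
human_lifespan = 80           # assumed lifespan for the analogy, in years

fraction_lived = species_age / species_lifespan   # 0.2, i.e. 20% of the way through
analogous_age = fraction_lived * human_lifespan   # like a 16-year-old

remaining_species_years = species_lifespan - species_age  # ~800,000 years
remaining_human_years = human_lifespan - analogous_age    # ~64 years

# Two election cycles (8 years) as a fraction of humanity's remaining lifespan,
# mapped onto the adolescent's remaining years and converted to hours:
hours = 8 / remaining_species_years * remaining_human_years * 365 * 24
print(f"Humanity is like a {analogous_age:.0f}-year-old; 8 years maps to ~{hours:.0f} hours")
# Humanity is like a 16-year-old; 8 years maps to ~6 hours
```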

Lucas Perry: Wonderful. Part of the dynamic here in this wisdom versus power race seems to be that one of the solutions, slowing down the growth of power, seems untenable or just wouldn't work. So it seems more like we have to focus on amplifying wisdom. Is this also how you view the dynamic?

Toby Ord: Yeah, that is. I think that if humanity were more coordinated, if we were able to make decisions in a unified manner better than we actually can, so if you imagine this was a single-player game, I don't think it would be that hard. You could just be more careful with your development of power and make sure that you invest a lot in institutions and in really thinking carefully about things. I mean, I think that the game is ours to lose. But unfortunately, we're less coherent than that, and if one country decides to hold off on developing things, then other countries might run ahead and produce a similar amount of risk.

There's this kind of tragedy of the commons at a higher level, and so I think that it's extremely difficult in practice for humanity to go slow on the progress of technology. And I don't recommend that we try. In particular, there's at the moment only a small number of people who really care about these issues and are really thinking about the long-term future and what we could do to protect it. And if those people were to spend their time arguing against the progress of technology, I think that it would be a really poor use of their energies and would probably just annoy and alienate the people they were trying to convince. And so instead, I think that the only real way forward is to focus on improving wisdom.

I don't think that's impossible. As you could see from my comment before about how we're kind of disunified, humanity's wisdom partly involves being able to think better about things as individuals, but it also involves being able to think better collectively. And so I think that institutions for overcoming some of these tragedies of the commons or prisoner's dilemmas at the international level are an example of the type of thing that will help humanity make wiser decisions in our collective interest.

Lucas Perry: It seemed that you said by analogy, that humanity’s lifespan would be something like a million years as compared with other species.

Toby Ord: Mm-hmm (affirmative).

Lucas Perry: That is likely illustrative for most people. I think there are two facets of this that I wonder about, in your book and in general. The first is this idea of reaching existential escape velocity, where it would seem unlikely that we would have a reason to end within a million years should we get through the time of the precipice. And the second is that I'm wondering about your perspective on what Nick Bostrom calls the thing that matters here in the existential condition: Earth-originating intelligent life. It would seem curious to suspect that even if humanity's existential condition were secure, we would still be recognizable as humanity in some 10,000, 100,000, or 1 million years' time and not something else. So, I'm curious to know how the framing here functions in general for a public audience, and then also about being realistic about how evolution has not ceased to take place.

Toby Ord: Yeah, both good points. I think that the one million years is indicative of how long species last when they’re dealing with natural risks. It’s I think a useful number to try to show why there are some very well-grounded scientific reasons for thinking that a million years is entirely in the ballpark of what we’d expect if we look at other species. And even if you look at mammals or other hominid species, a million years still seems fairly typical, so it’s useful in some sense for setting more of a lower bound. There are species which have survived relatively unchanged for much longer than that. One example is the horseshoe crab, which is about 450 million years old whereas complex life is only about 540 million years old. So that’s something where it really does seem like it is possible to last for a very long period of time.

If you look beyond that, the Earth should remain habitable for complex life for something on the order of 500 million to a billion years before it becomes too hot due to the continued brightening of our sun. If we took actions to limit that brightening, which looks almost achievable with today's technology, we would basically only need to shade the Earth from about 1% of the energy coming at it, and increase that by another 1% every billion years or so, I think, and we would be able to survive as long as the sun does, for about 7 billion more years. And I think that ultimately we could survive much longer than that if we could reach our nearest stars and set up some new self-sustaining settlement there. And then if that could spread out to some of the nearest stars to that, and so on, then so long as we can reach about seven light years in one hop, we'd be able to settle the entire galaxy. There are stars in the galaxy that will still be burning 10 trillion years from now, and there'll be new stars for millions of times as long as that.

We could have this absolutely immense future in terms of duration. The technologies are beyond our current reach, but if you look at the energy requirements to reach nearby stars, they're high, but they're not that high compared to, say, the output of the sun over millions of years. And if we're talking about a scenario where we'd last millions of years anyway, it's unclear why it would be difficult, with the technology we would have by then, to reach them. It seems like the biggest challenge would be lasting that long in the first place, not getting to the nearest star using technology from millions of years into the future with millions of years of stored energy reserves.

So that's the kind of big-picture question about the timing there. But then you also asked: would it be humanity? One way to answer that is that unless we go to a lot of effort to preserve Homo sapiens as we are now, then it wouldn't be Homo sapiens. We might go to that effort if we decide that it's really important that it be Homo sapiens and that we'd lose something absolutely terrible if we were to change; we could make that choice. But if we decide that it would be better to actually allow evolution to continue, or perhaps to direct it by changing who we are with genetic engineering and so forth, then we could make that choice as well. I think that is a really critically important choice for the future, and I hope that we make it in a very deliberate and careful manner rather than just going gung-ho and letting people do whatever they want. But I do think that we will develop into something else.

But in the book, my focus is often on humanity in this kind of broad sense. Earth-originating intelligent life would kind of be a gloss on it, but that has the issue that, suppose humanity did go extinct and suppose we got lucky and some other intelligent life started off again, I don't want to count that in what I'm talking about, even though it would technically fit into Earth-originating intelligent life. Sometimes I put it in the book as humanity or our rightful heirs, something like that. Maybe we would create digital beings to replace us, artificial intelligences of some sort. So long as they were the kinds of beings that could actually fulfill the potential that we have, that could realize one of the best trajectories that we could possibly reach, then I would count them. It could also be that we create something that succeeds us but has very little value; then I wouldn't count it.

So yeah, I do think that we may be greatly changed in the future. I don’t want that to distract the reader, if they’re not used to thinking about things like that because they might then think, “Well, who cares about that future because it will be some other things having the future.” And I want to stress that there will only be some other things having the future if we want it to be, if we make that choice. If that is a catastrophic choice, then it’s another existential risk that we have to deal with in the future and which we could prevent. And if it is a good choice and we’re like the caterpillar that really should become a butterfly in order to fulfill its potential, then we need to make that choice. So I think that is something that we can leave to future generations that it is important that they make the right choice.

Lucas Perry: One of the things that I really appreciate about your book is that it tries to make this more accessible for a general audience. So, I actually do like it when you use lower bounds on humanity's existential condition. I think talking about billions upon billions of years can seem a little bit far out there and maybe costs some weirdness points, and as much as I like the concept of Earth-originating intelligent life, I also think it costs some weirdness points.

And it seems like you've taken some effort to make the language not so ostracizing by decoupling it somewhat from effective altruism jargon and the kind of language that we might use in effective altruism circles. I appreciate that and find it to be an important step. The same thing, I feel, feeds in here in terms of talking about descendant scenarios. It seems like making things simple and leveraging human self-interest is maybe important here.

Toby Ord: Thanks. When I was writing the book, I tried really hard to think about these things, both in terms of communication, but also in terms of trying to understand what we have been talking about for all of these years when we've been talking about existential risk and similar ideas. Often in effective altruism, there's a discussion about the different types of cause areas that effective altruists are interested in. There are people who really care about global poverty, because we can help others who are much poorer than ourselves so much more with our money, and also about helping animals, who are left out of the political calculus and the economic calculus, and we can see why it is that their interests are typically neglected, and so we look at factory farms and see how we could do so much good.

And then there's this third group of people, and the conversation drifts off a bit: they have this kind of idea about the future that's hard to describe and hard to wrap up together. So I've seen it as one of my missions over the last few years to really work out what it is that that third group of people are trying to do. My colleague, Will MacAskill, has been working on this a lot as well. And what we see is that this other group of effective altruists are this long-termist group.

The first group is thinking about this cosmopolitan aspect: it's not just me, and it's not just people in my country, that matter; it's people across the whole world, and some of those could be helped much more. And the second group is saying it's not just humans that could be helped: if we widen things up beyond the species boundary, then we can see that there's so much more we could do for other conscious beings. And then this third group is saying it's not just our own time that we can help: there's so much we can do to help people across this entire future of millions of years or further. And so the point of leverage is this difference between the entire future and our present generation, which is perhaps just a tiny fraction of it. And if we can do something that will help that entire future, then that's where this could be really key in terms of doing something amazing with our resources and our lives.

Lucas Perry: Interesting. I actually had never thought of it that way, and I think it puts the differences between the groups really succinctly: people focused on global poverty are reducing spatial or proximity bias in our ethics and efforts to do good, the focus on animal farming is a kind of anti-speciesism, broadening our moral circle of compassion to other species, and then long-termism is about reducing time-based ethical bias. I think that's quite good.

Toby Ord: Yeah, that’s right. In all these cases, you have to confront additional questions. It’s not just enough to make this point and then it follows that things are really important. You need to know, for example, that there really are ways that people can help others in distant countries and that the money won’t be squandered. And in fact, for most of human history, there weren’t ways that we could easily help people in other countries just by writing out a check to the right place.

When it comes to animals, there are a whole lot of challenging questions there about the effects of changing your diet, or the effects of donating to a group that prioritizes animals in campaigns against factory farming, or similar. And when it comes to the long-term future, there's this real question: why isn't it that people in the future would be just as able to protect themselves as we are? Why wouldn't they be even more well situated to attend to their own needs? Given the history of economic growth and this kind of increasing power of humanity, one would expect them to be more empowered than us, so it does require an explanation.

And I think that the strongest type of explanation is around existential risk. Existential risks are things that would be an irrevocable loss. As I define them, which is a simplification, I think of an existential catastrophe as the destruction of humanity's long-term potential. You could think of our long-term potential as the set of all possible futures that we could instantiate. If you think about all the different collective actions of humans that we could take across all time, this sets out a huge cloud of trajectories that humanity could go in, and I think that this is absolutely vast. I think that there are ways, if we play our cards right, of lasting for millions of years, or billions or trillions, and affecting billions of different worlds across the cosmos, and then doing all kinds of amazing things with all of that future. So, we've got this huge range of possibilities at the moment, and I think that some of those possibilities are extraordinarily good.

If we were to go extinct, though, that would collapse this set of possibilities to a much smaller set, which contains much worse possibilities. If we went extinct, there would be just one future, whatever it is that would happen without humans, because there’d be no more choices that humans could make. If we had an irrevocable collapse of civilization, something from which we could never recover, then that would similarly reduce it to a very small set of very meager options. And it’s possible as well that we could end up locked into some dystopian future, perhaps through economic or political systems, where we end up stuck in some very bad corner of this possibility space. So that’s our potential. Our potential is currently the value of the best realistically realizable worlds available to us.

If we fail in an existential catastrophe, that's the destruction of almost all of this value, and it's something that you can never get back, because it's our very potential that would be destroyed. That then gives an explanation as to why people in the future wouldn't be better able to solve their own problems: because we're talking about things that could fail now, which helps explain why there's room for us to make such a contribution.

Lucas Perry: So if we were to very succinctly put the recommended definition or framing on existential risk that listeners might be interested in using in the future when explaining this to new people, what is the sentence that you would use?

Toby Ord: An existential catastrophe is the destruction of humanity’s long-term potential, and an existential risk is the risk of such a catastrophe.

Lucas Perry: Okay, so on this long-termism point, can you articulate a little bit more what is so compelling or important about humanity's potential in the deep future, and which arguments are most compelling to you? With a little bit of framing here on the question of whether the long-termist perspective is compelling or motivating for the average person: why should I care about people who are far away in time from me?

Toby Ord: So, I think that if a lot of people were pressed and asked, "Does it matter equally much if a child suffers 100 years in the future as a child at some other point in time?", a lot of people would say, "Yeah, it matters just as much." But that's not how we normally think of things when we think about what charity to donate to or what policies to implement. I do think, though, that it's not that foreign an idea. In fact, the weird thing would be if people mattered different amounts in virtue of the fact that they live at different times.

A simple example of that: suppose you do think that things further into the future matter less intrinsically. Economists sometimes represent this with a pure rate of time preference. It's a component of a discount rate which is just to do with things mattering less in the future, whereas most of the discount rate is actually to do with the fact that money is more important to have earlier, which is actually a pretty solid reason, but that component doesn't affect any of these arguments. It's only this little extra aspect about things mattering less just because they're in the future. Suppose you have a 1% discount rate of that form. That means that someone's older brother matters more than their younger brother: that a life which is equally long and has the same kinds of experiences is fundamentally more important for the older child than the younger child, things like that. This just seems kind of crazy to most people, I think.

And similarly, if you have these exponential discount rates, which is typically the only kind that economists consider, it has these consequences that what happens in 10,000 years is way more important than what happens in 11,000 years. People don’t have any intuition like that at all, really. Maybe we don’t think that much about what happens in 10,000 years, but 11,000 is pretty much the same as 10,000 from our intuition, but these other views say, “Wow. No, it’s totally different. It’s just like the difference between what happens next year and what happens in a thousand years.”
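To make the scale concrete (an illustration with my own numbers, not figures from the episode): with a constant pure rate of time preference, any fixed gap of years always costs the same multiplicative factor, which is why 11,000 years out looks as different from 10,000 years out as 1,000 years out looks from today.

```python
# Illustration (not from the episode): a constant pure rate of time preference d
# weights value at year t by (1 - d) ** t, so a fixed gap of 1,000 years always
# costs the same factor, no matter how far out it occurs.
d = 0.01  # 1% per year, purely about the future mattering less

def weight(t: float) -> float:
    return (1 - d) ** t

print(weight(1_000) / weight(0))        # ~4.3e-5: year 1,000 vs today
print(weight(11_000) / weight(10_000))  # ~4.3e-5: the same factor, year 11,000 vs 10,000
print(weight(1))                        # 0.99: an older sibling "matters" ~1% more
```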

It generally just doesn't capture our intuitions. I think that what's going on is not so much that we have an active intuition that things that happen further into the future matter less, and in fact much less, because they would have to matter a lot less to dampen the fact that we could have millions of years of future. Instead, what's going on is that we just aren't thinking about it. We're not really considering that our actions could have irrevocable effects over the long, distant future. And when we do think about that, such as within environmentalism, it's a very powerful idea: the idea that we shouldn't make irrevocable changes to the environment that could damage the entire future just for transient benefits in our time. And people think, "Oh, yeah, that is a powerful idea."

So I think it’s more that they’re just not aware that there are a lot of situations like this. It’s not just the case of a particular ecosystem that could be an example of one of these important irrevocable losses, but there could be these irrevocable losses at this much grander scale affecting everything that we could ever achieve and do. I should also explain there that I do talk a lot about humanity in the book. And the reason I say this is not because I think that non-human animals don’t count or they don’t have intrinsic value, I do. It’s because instead, only humanity is responsive to reasons and to thinking about this. It’s not the case that chimpanzees will choose to save other species from extinction and will go out and work out how to safeguard them from natural disasters that could threaten their ecosystems or things like that.

We're the only ones who are even in the game of considering moral choices. So in terms of instrumental value, humanity has this massive instrumental value, because what we do could affect, for better or for worse, the intrinsic value of all of the other species. Other species are going to go extinct in about a billion years, basically all of them, when the Earth becomes uninhabitable. Only humanity could actually extend that lifespan. So there's this kind of thing where humanity ends up being key because we are the decision makers. We are the relevant agents, and any other relevant agents will spring from us: they will be our descendants, or things that we create and whose functioning we choose. So, that's the kind of role that we're playing.

Lucas Perry: So what about people who simply care about the short term, who aren't willing to buy into these arguments about the deep future or realizing the potential of humanity's future, who say, "I don't care so much about that, because I won't be alive for it"? There's also an argument here that these risks may be realized within their lifetime or within their children's lifetime. Could you expand on that a little bit?

Toby Ord: Yeah. In The Precipice, when I try to think about why this matters, I think the most obvious reasons are rooted in the present: the fact that it would be terrible for all of the people who are alive at the time when the catastrophe strikes. That needn't be the case. You could imagine things that meet my definition of an existential catastrophe, in that they would cut off the future, but wouldn't be bad for the people who were alive at that time; maybe we all painlessly disappear at the end of our natural lives or something. But in almost all realistic scenarios that we're thinking about, it would be terrible for all of the people alive at that time. They would have their lives cut short and witness the downfall of everything that they've ever cared about and believed in.

That's a very obvious, natural reason, but the reason that moves me the most is thinking about our long-term future and just how important that is: this huge scale of everything that we could ever become. And you could think of that in very numerical terms, or you could just think back over time to how far humanity has come over these 200,000 years, imagine that going forward, and see how small a slice of things our own lives are. You can come up with very intuitive arguments for this as well; it doesn't have to just be a multiply-things-out type of argument.

But then I also think that there are very strong arguments rooted in our past and in other things as well. Humanity has succeeded and has got to where we are because of this partnership of the generations; Edmund Burke had this phrase. Think of what our technological level would be like if we couldn't promulgate our ideas and innovations to the next generation: it would be like it was in Paleolithic times; even a crude iron shovel would be forever beyond our reach. It was only through passing down these innovations and iteratively improving upon them that we could get billions of people working in cooperation over deep time to build this world around us.

Think about the wealth and prosperity that we have, the fact that we live as long as we do. This is all because this rich world was created by our ancestors and handed on to us, and we're the trustees of this vast inheritance. If we were to fail, if we were the first of 10,000 generations to fail to pass this on to our heirs, we would be the worst of all of these generations. We'd have failed in these very important duties. And these duties could be understood as some kind of reciprocal duty to people in the past, or we could also consider them as duties to the future rooted in obligations to people in the past, because we can't reciprocate to people who are no longer with us. The only way you can get this to work is to pay it forward and have this system where we each help the next generation, out of respect for past generations.

So I think there's another set of reasons, more deontological-type reasons. And you could also have the reasons I mentioned in terms of civilizational virtues, an approach rooted in being a more virtuous civilization or species, and I think that that is a powerful way of seeing it as well: to see that we're very impatient and imprudent and so forth, and we need to become more wise. Or alternatively, Max Tegmark has talked about this, and Martin Rees, Carl Sagan and others have seen it as something based on the cosmic significance of humanity: that perhaps in all of the stars and all of the galaxies of the universe, this is the only place where there is either life at all, or intelligent life, or consciousness. There are different versions of this, and that could make this an exceptionally important place, and a very rare thing that could be forever gone.

So I think that there are a whole lot of different reasons here, and I think that previously a lot of the discussion has been of a very technical version of the future-directed argument, where people have thought, well, even if there's only a tiny chance of extinction, our future could have 10 to the power of 30 people in it, or something like that. There's something about this argument that some people find compelling, but not very many. I personally always found it a bit like a trick. It's a little bit like an argument that zero equals one: you don't find it compelling, but if someone says to point out the step where it goes wrong, you can't see a step where the argument goes wrong, yet you still think, "I'm not very convinced; there's probably something wrong with this."

And then people who are not from the sciences, people from the humanities, find it actively alarming that anyone would make moral decisions on the grounds of an argument like that. What I'm trying to do is to show that actually there's this whole cluster of justifications rooted in all kinds of principles that many people find reasonable, and you don't have to accept all of them by any means. The idea here is that if any one of these arguments works for you, then you can see why it is that you have reasons to care about not letting our future be destroyed in our time.

Lucas Perry: Awesome. So, there’s first this deontological argument about transgenerational duties to continue propagating the species and the projects and value which previous generations have cultivated. We inherit culture and art and literature and technology, so there is a duties-based argument to continue the stewardship and development of that. There is this cosmic significance based argument that says that consciousness may be extremely precious and rare, and that there is great value held in the balance here at the precipice on planet Earth and it’s important to guard and do the proper stewardship of that.

There is this short-term argument that says that there is some reasonable likelihood of catastrophe; I think you put total existential risk for the next century at one in six, which we can discuss a little bit more later. So that would also be very bad for us and our children and short-term descendants, should it be realized in the next century. Then there is this argument about the potential of humanity in deep time. I think we've talked a bit here about there being potentially large numbers of human beings in the future, or our descendants, or other things that we might find valuable, but I don't think that we've touched on the part about a change in quality.

There are these arguments about quantity, but there are also arguments about quality. I really like how David Pearce puts it when he says, "One day we may have thoughts as beautiful as sunsets." So, could you expand a little bit on this argument about quality, which I think also feeds in, and also on the digitalization aspect that may happen? There are also arguments around subjective time dilation, which may lead to more and better experience in the deep future. This also seems to be another important aspect that's motivating for some people.

Toby Ord: Yeah. Humanity has come a long way and various people have tried to catalog the improvements in our lives over time. Often in history, this is not talked about, partly because history is normally focused on something of the timescale of a human life and things don’t change that much on that timescale, but when people are thinking about much longer timescales, I think they really do. Sometimes this is written off in history as Whiggish history, but I think that that’s a mistake.

I think that if you were to summarize the history of humanity in say, one page, I think that the dramatic increases in our quality of life and our empowerment would have to be mentioned. It’s so important. You probably wouldn’t mention the Black Death, but you would mention this. Yet, it’s very rarely talked about within history, but there are people talking about it and there are people who have been measuring these improvements. And I think that you can see how, say in the last 200 years, lifespans have more than doubled and in fact, even in the poorest countries today, lifespans are longer than they were in the richest countries 200 years ago.

We can now almost all read whereas very few people could read 200 years ago. We’re vastly more wealthy. If you think about this threshold we currently use of extreme poverty, it used to be the case 200 years ago that almost everyone was below that threshold. People were desperately poor and now almost everyone is above that threshold. There’s still so much more that we could do, but there have been these really dramatic improvements.

Some people seem to think that that story of well-being in our lives getting better, of increasing freedoms and increasing empowerment through education and health, somehow runs counter to concern about existential risk: that one is an optimistic story and one's a gloomy story. But ultimately, my thinking is that it's precisely because these trends seem to point towards very optimistic futures that it's all the more important to ensure that we survive to reach such futures. If all the trends suggested that the future was just going to inevitably move towards a very dreary state that had hardly any value in it, then I wouldn't be that concerned about existential risk. So I think these things actually do go together.

And it's not just in terms of our own lives that things have been getting better. We've been making major institutional reforms. So while there is regrettably still slavery in the world today, there is much less than there was in the past, and we have been making progress in a lot of ways towards a more representative, more just and fair world, and there's a lot of room to continue in both of those things. And even then, a world that's kind of like the best lives lived today, a world that has very little injustice or suffering, is still only a lower bound on what we could achieve.

I think one useful way to think about this is in terms of your peak experiences: these moments of luminous joy or beauty, the moments when you've been happiest, whatever they may be, and how much better they are than your typical moments. My typical moments are by no means bad, but I would trade hundreds or maybe thousands of them for more of these peak experiences. And there's no fundamental reason why we couldn't spend much more of our lives at these peaks and have lives which are vastly better than our lives today, and that's assuming that we don't find even higher peaks and new ways to have even better lives.

It's not just about the well-being in people's lives either. If you have any kind of conception of the types of value that humanity creates, so much of our lives will be in the future, so many of our achievements will be in the future, so many of our societies will be in the future. There's every reason to expect that the greatest successes in all of these different ways will be in this long future as well. There's also a host of other types of experiences that might become possible. We know that humanity only has some kind of very small sliver of the space of all possible experiences. We see a set of colors within this three-dimensional color space.

We know that there are animals that see additional color pigments, that can see ultraviolet, can see parts of reality that we’re blind to. Animals with magnetic sense that can sense what direction north is and feel the magnetic fields. What’s it like to experience things like that? We could go so much further exploring this space. If we can guarantee our future and then we can start to use some of our peak experiences as signposts to what might be experienceable, I think that there’s so much further that we could go.

And then I guess you mentioned the possibilities of digital things as well. We don't know exactly how consciousness works. In fact, we know very little about how it works. We think that there are some suggestive reasons to think that minds, including consciousness, are computational things, such that we might be able to realize them digitally, and then there are all kinds of possibilities that would follow from that. You could slow yourself down, slow down the rate at which you're computed, in order to see progress zoom past you and experience a dizzying rate of change in the things around you, fast-forwarding through the boring bits and skipping to the exciting bits. One's life, if one was digital, could potentially be immortal, with backup copies, and so forth.

You might even be able to branch into being two different people: have some choice coming up, say, as to whether to stay on Earth or to go to a new settlement in the stars, and just split, with one copy going into this new life and one staying behind, or a whole lot of other possibilities. We don't know if that stuff is really possible, but it's just to give a taste of how we might be seeing only a very tiny amount of what's possible at the moment.

Lucas Perry: This is one of the most motivating arguments for me: the fact that the space of all possible minds is probably very large and deep, and that the kinds of qualia that we have access to are very limited. There's also the possibility of well-being not being contingent upon the state of the external world, which is always in flux and always impermanent. If we were able to have a science of well-being that was sufficiently well developed such that well-being was information and decision sensitive, but not contingent upon the state of the external world, that seems like a form of enlightenment, in my opinion.

Toby Ord: Yeah. Some of these questions are things that you don’t often see discussed in academia, partly because there isn’t really a proper discipline that says that that’s the kind of thing you’re allowed to talk about in your day job, but it is the kind of thing that people are allowed to talk about in science fiction. Many science fiction authors have something more like space opera or something like that where the future is just an interesting setting to play out the dramas that we recognize.

But other people use the setting to explore radical what-if questions, many of which are very philosophical and some of which are very well done. I think that if you're interested in these types of questions, I would recommend people read Diaspora by Greg Egan, which I think is the best and most radical exploration of this. At the start of the book, the setting is a particular digital system with digital minds, substantially in the future from where we are now, that have been running much faster than the external world. Their lives are lived thousands of times faster than those of the people who've remained flesh and blood, so culturally they are vastly further on, and you get to witness what it might be like to undergo various of these events in one's life. And in the particular setting it's in, it's a world where physical violence is against the laws of physics.

So rather than creating utopia by working out how to make people better behaved, the longstanding project of trying to make us all act nicely and decently to each other, which is clearly part of what's going on, there's this extra possibility that most people hadn't even thought about, because it's all digital. It's kind of like being on a web forum or something like that, where if someone attempts to attack you, you can just make them disappear, so that they can no longer interfere with you at all. It explores what life might be like in this kind of world, where the laws of physics are consent based and you can just make it so that people have no impact on you if you're not enjoying the kind of impact that they're having. It's a fascinating setting for exploring radically different ideas about the future, which very much may not come to pass.

But what I find exciting about these types of things is not so much that they're projections of where the future will be, but that if you take a whole lot of examples like this, they span a space that's much broader than you were initially thinking about for your probability distribution over where the future might go, and they help you realize that there are radically different ways that it could go. It's for this kind of expansion of your understanding of the space of possibilities, rather than as a direct prediction, that I would strongly recommend some Greg Egan for anyone who wants to get really into that stuff.

Lucas Perry: You sold me. I'm interested in reading it now. I'm also becoming mindful of our time here and have a bunch more questions I would like to get through, but before we do that, I also want to throw something out here. I've had a bunch of conversations recently on the question of identity; open individualism, closed individualism, and empty individualism are some of the views here.

From the long-termist perspective, I think these questions are deeply informative for how much or how little one may care about the deep future, or digital minds, or our descendants in a million years, or humans that are around a million years later. I think many people who aren't motivated by these arguments will basically just feel like it's not me, so who cares? And so I feel like these questions on personal identity really help tug and push on and subvert many of our commonly held intuitions about identity. So, this is sort of going off of your point about the potential of the future and how it's quite beautiful and motivating.

A little funny quip or thought there: I've sprung into Lucas consciousness and I'm quite excited, whatever "I" means, for there to be something like awakening into Dyson sphere consciousness in Andromeda or something. Maybe it's a bit of a wacky or weird idea for most people, but thinking more and more about the nature of personal identity makes thoughts like these more easily entertainable.

Toby Ord: Yeah, that's interesting. I haven't done much research on personal identity. In fact, the types of questions I've been thinking about when it comes to the book are more about how radical a change would be needed before it's no longer humanity, so kind of like the identity of humanity across time, as opposed to the identity of a particular individual across time. And because I'm already motivated by helping others, I'm thinking more about the question of why we should just help others in our own time as opposed to helping others across time. How do you direct your altruism, your altruistic impulses?

But you’re right that they could also be possibilities to do with individuals lasting into the future. There’s various ideas about how long we can last with lifespans extending very rapidly. It might be that some of the people who are alive now actually do directly experience some of this long-term future. Maybe there are things that could happen where their identity wouldn’t be preserved, because it’d be too radical a break. You’d become two different kinds of being and you wouldn’t really be the same person, but if being the same person is important to you, then maybe you could make smaller changes. I’ve barely looked into this at all. I know Nick Bostrom has thought about it more. There’s probably lots of interesting questions there.

Lucas Perry: Awesome. So could you give a short overview of natural or non-anthropogenic risks over the next century and why they’re not so important?

Toby Ord: Yeah. Okay, so the main natural risks I think we’re facing are probably asteroid or comet impacts and super volcanic eruptions. In the book, I also looked at stellar explosions like supernova and gamma ray bursts, although since I estimate the chance of us being wiped out by one of those in the next 100 years to be one in a billion, we don’t really need to worry about those.

But asteroids, it does appear that the dinosaurs were destroyed 65 million years ago by a major asteroid impact. It’s something that’s been very well studied scientifically. I think the main reason to think about it is A, because it’s very scientifically understood and B, because humanity has actually done a pretty good job on it. We only worked out 40 years ago that the dinosaurs were destroyed by an asteroid and that they could be capable of causing such a mass extinction. In fact, it was only in 1960, 60 years ago that we even confirmed that craters on the Earth’s surface were caused by asteroids. So we knew very little about this until recently.

And then we’ve massively scaled up our scanning of the skies. We think that in order to cause a global catastrophe, the asteroid would probably need to be bigger than a kilometer across. We’ve found about 95% of the asteroids between 1 and 10 kilometers across, and we think we’ve found all of the ones bigger than 10 kilometers across. We therefore know that since none of the ones were found are on a trajectory to hit us within the next 100 years that it looks like we’re very safe from asteroids.

Whereas supervolcanic eruptions are much less well understood. My estimate for the chance that we could be destroyed by one of those in the next 100 years is about one in 10,000. In the case of asteroids, we have looked into it so carefully that we've managed to check whether any are coming towards us right now, whereas for supervolcanic eruptions it can be hard to get these probabilities further down until we know more, so that's why my estimate is where it is. The Toba eruption was some kind of global catastrophe a very long time ago, though the early theories that it might have caused a population bottleneck and almost destroyed humanity don't seem to hold up anymore. It is still an illuminating example of continent-scale destruction and global cooling.

Lucas Perry: And so what is your total estimation of natural risk in the next century?

Toby Ord: About one in 10,000. All of these are order-of-magnitude estimates, but I think it's about the same level as where I put supervolcanic eruptions, and the other known natural risks I would put as much smaller. One of the reasons that we can give these low numbers is that humanity has survived for 2,000 centuries so far, and related species such as Homo erectus have survived for even longer. And so we just know that there can't be that many things that could destroy all humans on the whole planet from these natural risks.
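
To make that survival track-record argument concrete, here is a rough back-of-the-envelope bound (an illustrative sketch, not a calculation from the episode; the 2,000-century figure comes from the discussion above, and the 50% threshold is just an assumption chosen for illustration). If the per-century chance of a natural extinction event were some constant $p$, the probability of humanity surviving all 2,000 centuries so far would be $(1-p)^{2000}$. Requiring that this survival not be a huge fluke, say

$$(1-p)^{2000} \ge \tfrac{1}{2} \;\Longrightarrow\; p \le 1 - 2^{-1/2000} \approx \frac{\ln 2}{2000} \approx 3.5 \times 10^{-4},$$

gives a natural extinction risk of less than roughly one in 3,000 per century, consistent with the order-of-magnitude estimates discussed here.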

Lucas Perry: Right, the natural conditions and environment haven't changed so much.

Toby Ord: Yeah, that’s right. I mean, this argument only works if the risk has either been constant or expectably constant, so it could be that it’s going up and down, but we don’t know which then it will also work. The problem is if we have some pretty good reasons to think that the risks could be going up over time, then our long track record is not so helpful. And that’s what happens when it comes to what you could think of as natural pandemics, such as the coronavirus.

This is something where it’s got into humanity through some kind of human action, so it’s not exactly natural how it actually got into humanity in the first place and then its spread through humanity through airplanes, traveling to different continents very quickly, is also not natural and is a faster spread than you would have had over this long-term history of humanity. And thus, these kind of safety arguments don’t count as well as they would for things like asteroid impacts.

Lucas Perry: This class of risks, then, is risky, but less risky than the human-made risks, which are a result of technology; the fancy x-risk jargon for these is anthropogenic risks. Some of these are nuclear weapons, climate change, environmental damage, synthetic-bio-induced or AI-enabled pandemics, unaligned artificial intelligence, dystopian scenarios, and other risks. Could you say a little bit about each of these and why you view unaligned artificial intelligence as the biggest risk?

Toby Ord: Sure. Some of these anthropogenic risks we already face. Nuclear war is an example. What is particularly concerning is a very large scale nuclear war, such as between the U.S. and Russia. Nuclear winter models have suggested that the soot from burning buildings could get lifted up into the stratosphere, which is high enough that it wouldn't get rained out, so it could stay in the upper atmosphere for a decade or more and cause widespread global cooling, which would then cause massive crop failures, because there's not enough time between frosts to get a proper crop, and thus could lead to massive starvation and a global catastrophe.

Carl Sagan suggested it could potentially lead to our extinction, but the people currently working on this, while they are very concerned about it, don't suggest that it could lead to human extinction. That's not a scenario that they find very likely. And so even though I think that there is substantial risk of nuclear war over the next century, either an accidental nuclear war being triggered soon or perhaps a new Cold War leading to a new nuclear war, I would put the chance that humanity's potential is destroyed through nuclear war at about one in 1,000 over the next 100 years, which is about where I'd put it for climate change as well.

There is debate as to whether climate change could really cause human extinction or a permanent collapse of civilization. I think the answer is that we don't know. It's similar with nuclear war. But they're both such large changes to the world, these kinds of unprecedentedly rapid and severe changes, that it's hard to be more than 99% confident that if that happens we'd make it through, and so there's a difficult-to-eliminate risk that remains.

In the book, I look at the very worst climate outcomes: how much carbon is there in the methane clathrates under the ocean and in the permafrost? What would happen if it was released? How much warming would there be? And then what would happen if you had very severe amounts of warming, such as 10 degrees? I try to sketch out what we know about those things, and it is difficult to find direct mechanisms that suggest we would go extinct or that we would collapse our civilization in a way from which it could never be restarted, despite the fact that civilization arose five times independently in different parts of the world already, so we know that it's not a fluke to get it started again. So it's difficult to see the direct reasons why it could happen, but we don't know enough to be sure that it can't happen. In that sense, there's still an existential risk.

Then I also have a kind of catch-all for other types of environmental damage, all of these other pressures that we're putting on the planet. I think it would be too optimistic to be sure that none of those could potentially cause a collapse from which we can never recover either. Although when I look at particular examples that are suggested, such as the collapse of pollinating insects and so forth, it's hard to see how they could cause this. So it's not that I am just seeing problems everywhere, but I do think that there's something to this general style of argument that unknown effects of the stressors we're putting on the planet could be the end for us.

So I’d put all of those kind of current types of risks at about one in 1,000 over the next 100 years, but then it’s the anthropogenic risks from technologies that are still on the horizon that scare me the most and this would be in keeping with this idea of humanity’s continued exponential growth in power where you’d expect the risks to be escalating every century. And I think that the ones that I’m most concerned about, in particular, engineered pandemics and the risk of unaligned artificial intelligence.

Lucas Perry: All right. I think listeners will be very familiar with many of the arguments around why unaligned artificial intelligence is dangerous, so I think that we could skip some of the crucial considerations there. Could you touch a little bit then on the risks of engineered pandemics, which may be newer, and then give a bit of your total risk estimate for this class of risks?

Toby Ord: Ultimately, we do have some kind of a safety argument in terms of the historical record when it comes to these naturally arising pandemics. There are ways that they could be more dangerous now than they could have been in the past, but there are also many ways in which they're less dangerous. We have antibiotics. We have the ability to detect these threats in real time, sequence the DNA of the things that are attacking us, and then use our knowledge of quarantine and medicine in order to fight them. So we have some reasons to expect safety on that front.

But there are cases of pandemic pathogens being created to be even more spreadable or even more deadly than those that arise naturally, because the natural ones are not being optimized to be deadly. Their deadliness only persists if it's in service of them spreading and surviving, and normally killing your host is a big problem for that. So there's room there for people to try to engineer things to be worse than the natural ones.

One case is scientists looking to fight disease. Ron Fouchier, with the bird flu, deliberately made a more infectious version of it that could be transmitted directly from mammal to mammal. He did that because he was trying to help, but it was, I think, very risky and a very bad move, and most of the scientific community didn't think it was a good idea. He did it in a biosafety level 3 enhanced lab, which is not the highest level of biosecurity, that's BSL-4, and even at the highest level there has been an escape of a pathogen from a BSL-4 facility. So these labs aren't safe enough, I think, to be able to work on newly enhanced things that are more dangerous than anything nature can create, in a world where so far the biggest catastrophes that we know of were caused by pandemics. So I think that it's pretty crazy to be working on such things until we have labs from which nothing has ever escaped.

But that’s not what really worries me. What worries me more is bio weapons programs and there’s been a lot of development of bio weapons in the 20th Century, in particular. The Soviet Union reportedly had 20 tons of smallpox that they had manufactured for example, and they had an accidental release of smallpox, which killed civilians in Russia. They had an accidental release of anthrax, blowing it out across the whole city and killing many people, so we know from cases like this, that they had a very large bioweapons program. And the Biological Weapons Convention, which is the leading institution at an international level to prohibit bio weapons is chronically underfunded and understaffed. The entire budget of the BWC is less than that of a typical McDonald’s.

So this is something where humanity doesn't have its priorities in order. Countries need to work together to step that up and to give it more responsibilities, to actually do inspections and make sure that none of them are using bioweapons. And then I'm also really concerned by the dark side of the democratization of biotechnology: the fact that the rapid developments we've made with things like gene drives and CRISPR, two huge breakthroughs, perhaps Nobel Prize worthy, were in both cases replicated within two years by university students in science competitions.

So we now have a situation where, two years earlier, there's like one person in the world who could do it, or no one, then one person, and then within a couple of years we have perhaps tens of thousands of people who could do it, soon millions. And that pool of people will eventually include people like those in the Aum Shinrikyo cult that was responsible for the sarin gas attack in the Tokyo subway, one of whose active goals was to destroy everyone in the world. Once enough people can do these things and could make engineered pathogens, you'll get someone with this terrible but massively rare motivation, or perhaps even just a country like North Korea that wants to have a kind of blackmail policy to make sure that no one ever invades. That's why I'm worried about this. These rapid advances are empowering us to make really terrible weapons.

Lucas Perry: All right, so wrapping things up here: how do we then safeguard the potential of humanity and Earth-originating intelligent life? You seem to give advice on high-level strategy, policy, and individual action, and this is all contextualized within this grand plan for humanity, which is that we reach existential security by getting to a place where existential risk is decreasing every century, that we then enter a period of long reflection to contemplate and debate what is good and how we might explore the universe and optimize it to express that good, and that we then execute that and achieve our potential. So again, how do we achieve all this, how do we mitigate x-risk, how do we safeguard the potential of humanity?

Toby Ord: That’s an easy question to end on. So what I tried to do in the book is to try to treat this at a whole lot of different levels. You kind of refer to the most abstract level to some extent, the point of that abstract level is to show that we don’t need to get ultimate success right now, we don’t need to solve everything, we don’t need to find out what the fundamental nature of goodness is, and what worlds would be the best. We just need to make sure we don’t end up in the ones which are clearly among the worst.

The point of looking further onwards with the strategy is just to see that we can set some things aside for later. Our task now is to reach what I call existential security, and that involves an idea that will be familiar to many people who think about existential risk, which is to look at particular risks, work out how to manage them, and avoid falling victim to them, perhaps by being more careful with technology development, perhaps by creating protective technologies. For example, better biosurveillance systems to understand whether bioweapons have been released into the environment, so that we could contain them much more quickly, or, say, better work on alignment in AI research.

But it also involves not just fighting fires, but trying to become the kind of society where we don't keep lighting these fires. I don't mean that we don't develop the technologies, but that we build in the responsibility for making sure they do not develop into existential risks as part of the cost of doing business. We want to get the fruits of all of these technologies, both for the long term and for the short term, but we need to be aware that there's a shadow cost when we develop new things and blaze forward with technology: a shadow cost in terms of risk, and that's not normally priced in. We just kind of ignore it, but eventually it will come due. If we keep developing things that produce these risks, eventually it's going to get us.

So we need to develop our wisdom, both in terms of changing our common-sense conception of morality to take this long-term future seriously, and our debts to our ancestors seriously, and we also need international institutions to help avoid some of these tragedies of the commons and so forth, to find the cases where we'd all be prepared to pay the cost to get the security if everyone else were doing it too, but where we're not prepared to just do it unilaterally. We need to work out mechanisms where we can all go into it together.

There are questions there in terms of policy. We need more policy-minded people within the science and technology space, people with an eye to the governance of their own technologies. This can be done within professional societies, but we also need more technology-minded people in the policy space. We often bemoan the fact that a lot of people in government don't really know much about how the internet works or how various technologies work, but part of the problem is that the people who do know how these things work don't go into government. It's not just that you can blame the people in government for not knowing about your field; maybe some of the people who do know about this field should actually work in policy.

So I think we need to build that bridge from both sides, and I suggest a lot of particular policy things that we could do. A good example, in terms of how concrete and simple it can get, is that we renew the New START disarmament treaty. This is due to expire next year, and as far as I understand, the U.S. and Russia don't have plans to actually renew this treaty, which is crazy, because it's one of the things that's most responsible for nuclear disarmament. So making sure that we sign that treaty again is a very actionable point that people can motivate around.

And I think that there’s stuff for everyone to do. We may think that existential risk is too abstract and can’t really motivate people in the way that some other causes can, but I think that would be a mistake. I’m trying to sketch a vision of it in this book that I think can have a larger movement coalesce around it and I think that if we look back a bit when it came to nuclear war, the largest protest in America’s history at that time was against nuclear weapons in Central Park in New York and it was on the grounds that this could be the end of humanity. And that the largest movement at the moment, in terms of standing up for a cause is on climate change and it’s motivated by exactly these ideas about irrevocable destruction of our heritage. It really can motivate people if it’s expressed the right way. And so that actually fills me with hope that things can change.

And similarly, when I think about ethics, I think about how in the 1950s there was almost no consideration of the environment within our conception of ethics. It was considered totally outside of the domain of ethics or morality, and not really considered much at all. And the same with animal welfare; it was scarcely considered to be an ethical question at all. Now, these are both key things that people are taught in their moral education at school. And we have an entire ministry for the environment; within 10 years of Silent Spring coming out, I think all but one English-speaking country had a cabinet-level position on the environment.

So I think that we really can have big changes in our ethical perspective, but we need to start an expansive conversation about this and start unifying these things together: not to be just like the anti-nuclear movement or the climate movement, each fighting a particular fire, but to be aware that if we want to actually get out ahead of these things preemptively, we need to expand that into this general conception of existential risk and safeguarding humanity's long-term potential. I'm optimistic that we can do that.

That’s why I think my best guess is that there’s a one in six chance that we don’t make it through this Century, but the other way around, I’m saying there’s a five in six chance that I think we do make it through. If we really played our cards right, we could make it a 99% chance that we make it through this Century. We’re not hostages to fortune. We humans get to decide what the future of humanity will be like. There’s not much risk from external forces that we can’t deal with such as the asteroids. Most of the risk is of our own doing and we can’t just sit here and bemoan the fact we’re in some difficult prisoner’s dilemma with ourselves. We need to get out and solve these things and I think we can.

Lucas Perry: Yeah. This point about moving from the particular motivation and excitement around climate change and nuclear weapons issues to a broader civilizational concern with existential risk seems to be a crucial step in developing the kind of wisdom that we talked about earlier. So yeah, thank you so much for coming on, and thanks for your contribution to the field of existential risk with this book. It's really wonderful and I recommend listeners read it. If listeners are interested, where's the best place to pick it up? How can they follow you?

Toby Ord: You could check out my website at tobyord.com. You could follow me on Twitter @tobyordoxford or I think the best thing is probably to find out more about the book at theprecipice.com. On that website, we also have links as to where you can buy it in your country, including at independent bookstores and so forth.

Lucas Perry: All right, wonderful. Thanks again for coming on, and also for writing this book. I think it's really important for helping to shape the conversation and understanding around this issue in the world, and I hope we can keep nailing down the right arguments and helping to motivate people to care about these things. So yeah, thanks again for coming on.

Toby Ord: Well, thank you. It’s been great to be here.