People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.
Part of the problem with these two heuristics is that whether an apocalypse happens often depends on whether people took the risk of it happening seriously. We absolutely could have had a nuclear holocaust in the '70s and '80s; one of the reasons we didn't is that people took the threat seriously and took steps to avert it.
And, of course, whether a time slice turns out to be the most important in history will depend, in retrospect, on whether you actually had an apocalypse. The '70s would seem a lot more momentous if we had launched all of our nuclear warheads at each other.
For my part, my bet would be on something like:
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability, enabling humanity to solve alignment, slow down the further descent into AGI, etc. (Not in principle mutually exclusive with all the other answers.)
But more specifically:
P. Red-teams evaluating early AGIs demonstrate the risks of non-alignment in a very vivid way; they demonstrate, in simulation, dozens of ways in which the AGI would try to destroy humanity. This has an effect on world leaders similar to observing nuclear testing: it scares everyone into recognizing the risk, and everyone stops improving AGI capabilities until they've figured out how to keep it from killing everyone.
But it does risk giving up something. Even the average tech person on a forum like Hacker News still thinks the risk of an AI apocalypse is so remote that only a crackpot would take it seriously. Their priors against anyone of sense taking it seriously are so strong that any mention of safety seems to them a fig-leaf excuse to monopolize control for financial gain, about as believable as Putin's claims that he's liberating Ukraine from Nazis. (See my recent attempt to introduce the idea here.) The average person on the street is even further from this, I think.
The risk, then, of giving up "optics" is that you lose entirely whatever influence you may have had; you're labelled a crackpot and nobody takes you seriously. You also risk damaging the influence of other people who are trying to be more conservative. (NB I'm not saying this will happen, but it's a risk you have to consider.)
For instance, personally I think the reason so few people take AI alignment seriously is that we haven't actually seen anything all that scary yet. If there were demonstrations of GPT-4, in simulation, murdering people due to mis-alignment, then this sort of pause would be a much easier sell. Going full-bore "international treaty to control access to GPUs" now introduces the risk that, when GPT-6 is shown to murder people due to mis-alignment, people will take it less seriously, because they've already decided AI alignment people are all crackpots.
I think the chances of an international treaty to control GPUs at this point are basically zero. I think our best bet for actually getting people to take an AI apocalypse seriously is to demonstrate an un-aligned system harming people (hopefully only in simulation), in a way that people can immediately see could extend to destroying the whole human race if the AI were more capable. (It would also give all those AI researchers something more concrete to do: figure out how to prevent this AI from doing this sort of thing, and figure out other ways to get this AI to do something destructive.) Arguing to slow down AI research for other reasons (for instance, to allow society to adapt to the changes we've already seen) would also buy people more time to develop techniques for probing, and perhaps demonstrating, catastrophic alignment failures.