So gotta keep in mind that probabilities are in your head (I flip a coin, it’s already tails or heads in reality, but your credence should still be 50-50). I think it can be the case that we were always doomed even if we weren’t yet justified in believing that.
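To make the coin point concrete, here’s a little sketch (my own toy example, with made-up numbers): each flip’s outcome is already fixed before the observer looks, yet an observer with no further information does best by assigning 0.5, and that credence is well calibrated over many flips.

```python
import random

random.seed(0)

# Each outcome below is fixed ("already tails or heads in reality") before
# anyone inspects it; the observer just doesn't know which way it landed.
flips = [random.choice(["heads", "tails"]) for _ in range(100_000)]

# With no information beyond "it's a fair coin", the observer's credence
# in "heads" for every individual flip is 0.5 ...
credence = 0.5

# ... and that map-level probability matches the territory-level frequency.
observed = sum(f == "heads" for f in flips) / len(flips)
print(f"credence: {credence:.3f}, observed frequency of heads: {observed:.3f}")
```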
Alternatively, it feels like this pushes up against philosophies of determinism and free will. The whole “well, the algorithm is a written program and it’ll choose what it chooses deterministically” thing, but also, from the inside, there are choices.
I think a reason to have been uncertain before and to update more now is just that timelines seem short. I used to have more hope because I thought we had a lot more time to solve both technical and coordination problems, and then there was the DL/transformers surprise. You make a good case that maybe 50 more years wouldn’t make a difference, but I don’t know; I wouldn’t have as high a p-doom if we had that long.
I know that probabilities are in the map, not in the territory. I’m just wondering if we were ever sufficiently positively justified to anticipate a good future, or if we were just uncertain about the future and then projected our hopes and dreams onto this uncertainty, regardless of how realistic that was. In particular, the Glorious Transhumanist Future requires the same technological progress that can result in technological extinction, so I question whether the former should’ve ever been seen as the more likely or default outcome.
I’ve also wondered about how to think about doom vs. determinism. A related thorny philosophical issue is anthropics: I was born in 1988, so from my perspective the world couldn’t have possibly ended before then, but that’s no defense whatsoever against extinction after that point.
Re: AI timelines, again this is obviously speaking from hindsight, but I now find it hard to imagine how there could’ve ever been 50-year timelines. Maybe specific AI advances could’ve come a bunch of years later, but conversely, compute progress followed Moore’s Law and, IIRC, showed no sign of slowing down, because compute is universally economically useful. And so even if algorithmic advances had been slower, compute progress could’ve made up for that to some extent.
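As a back-of-the-envelope illustration of why that compounding matters (the doubling period here is an illustrative assumption of mine, not a measured figure):

```python
# Rough sketch, assuming a Moore's-Law-style doubling of available compute
# every ~2 years; the exact period is an assumption for illustration only.
doubling_period_years = 2.0
horizon_years = 50

compute_multiplier = 2 ** (horizon_years / doubling_period_years)
print(f"~{compute_multiplier:.1e}x more compute over {horizon_years} years")
# ~3.4e+07x, i.e. tens of millions of times more raw compute, which is a lot
# of slack for slower algorithmic progress to be absorbed by.
```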
Re: solving coordination problems: some of these just feel way too intractable. Take the US Constitution, which governs your political system: IIRC it was meant to be frequently updated in constitutional conventions, but instead the political system ossified, and the last meaningful amendment (the 18-year voting age) was ratified in 1971, 54 years ago. Or take the US Senate, which made itself increasingly ungovernable with the filibuster; even the current Republican-majority Senate didn’t deign to abolish it. Etc. Our political institutions lack automatic repair mechanisms, so they inevitably deteriorate over time, when what we needed was for them to improve over time instead.
I’m just wondering if we were ever sufficiently positively justified to anticipate a good future, or if we were just uncertain about the future and then projected our hopes and dreams onto this uncertainty, regardless of how realistic that was.
I think that’s a very reasonable question to be asking. My answer is that I think it was justified, but not obvious.
My understanding is that it wasn’t taken for granted that simply adding more compute would yield more progress until the deep learning revolution; even then, people updated further on specific additional data points like transformers, and even now people sometimes say “we’ve hit a wall!”
Maybe with more time the US system would have had time to collapse and be replaced with something fresh and equal to the challenges. To the extent that the US was founded and set in motion by a small group of capable, motivated people, it seems not crazy to think a small-to-large group of such people could enact effective plans given a few decades.
One more virtue-turned-vice for my original comment: pacifism and disarmament. The world would be a more dangerous place if more countries had more nukes etc., and we might well have had a global nuclear war by now. But also, more war means more institutional turnover, and the destruction and reestablishment of institutions is about the only mechanism of institutional reform that actually works. Furthermore, if any country could threaten war or MAD against AI development, that might be one of the few things that could actually enforce an AI Stop.