I’m being dramatic by calling it a dying world, but everyone’s worried about the future. Climate change, running out of fossil fuels, social problems, death of current humans, etc. Likely not an actual extinction of humanity, but hard times for sure. AI going well could be the easy way out of all that, if it’s actually as big as we think it might be. I think the accelerationists would not be as keen if the world was otherwise stable.
Another way of saying it is that our current reckless trajectory only makes sense if you view it as a gambit to stave off the rest of the things that are coming for us (which are mostly the fruits of our past and current recklessness). I’m sympathetic to the thought, even if it might also kill us.
Re: doomers and ethicists agreeing: The position that the authors of The AI Con take is that the doomers feed the mythos of AI as godlike by treating it as a doomsday machine. This almost-reverence fuels the ambitions and excitement of the accelerationists, while also reducing enthusiasm for tackling the more mundane challenges.
Yudkowsky still wants resources to go towards solving alignment, and if AI is a dud, that wouldn’t be necessary. I view the potential animosity between ethicists and doomers as primarily a fight over attention/funding. Ethicists see themselves as being choked out by people working towards solving fictional problems, and that creates resentment and dismissal. And doomers do often think focusing on the mundane harms is a waste of time. Ideally the perspectives would be coherent/compatible, and finding that bridge, or at least holding space for both, is the aim of this post.
our current reckless trajectory only makes sense if you view it as a gambit to stave off the rest of the things that are coming for us
At this point, I think the AI race is driven by competitive dynamics. AI looks like a path to profit and power, and if you don’t reach for it, someone else will. For those involved, this removes the need to even ask whether to do it: it’s a foregone conclusion that someone will. The only thing I see even putting a dent in these competitive dynamics is if something happens that terrifies even people like Musk, Trump, and Xi — terrifying enough that they would put aside their differences and truly organize a halt to the race.
You’re definitely challenging a key piece of my perspective here, and I’ve thought a good bit about how to respond. What I’ve come up with is this: I think all of us are involved. The labs don’t exist in a vacuum, and public opinion does have an impact. So I think looking at scopes of agency larger than the individual is a helpful thing to do.
In this piece I’m describing the choice that is getting made on behalf of humanity, from the lens of humanity. Because it really does affect all of us. But that’s also why I take a hands-off kind of approach here, because it’s not necessarily my role to say or know what I think humanity should be doing. I’m just an ignorant grain of sand.