Thank you for your high-quality engagement on this and for including the clear statement!
I think my most substantial disagreement with you on the difficulty of a shutdown is related to longtermism. Most normal people would not take a 5% risk of destroying the world in order to greatly improve their lives and the lives of their children. That isn’t because they are longtermist, but primarily because they are simply horrified by the concept of destroying the world.
It is in fact almost entirely utilitarians who are in favor of taking that risk, because they are able to justify it to themselves after doing some simplified calculation. Ordinary people, rational or irrational, who just want good things for themselves and their kids usually don’t want to risk their own lives, certainly don’t want to risk their kids’ lives, and it wouldn’t cross their mind to risk other people’s kids’ lives, when put in stark terms.
“Human civilization should not be made to collapse in the next few decades” and “humanity should survive for a good long while” are longtermist positions, but they are also what >90% of people in every nation on earth already believe.
Polls suggest that most normal people expect AGI to be bad for them and they don’t want it. I’m speculating more here, but I think the typical expectation is something like “AGI will put me out of a job; billionaires will get even richer and I’ll get nothing.”
This isn’t terribly decision-relevant except for deciding what type of alignment work to do. But that does seem nontrivial. My bottom line is: push for pause/slowdown, but don’t get overoptimistic. Simultaneously work toward alignment on the current path, as fast as possible, because that might well be our only chance.
To your point:
I take your point on the standard reasoning. I agree that most adults would turn down even 95-5 odds, 19 to 1, of improving their lives against a small chance of destroying the world.
But I’m afraid those with decision-making power would take far worse odds in private, where it matters. That’s because for them, it’s not just a better life; it’s immortality vs. dying soon. And that tends to change decision-making.
I added and marked an edit to make this part of the logic explicit.
Most humans with decision-making power, e.g. in government, are 50+ years old, and mostly older, since power tends to accumulate at least until sharp cognitive decline sets in. There’s a pretty good chance they will die of natural causes if ASI isn’t created to do groundbreaking medical research within their lifetimes.
That’s on top of any actual nationalistic tendencies, or fears of being killed, enslaved, tortured, or worse, mocked, by losing the race to one’s political enemies covertly pursuing ASI.
And that’s on top of worrying that sociopaths (or similarly cold/selfish decision-makers) are over-represented in the halls of power. Those arguments seem pretty strong to me, too.
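To make that personal-stakes asymmetry concrete, here’s a toy expected-value sketch (a minimal illustration with invented numbers, not estimates of anything): the same gamble that looks terrible against an ordinary person’s baseline can look attractive to someone whose default outcome is dying of old age before ASI-driven medicine arrives.

```python
# Toy expected-value sketch (all numbers invented for illustration) of why the
# same gamble can look very different depending on what "business as usual"
# means for the person deciding.

P_DOOM = 0.05  # assumed chance the ASI gamble destroys the world

# Ordinary person: life is already decent, so the upside is modest and the
# downside (everyone dies, including their kids) is enormous.
ordinary = {"status_quo": 1.0, "win": 1.5, "lose": -100.0}

# Aging decision-maker reasoning selfishly: the status quo already ends in
# death within decades, while winning means radical life extension, so the
# downside barely moves their personal baseline.
decision_maker = {"status_quo": 0.2, "win": 10.0, "lose": -1.0}

def take_the_gamble(values: dict, p_doom: float = P_DOOM) -> bool:
    """Return True if the expected value of gambling beats the status quo."""
    ev_gamble = (1 - p_doom) * values["win"] + p_doom * values["lose"]
    return ev_gamble > values["status_quo"]

print(take_the_gamble(ordinary))        # False: 0.95*1.5 + 0.05*(-100) = -3.575 < 1.0
print(take_the_gamble(decision_maker))  # True:  0.95*10  + 0.05*(-1)   =  9.45  > 0.2
```

The point isn’t the particular numbers; it’s that a purely selfish calculation flips sign once the status quo itself looks like death, which is the asymmetry I’m worried about above.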
How this would unfold is highly unclear to me. I think it’s important to develop gears-level models of how these processes might happen, as Raemon suggests in this post.
My guess is that covert programs are between likely and inevitable. Public pressure will be for caution; the private opinions of the powerful will be much harder to predict.
As for the public statements, it works just fine to say, or more likely to convince yourself, that you think alignment is solvable with little chance of failure, and that patriotism and horror over (insert enemy ideology here) controlling the future are your motivations.