Every known plan for a post-AGI world is one which I do not expect my loved ones to survive.
I think your life expectancy and that of your loved ones (at least from a mundane perspective) is longer if AGI is developed than if it isn’t.
Btw, the OGI model is not primarily intended for a post-AGI world, but rather for a near-term or intermediate stage.
However, I agree that if somebody thinks that we should completely stop AGI then the OGI model would presumably not be the way to go. It is presented as an alternative to other governance models for the development of AGI (such as a Manhattan Project, CERN, Intelsat, etc.). This paper doesn’t address the desirability of developing AGI.
to shut down frontier AI development and preserve our very lives — a thing that unlike alignment we actually know is possible to achieve —
Fwiw, I think it’s more likely that AI will be aligned than that it will be shut down.
I am grateful that you have spread awareness of the risk of human extinction from AI. I am genuinely saddened that you seem to be working to bring it about.
One has to take the rough with the smooth… (But really, you seem to be misattributing motive to me here.)
If we here who know the stakes are not united in our call to shut down frontier AI development and preserve our very lives — a thing that unlike alignment we actually know is possible to achieve — then what was the rationalist project ever about?
I see it more like a flickering candle straining to create a small patch of visibility in an otherwise rather dark environment. Strong calls for unanimity and falling into line with a political campaign message are a wind that might snuff it out.
I think your life expectancy and that of your loved ones (at least from a mundane perspective) is longer if AGI is developed than if it isn’t.
You must have extreme confidence about this, or else your attitude about AGI would be grossly cavalier. Were it put to me, I would never take a bet, even at 99-to-1 odds, that humanity will survive and flourish beyond my wildest dreams rather than quickly come to a permanent end. That is a terrible deal. I genuinely love humanity, the monkeys that we are, and it is not my place to play games with other people’s lives. I have not asked for a longer life, and I especially would not do so in exchange for even a small risk of immediate death. Most importantly, if I never asked everyone else what risks they are willing to shoulder, then I shouldn’t even consider rolling dice on their behalf.
Fwiw, I think it’s more likely that AI will be aligned than that it will be shut down.
I am aware that you think this, and I struggle to understand why. There are tractable frameworks for global governance that stand a good chance of preventing the emergence of AGI in the near term, which would leave time for more work on more robust governance as well as on alignment. There are no such tractable frameworks for AGI alignment. There is not even a convincing proof that AGI alignment is solvable in principle. Granting that it is solvable, why hasn’t it already been solved? How can you have such extreme confidence that it will be solved within the next few years, when we are by many measures no closer to a solution than we were two decades ago?
you seem to be misattributing motive to me here
I do not mean to attribute motive, only to point out the difference between what it appears you are doing and what it appears you think you are doing. I will eat crow if you can point to where you have said that if AGI development can be halted in principle (until it is shown to be safe), then it should be. That AGI should not be built if it cannot be built safely is a minimum necessary statement for sanity on this issue, and it requires only that you imagine you could be incorrect about the ease of very soon solving a problem that no one knows how to begin to solve.
I see it more like a flickering candle straining to create a small patch of visibility in an otherwise rather dark environment. Strong calls for unanimity and falling into line with a political campaign message are a wind that might snuff it out.
I can empathize with the sentiment, but this is an outdated view. The public overwhelmingly want AI regulation, want to slow AI down, want AI companies to have to prove their models are safe, and want an international treaty. Salience is low, but rising. People take action when they understand the risks to their families. Tens of thousands of people have contacted their political representatives to demand regulation of AI, over ten thousand of whom have done so through ControlAI’s tool alone. Speaking of ControlAI, they have the support of over 50 UK lawmakers and are making strides in their ongoing US campaign. In a much shorter campaign, PauseAI UK secured the support of 60 parliamentarians in calling for Google DeepMind to honor its existing commitments. The proposed 10-year moratorium on US states being allowed to regulate AI was defeated, at least in part because of hundreds of phone calls made to congressional staffers by activist groups. US Congressman Raja Krishnamoorthi, the ranking member of the Select Committee on the CCP, recently had this to say:
Whether it’s American AI or Chinese AI, it should not be released until we know it’s safe. … This is just common sense.
These beginnings could never have happened through quiet dealings and gently laid plans. They happened because people were honest and loud. The governance problem (for genuine, toothed governance) has been very responsive to additional effort, in a way that the alignment problem never has. An ounce of genuine outrage has been significantly more productive than a barrel of stratagems and backroom dealings.
The light of humanity’s resistance to extinction is not a flickering candle. It is a bonfire. It doesn’t need to be shielded now, if indeed it ever did. It needs the oxygen of a rushing wind.
“I think your life expectancy and that of your loved ones (at least from a mundane perspective) is longer if AGI is developed than if it isn’t.”
You must have extreme confidence about this, or else your attitude about AGI would be grossly cavalier.
Regarding attitudes about AGI, that’s probably a bigger topic for another time. But regarding your and your loved ones’ life expectancy, from a mundane perspective (which leaves out much that is actually very relevant), it would presumably be some small number of decades without AGI—less if the people you love are elderly or seriously ill. Given aligned AGI, it could be extremely long (and immensely better in quality). So even if we assume that AGI would arrive soon unless stopped (e.g. in 5 years) and would result in immediate death if unaligned (which is very far from a given), then it seems like your life expectancy would be vastly longer if AGI developed even if the chance of alignment were quite small.
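To make the arithmetic behind that claim explicit, here is a minimal sketch. The numbers are assumptions chosen purely for illustration (roughly 30 remaining years without AGI, AGI arriving in 5 years, immediate death if unaligned, and on the order of 10,000 further years if aligned), not estimates. Writing $p$ for the probability of alignment:

$$\mathbb{E}[\text{years} \mid \text{AGI}] \;=\; p\,(5 + 10{,}000) + (1-p)\cdot 5 \;=\; 5 + 10{,}000\,p,$$

which exceeds the 30-year no-AGI baseline whenever $p > 0.25\%$. That is the sense in which a mundane life-expectancy comparison can favor AGI development even if the chance of alignment were quite small.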
These beginnings could never have happened through quiet dealings and gently laid plans. They happened because people were honest and loud.
I don’t doubt that loud people sometimes make things happen, though all too often the things they make happen turn out to have been for the worse. For my own part, I don’t feel there’s such a deficit of loud people in the world that it is my calling to rush out and join them. This is partly a matter of personality, but I hope there’s a niche from which one can try to contribute in a more detached manner (and that there is value in a “rationalist project” that seeks to protect and facilitate that).
So even if we assume that AGI would arrive soon unless stopped (e.g. in 5 years) and would result in immediate death if unaligned (which is very far from a given), then it seems like your life expectancy would be vastly longer if AGI developed even if the chance of alignment were quite small.
This naive expected value calculation completely leaves out what it actually means for humanity to come to an end: if you ever reach zero, you cannot keep playing the game. As I said, I would not take this chance even if the odds were 99 to 1 in favor of it going well. It would be deeply unethical to create AGI under that level of uncertainty, especially since the uncertainty may be reduced given time, and our current situation is almost certainly not that favorable.
I am not so egoistic as to value my own life (and even the lives of my loved ones) highly enough to make that choice on everyone else’s behalf, and on behalf of the whole future of known sentient life. But I also don’t personally have any specific wish to live a very long life. I appreciate my life for what it is, and I don’t see any great need to improve it to a magical degree or live for vastly longer. There are people who individually have such terrible lives that it is rational for them to take large risks upon themselves to improve their circumstances, and there are others who simply have a very high appetite for risk. Those situations do not apply to most people.
We have been monkeys in shoes for a very long time. We have lived and suffered and rejoiced and died for eons. It would not be a crime against being for things to keep happening roughly the way they always have, with all of the beauty and horror we have always known. What would be a crime against being is to risk a roughly immediate, permanent end to everything of value, for utopian ideals that are shared by almost none of the victims. Humanity has repeatedly warned about this in our stories about supervillains and ideologue despots alike.
Under our shared reality, there is probably no justification for your view that I would ever accept. In that sense, it is not important to me what your justification is. On the other hand, I do have a model of people who hold your view, which may not resemble you in particular:
I view the willingness to gamble away all of value itself as an expression of ingratitude for the value that we do have, and I view the willingness to do this on everyone else’s behalf as a complete disregard for the inviolable consent of others.