“That would be a silly way for everyone on Earth to die, if nobody dared to talk about the danger, or argue high estimates of that danger, and it happened without any effort at stopping it.”
“So let’s not die! Let’s save everyone!”
I deeply appreciate the core sentiments and logic in this post. If one has a major interest in preventing machine superintelligence from leading to human extinction, then one might also hold the moral stance that non-violence is essential to preserving all human life. However, humanity currently confronts two simultaneous extinction-level events unfolding at different rates in real time (the climate crisis and rapid advancements in AI). In the face of such enormous stakes, it is not surprising to me that the logic and strategy necessary for effectively slowing down AI advancement are being overshadowed by emotion. Violence is not solely an illogical attempt to advance a cause. Sometimes it stems from grief or mental illness. Sometimes it is an expression of powerlessness or outrage over decisions and conditions established by the few but whose consequences are experienced by the many (or all!).
In response to the powerlessness and grief of this moment in human history, Eliezer’s Unteachable Methods of Sanity contain relevant insights.
Yet the persuasive arc of this post speaks more to those who seek to take strategic action to prevent unnecessary harm from superintelligence. For that specific segment of readers, I agree it is essential to emphasize, for anyone considering violence as a logical or strategic action, that non-violence in justice movements has historically been more effective, and also that the extensive infrastructure supporting AI development cannot be easily overcome by targeting one key figure or one data center.
I am rooting for humanity, which for me means both advocating for a slowdown of AI development and, to the extent the rise of AI is increasingly out of our hands, hoping for as graceful a collective death as possible. From that standpoint, these are the guiding questions I have, and that I look forward to reading about from writers more well-versed in the theory and practice of AI safety.
1) What is the timeframe in which the “imposition of law” and a treaty agreement would need to be adopted in order to have a significant chance of preventing (or at least slowing down) risky AI advancement? For example, based on the doubling rates reported by METR, is it accurate to say a halt is needed immediately, with humanity’s prospects diminishing notably if AI development continues over the next seven months?
2) How can people from all levels of societal influence (institutional leaders, community leaders, everyday citizens) best contribute to communicating the urgency and options remaining for humanity at this current moment?
3) Given that the window for human intervention is rapidly closing, and that AI advancement to date (particularly the proliferation of agentic AI) poses new risks of societal and economic volatility, what are the guiding principles by which humans can at least have an honorable death as a species? (i.e., What can we do to decrease the likelihood of “silly ways to die,” such as chaos and the unraveling of society?)