Look man, I am not arguing (and have not argued on this thread) that we should not be concerned about AI risk. 10% chance is a lot! You don’t need to condescendingly lecture me about “picturing suffering”. Maybe go take a walk or something, you seem unnecessarily upset.
In many of the scenarios that you’ve finally agreed to sketch, I personally will know about the impending AGI doom a few years before my death (it takes a long time to build enough robots to replace humanity). That is not to say there is anything I could do about it at that point, but it’s still interesting to think about, as it is quite different from what the AI-risk types usually have us believe. E.g. if I see an AI take over the internet and convince politicians to give it total control, I will know that death will likely follow soon. Or, if ever we build robots that could physically replace humans for the purpose of coal mining, I will know that death by AGI will likely follow soon. These are important fire alarms, to me personally, even if I’d be powerless to stop the AGI. I care about knowing I’m about to die!
I wonder if this is what you imagined when we started the conversation. I wonder if, despite your hostility, you’ve learned something new here: that you will quite possibly spend your last few years yelling at politicians (or maybe joining terrorist operations to bomb computing clusters?) instead of just dying instantly. That is, assuming you believe your own stories here.
I still think you’re neglecting some possible survival scenarios: perhaps the AI attacks quickly, not willing to let even a month pass (that would risk another AGI), leaving itself too little time to buy political power. It takes over the internet and tries desperately to hold it, coaxing politicians and bribing admins. But the fire alarm gets raised anyway (a risk the AGI knew about, but chose to take) and people start trying to shut it down. We spend some years, perhaps decades, in a stalemate between those who support the AGI and say it is friendly, and those who want to shut it down ASAP; the AGI fails to build robots in those decades due to insufficient political capital and interference from terrorist organizations. The AGI occasionally finds itself having to assassinate AI safety types, but one assassination gets discovered and hurts its credibility.
My point is, the world is messy and difficult, and the AGI faces many threats; it is not clear that we always lose. Of course, losing even 10% of the time is really bad (I thought that went without saying, but I guess it needs to be stated).