Solving death is one of those things you don’t want to do if you want the species to prosper.
Why not just avoid creating AGI altogether and set up civilization ourselves in such a way that we don’t doom ourselves? Why even focus on the alignment problem?
So far the process of trying to solve it has only increased the chances of doom…
In other words, I am not convinced that focusing on the alignment problem decreases our chances of doom more than focusing on the “stable civilization without AGIs” problem would.
I am not optimistic about civilization either. I mean, we have tons of wisdom available all around us (e.g. Wikipedia, pirated textbooks), but people just don’t care; they prefer sharing conspiracy theories and pseudoscience on social networks. We have discovered quantum physics, but we still have theocracies that support religious terrorists. We obsess about cultural appropriation while many people live in literal slavery. We can’t even rely on smart and civilized countries outcompeting the stupid and uncivilized ones, because it turns out that the latter can simply buy or steal the technology, and the former are incapable of making all their citizens smart, so they are always potentially one election away from a disaster.
Of course AIs have the potential to make it much worse. But the problem is that, in the current situation, we can’t even coordinate on stopping them. As weapons, they are far more dangerous than nukes, because at least nukes are not dual-use, so there is no profit motive for getting halfway there.
I guess my point is that we should not see civilization and AIs as alternatives. It’s rather that AIs are so dangerous because civilization is so fucked up that we don’t have much of a chance to develop them in a safe way.
I agree with what you’re saying but don’t see how it relates to my take... To me it seems like both people in AI labs and people trying to solve alignment are trying to create and control “god”. What about not creating it in the first place? Like MIRI dedicated its whole existence to “solving alignment” instead of “stable civilization”. Honestly, the latter seems like an easier problem.