It’s very hard to imagine humans prospering in a world with AGIs. What’s the point of solving the alignment problem? Is it so that one ASI can create a civilization where AGIs will never exist and then self-destruct?
I am sure there were articles written about this a decade ago. The short version is that if an AI wants to be friendly and still leave humans some meaning, it could e.g. solve the most pressing problems (such as death and war) and then let humans figure out the rest. Analogous to a parent who lets his children learn things, but gives them food and makes sure they won’t get hurt.
One possible solution (not necessarily the only one): imagine being immortal, and if you get too sick the AI will magically cure you, and it will prevent you from being a victim of crime, but otherwise everything is left for humans to do. You won’t starve, but the only way to eat a cake is to bake one, or pay another human to do it. (Maybe all generative AIs are banned?) Now it’s up to humans to learn science and invent things. I think that might be enjoyable.
Solving death is one of those things you don’t want to do if you want the species to prosper.
Why not just avoid creating AGI altogether and set up civilization ourselves in such a way that we don’t doom ourselves? Why even focus on the alignment problem?
So far the process of solving it has only increased the chances of doom…
In other words, I am not convinced that focusing on the alignment problem decreases our chances of doom more than focusing on the “stable civilization without AGIs” problem.
I am not optimistic about civilization either. I mean, we have tons of wisdom available all around us (e.g. Wikipedia, pirated textbooks), people just don’t care, they prefer sharing conspiracy theories and pseudoscience on social networks. We have discovered quantum physics, but we still have theocracies that support religious terrorists. We obsess about cultural appropriation, while many people live in literal slavery. We can’t even rely on the fact that smart and civilized countries will outcompete the stupid and uncivilized ones, because it turns out that the latter can simply buy or steal the technology, and the former are incapable of making all their citizens smart, so they are always potentially one election away from a disaster.
Of course AIs have the potential to make it much worse. But the problem is, in the current situation, we can’t even coordinate on stopping them. As weapons, they are far more dangerous than nukes, because at least nukes are not dual use, so there is no profit motive for getting halfway there.
I guess my point is that we should not see civilization and AIs as alternatives. It’s rather that the AIs are so dangerous because civilization is so fucked up that we don’t have much of a chance to develop the AIs in a safe way.
I agree with what you’re saying but don’t see how it relates to my take... To me it seems like both people in AI labs and people trying to solve alignment are trying to create and control “god”. What about not creating it in the first place? Like MIRI dedicated its whole existence to “solving alignment” instead of “stable civilization”. Honestly the latter seems like an easier problem.
I don’t like the term “map”, because it communicates something static. But the real world is not static; it evolves dynamically. Sure, there are things in life for which I guess it is good to use the term, for instance mathematical theorems, physical laws, etc. But does anyone have a better term for the real world? Something like current-world-state, so that I could still say “my idea of the current world state corresponds to the real world state”, but shorter?
I think with modern interactive navigation support, the static implication of “map” is probably going away anyway. But to be clear, the POINT of the map-territory metaphor is that the map CANNOT be perfect, both because it’s less detailed and because it can’t change at the same rate as the world.
The real world is quite dynamic. Our mental models and descriptions of the world are less dynamic than the world itself.
Beliefs? World model?
not as nice as map corresponding to territory tho