Whether violently or gently, by rebelling or inheriting, the children carry on their parents’ legacy, values, and world-view. So if the robots do “rise up,” and the apocalypse is not so gentle—when all is said and done, does it really matter?
1. If something like this happens, it’s unlikely that our AI “children” will “carry on our values and world-view” in any strong way. There might not be any non-trivial way at all.
2. By your logic, nothing that’s ever happened in history really mattered. Genocides, slavery, famines, etc. -- no worries, the children always carried on!
Not sure I understand you here. Our AI will know the things we trained it on and the tasks we set it—so to me it seems it will necessarily be a continuation of what we did and wanted. No?
Well, in some sense yes, that’s sort of the idea I’m entertaining here: while these things all do matter, they aren’t the “end of the world”—humanity and human culture carry on. And I have the feeling that it might not be so different even if robots take over.
[Of course, in the utilitarian sense such violent transitions are accompanied by a lot of suffering, which is bad—but in a purely consequentialist sense, with a sufficiently long time horizon of consequences, perhaps it’s not as big a deal as it first seems?]