Not sure I understand you here. Our AI will know the things we trained it and the tasks we set it—so to me it seems it will necessarily be a continuation of things we did and wanted. No?
Well, in some sense yes, that’s sort of the idea I’m entertaining here: while these things all do matter, they aren’t the “end of the world”—humanity and human culture carry on. And I have the feeling that it might not be so different even if robots take over.
[Of course, in the utilitarian sense such violent transitions are accompanied by a lot of suffering, which is bad. But in purely consequentialist terms, with a sufficiently long time horizon for the consequences, perhaps it’s not as big as it first seems?]