I agree with the sentiment that there are philosophical difficulties AI needs to take into account, but formulating them would very likely take far too long. Simpler kinds of indirect normativity that involve prediction of uploads allow delaying that work until after AI.
So this issue doesn't block all actionable work, as its straightforward form would suggest: there may be no need for these activities to happen in this order in physical time. Instead, it motivates work on the simpler kinds of indirect normativity that would allow such philosophical investigations to take place inside the AI's values. In particular, it motivates figuring out what kind of thing the AI's values are, in sufficient generality that they could represent the results of unexpected future philosophical progress.