Personally, I worry about AIs being philosophically incompetent, and think it’d be cool to work on, except that I have no idea whether marginal progress on this would be good or bad. (Probably that’s not the reason for most people’s lack of interest, though.)
Is it because of one of the reasons on this list, or something else?
I had in mind:
- 1 and 2 on your list;
- like 1 but more general: it seems plausible that value-by-my-lights-on-reflection is highly sensitive to the combination of values, decision theory, epistemology, etc. that civilization ends up with, such that I should be clueless about the value of small marginal nudges toward philosophical competence;
- Ryan's reply to your comment.