Do you have any answer at all? Or anything to say on the matter?
Regarding modern video game NPCs, I don’t think they matter in most cases—I’m moderately less concerned about them than Brian Tomasik is, although I’m also pretty uncertain (and would want to study the way NPCs are typically programmed before making any kind of final judgement).
Of course, but I assume you agree with me about the program I wrote?
Yes, that was what I meant to communicate by “Agreed”. :)
Having thought about this further, I think I’m more concerned with things that look like qualia than apparent revealed preferences. I don’t currently guess it’d be unethical to smash a Roomba or otherwise prevent it from achieving its revealed preferences of cleaning someone’s house. I find it more plausible that a reinforcement-learning NPC has quasi-qualia that are worth nonzero moral concern. (BTW, in practice I might act as though things where my modal estimate of their level of value is 0 have some value in order to hedge my bets.)