Indeed, that’s the conclusion I came to. What I wonder now is how we operant-condition ourselves without just reinforcing reinforcement itself. Which, I suppose, is more or less precisely what the Friendly AI problem is.