That sort of subject is inherently implicit in the kind of decision-theoretic questions that MIRI-style AI research involves. More generally, when one is thinking about astronomical-scale questions, and aggregating utilities, and so on, it is a matter of course that cosmically bad outcomes are as much of a theoretical possibility as cosmically good outcomes.
Now, the idea that one might need to specifically think about the bad outcomes, in the sense that preventing them might require strategies separate from those required for achieving good outcomes, may depend on additional assumptions that haven’t been conventional wisdom here.
Right, I took this idea to be one of the main contributions of the article, and assumed that this was one of the reasons why cousin_it felt it was important and novel.