Specifically, the idea is that AI going well for humans would require a detailed theory of how to encode human values in a form suitable for machine optimization, and the relevance of deep learning is that Yudkowsky and Soares think deep learning is on track to provide the superhuman optimization without the theory of values. You're correct to note that this is a stance according to which "artificial life is by default bad, dangerous, or disvaluable," but I think the way you contrast it with the claim that "biological life is by default good or preferable" gets the nuances slightly wrong: independently-evolved biological aliens with superior intelligence would also be dangerous, for broadly similar reasons.
Didn't you have a post where you argued that it's a consequence of their view that biological aliens are morally better than artificial Earth-originating life, or did I misunderstand?