If the question is, “Is the Moon made of Gouda?”, and someone puts forth the argument that “The Moon is almost certainly made of Gouda”, how is that not assuming the conclusion? Proposing to call ‘A[G]I’ (note the equivocation between AI and AGI) a “really powerful optimization process” is like saying “we shouldn’t call it ‘Moon’; ‘Moon’ is too vague; we should call it Giant Heavenly Gouda Ball”. How is that not assuming the conclusion? Especially when the arguments for naming it ‘Giant Heavenly Gouda Ball’ amount to “we all agree it’s Giant, Heavenly, and a Ball, and it’s intuitively obvious that even though the Earth isn’t made of Gouda and we’ve never actually been to the ‘Moon’, the ‘Moon’ is almost certainly made of Gouda”.
Repeatedly restating the conclusion in different words, as if doing so constituted an argument, is actively harmful. This stresses me out a lot, and that’s why I’m being destructive. Even if reasserting the conclusion as an argument is not what ciphergoth intended, you must know that’s how it will be taken by the majority of LessWrongers and, more importantly, by third parties who will be introduced to AI risk by those LessWrongers, e.g. Stuart.
Nonetheless, I am receptive to your claim that this is destructive, and will try to adjust my actions accordingly.