This article is a bit long. If it would not do violence to the ideas, I would prefer it had been broken up into a short series.
I think you’re altogether correct, but with the caveat that “Friendly AI is useless and doomed to failure” is not a necessary conclusion of this piece.
Any one of us has consistent intuitions
I think this is false. Most of us have inconsistent intuitions, just like we have inconsistent beliefs. Though this strengthens, not undermines, your point.
This means that our present ethical arguments are largely the result of cultural change over the past few thousand years; and that the next few hundred years of change will provide ample grounds for additional arguments even if we resolve today’s disagreements.
Indeed, but hopefully we’ll get better at adapting ourselves to the changing times.
Plus, I could have gotten more karma that way. :)
It started out small. (And more wrong.)
Agreed.