Without necessarily disagreeing, I’m curious exactly how far back you want to push this. The natural outcome of technological development has been clear to sufficiently penetrating thinkers since the nineteenth century. Samuel Butler saw it. George Eliot saw it. Following Butler, should “every machine of every sort [...] be destroyed by the well-wisher of his species,” so that we “at once go back to the primeval condition of the race”?
In 1951, Turing wrote that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers [...] At some stage therefore we should have to expect the machines to take control”.
Turing knew. He knew, and he went and founded the field of computer science anyway. What a terrible person, right?
I don’t know. To Shane Legg, at least.
According to Eliezer, free will is an illusion, so Shane doesn’t really have a choice.