Would be nice if this had a summary. The article is basically about how concerns about AI consciousness are missing the point, because an AI doesn’t need to be conscious to be dangerous.
Thanks! LW was malfunctioning when I posted this, otherwise I would have.
This may be a naive and over-simplified stance, so educate me if I’m being ignorant--
but isn’t promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn’t we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.
Feel free to just provide a link if the argument has been discussed before.