"…most AI designs see screaming humans as no more important or special than pissing rats."
No AI design that we currently have can even conceive of humans. They're in a "don't know" state, not a "don't care" state. They are safe because they are too dumb to be dangerous: danger is a combination of high intelligence and misalignment.
Or you might be talking about abstract, theoretical AGI and ASI. It is true that most possible ASI designs don't care about humans, but that observation is not useful, because AI design is not a random potshot into design space. AI designers don't want AIs that do random stuff: they are always trying to solve some sort of control or alignment problem in parallel with achieving intelligence. Since danger is a combination of high intelligence and misalignment, a dangerous ASI would require efforts at creating intelligence to suddenly outstrip efforts at aligning it. The key word is "suddenly": if progress continues to be incremental, there is not much to worry about.
It might not like this situation and might plot to change it.
Or it might not care.