Is it possible to develop specialized (narrow) AI that surpasses every human at infecting/destroying GPU systems, but won't wipe us out? An LLM-powered Stuxnet would be an example. Bacteria aren't smarter than humans, but they are still very dangerous. It seems like a digital counterpart could disable GPUs and thereby prevent AGI.
(Obviously, I'm not advocating for this in particular, since it would mean the end of the internet and I like the internet. It seems likely, however, that narrow AI could perform pivotal acts that prevent AGI without itself being AGI.)
This has always been fascinating to me, but I think there is a clear answer: Good and Bad things cluster together across many different axes, not just spatially. If you find something good, keep going in that direction; there is probably more Good around it. If you find something bad, flee; there is probably a lot more Bad around it, too.
Both the good and the bad clusters have limits, and bad things do happen next to good things. But on the whole, good tends to congregate densely with good, and bad with bad.
This example seems to me to be looking at the wrong dimensions/axes (economics, teachers, and neighborhood may be less influential than friends, family, and romantic relationships), and so it misses that there are still clear clusters of good and bad surrounding each person.
Understanding that good and bad things cluster together has driven the largest change in my life and has been a huge improvement for me.