I think this is more feature than bug – the problem is that it’s overwhelming. There are multiple ways to be overwhelming; what we want to avoid is a situation where an overwhelming, unfriendly AI exists. One way is to not build AI of a given power level. The other is to increase the robustness of civilization. (I agree the term is fuzzy, but I think realistically the territory is fuzzy.)
When you’re thinking about how to mitigate the risks, it really matters which of these we’re talking about. I think there is some level of AI capability at which it’s basically hopeless to control the AIs; this is what I use “galaxy-brained superintelligence” to refer to. If you just want to talk about AIs that pose substantial risk of takeover, you probably shouldn’t use the word superintelligence in there, because they don’t obviously have to be superintelligences to pose takeover risk. (And it’s weird to use “overwhelmingly” as an adverb that modifies “superintelligent”, because the overwhelmingness isn’t about the level of intelligence, it’s about that and also the world. You could say “overwhelming, superintelligent AI” if you want to talk specifically about AIs that are overwhelming and also superintelligent, but that’s normally not what we want to talk about.)
I might retract the exact phrasing of my reply comment.
I think I was originally using “overwhelmingly” basically the way you’re using “galaxy-brained”, and I feel like I have quibbles about the exact semantics of that phrase that feel about as substantial as your concern about “overwhelming”. (i.e. there is also a substantive difference between a very powerful brain hosted in a datacenter on Earth, and an AI with a galaxy of resources)
What I mean by “overwhelmingly superintelligent” is “so fucking smart that humanity would have to have qualitatively changed by a similar orders-of-magnitude degree”, which probably in practice means humans also have to have augmented their own intelligence, or have escalated their AI control schemes pretty far, carefully wielding significantly-[but-not-overwhelming/galaxy-brained]-AI that oversees all of Earth’s security and is either aligned, or the humans are really good at threading the needle on control for quite powerful systems.