Why are we giving up on plain “superintelligence” so quickly? According to Wikipedia:
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds. Philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”.
According to Google AI Overview:
Superintelligence (or Artificial Superintelligence—ASI) is a hypothetical AI that vastly surpasses human intellect in virtually all cognitive domains, possessing superior scientific creativity, general wisdom, and social skills, operating at speeds and capacities far beyond human capability, and potentially leading to profound societal transformation or existential risks if not safely aligned with human goals.
I don’t think I saw anyone use “superintelligence” to mean “better than a majority of humans on some specific tasks” before very recently. (Was Deep Blue a superintelligence? Is a calculator a superintelligence?)
Partly because I don’t think a Superintelligence by that definition is actually, intrinsically, that threatening. I think it is totally possible to build That without everyone dying.
The “It” that is not possible to build without everyone dying is an intelligence that is either overwhelmingly smarter than all of humanity, or a moderate non-superintelligence that is situationally aware and has the element of surprise, such that it can maneuver itself into becoming overwhelmingly smarter than humanity.
Meanwhile, I think there are good reasons for people to want to talk about various flavors of weak superintelligence, and trying to force them to use some other word for that seems doomed.
There seem to be two underlying motivations here, which are best kept separate.
One motivation is having a good vocabulary to talk about fine-grained distinctions. I’m on board with this one. We might want to distinguish e.g.:
Smarter than a median human along all AI-risk-relevant axes
Smarter than the smartest human along all AI-risk-relevant axes
Smarter than all of humanity put together along all AI-risk-relevant axes
Smart enough to have a 50% probability of successfully killing all humans if it chooses to, given the current level of countermeasures
Smart enough to have a 50% probability of successfully killing all humans if it chooses to, even if best-case countermeasures are in place (this particular distinction inspired by Buck’s comments on this thread)
But then, first, it is clear that existing AI is not a superintelligence under any of the above interpretations. Second, I see no reason not to use catchy words like “hyperintelligence”, per One’s suggestion. (Although I agree that there is an advantage to choosing more descriptive terms.)
Another motivation is staying ahead of the hype cycles and epistemic warfare on Twitter or whatnot. This one I take issue with.
I don’t have an account on Twitter, and I hope I never will. Twisting ourselves into pretzels with ridiculous words like “AIdon’tkilleveryoneism” is incompatible with creating a vocabulary optimized for actually thinking and having productive discussions among people who are trying to be the adults in the room. Let the twitterites use whatever anti-language they want. To the people trying to do beneficial politics there: I sincerely wish you luck, but I’m laboring in a different trench, and let’s each use the proper tool for our separate tasks.
I understand that there can be practical difficulties, such as: what if LW ends up using a language so different from the outside world’s that it becomes inaccessible to outsiders, even when those outsiders would otherwise make valuable contributions? There are probably some tradeoffs that are reasonable to make with such considerations in mind. But let’s at least not abandon any linguistic position at the slightest threatening gesture from the enemy.
Two categories that don’t quite match the ones you laid out here.
I think there is something like “being a good citizen when trying to create jargon.” Don’t pick a word that everyone will predictably misunderstand, or will predictably really want to use for some other more common thing, if you want to also be able to have conversations with that “everyone.”
This isn’t (primarily) about fighting political/hype cycles; it’s just… like, well, one negative example I updated on: Eliezer defines meta-honesty as “be at least as honest as a highly honest person AND ALSO always be honest about under what circumstances you will be honest.” He tacked on the first part for a reason (to avoid accidentally encouraging people to use “metahonesty” for clever self-serving arguments). But, frankly, “metahonesty” is a pretty self-explanatory word if it just means the second thing, and most people will probably interpret it to mean just the “be honest about being honest” part.
I think the bundle-of-concepts Eliezer wanted to point to should be called something more like “Eliezer’s Code of (Meta)-honesty” or something catchier but more oddly specific. And let “metahonesty” just be a technical term that isn’t also trying to be a code of honor, that means what it sounds like it should mean.
...
Also, re: staying ahead of a political race. It’s kinda reasonable to just Not Wanna Play That Game, but note that a lot of the stakes here are not in “doing politics Out There somewhere” but in having terminology that keeps making sense in intellectual circles. If most of the people studying AI, even from the perspective of AI safety, end up studying “weak superintelligences”, then trying to preserve a definition under which the term always means “overwhelmingly strong” is setting yourself up for a lot of annoying conversations even just while trying to discuss the concepts intellectually.
I think the explicit suggestion is to retreat to a more specific term rather than fight against the co-option of “superintelligence” to hype spikily human-level AI.
I agree that “superintelligence” has the right usage historically, and the right intuitive connotation.
Superman isn’t slightly stronger than the strongest human, let alone the average human. He’s in a different category. That’s what’s evoked. But technically “super” just means “better”, so slightly-better-than-human technically qualifies. So I see the term getting steadily co-opted for marketing, and agree we should have a separate term.
Why are we giving up on plain “superintelligence” so quickly? According to Wikipedia:
According to Google AI Overview:
I don’t think I saw anyone use “superintelligence” to mean “better than a majority of humans on some specific tasks” before very recently. (Was Deep Blue a superintelligence? Is a calculator a superintelligence?)
I think the distinction is between “smarter and more capable than any human” and “smarter and more capable than humanity as a whole”.
The former is what you refer to, which could still be “Careful Moderate Superintelligence” in the view of the post.