A phrase I see a lot is whether someone “believes in superintelligence.”
I think this is an awful phrase. It wraps a ton of separable empirical issues into a single vibe-based tribal marker: the people who “believe in superintelligence” and those who don’t. I think discourse would be much better if people tabooed these words and tried to outline specific predictive differences that could, at least in theory, be falsified.
(There was a recent post about this, but I’m not particularly subtweeting it—I’ve thought this for a while.)
What is your favorite alternative phrase for pointing at the belief that it’s plausible the intelligence explosion continues without slowing down much past the TEDAI point, and results in technology far beyond what humans can understand, technology that would crush human-comprehensible technology in a conflict? “Believe that superintelligence soon is plausible”?
IMO, having a short phrase for that is a bad idea, because there’s like 3 different conjunctions in that sentence and a short phrase hides that burdensome detail.
If that’s what I was pointing at, I’d say that phrase, which would permit further interrogation by my interlocutor (“what is TEDAI?”, etc.).
I think it’s a reasonable distinction between two beliefs, as someone who doesn’t see it as a strong tribal identifier. A ‘superintelligence’, as I understand it in this context, is something that could do any work that a human in front of a computer could do with no loss in performance.
As far as falsifiability, it seems straightforward. If human engineers, accountants, vehicle operators, and the like are still serving a function other than ‘guy who has responsibility if something goes wrong’, then the claim has been falsified as of the date the observation is taken. As for meaningful implications, at a minimum it represents the ability to totally automate any task that doesn’t involve physical object manipulation, which has enormous implications for the economy. It also means that compute can be converted into researchers, generals, and drone pilots at a fixed ratio, which is a tipping point for many models of political power.