Superintelligence has usually been used to mean far more intelligent than a human in the ways that practically matter. Current systems aren’t there yet, but they will be soon.
Seed AI has been most commonly used to mean AI that improves itself fast.
Yes, if you took the component words seriously as definitions, you’d conclude that we already have ASI and seed AI. But that’s not how language usually works.
I think it is much more true that we have not reached ASI or seed AI than that we have.
I think this essay assumes a definitional theory of language, which is simply not how language works. The constructivist view is that words mean what people mean when they say them. I think that is the more accurate theory: it better describes reality, how language is actually used, and what words really mean.
We might prefer a world in which words were crisply defined, but we do not live in that world.
So I think that not only is there an intuitive sense in which we have not yet reached seed AI, or even recursively self-improving AI or superintelligence, but the practical implications of blurring that line by saying we're already there would be very harmful. Those terms were all invented to describe the incredible danger of coming up against AI that can outsmart us quickly and easily in the domains that lead directly to power, and that can improve itself fast enough to be unexpectedly dangerous. The terms were invented for that purpose and should, in a practical sense, be reserved for it. It is a bonus that this is also how they are commonly used.