A really good name choice would help with the annoying people on Reddit I sometimes talk to who insist that these things aren’t intelligent. I have interviewed a few, and the consensus seems to be that they are powerful, but that whatever they do to get that power is not intelligence. I suspect the crowd with this opinion defines intelligence in a way that can only be achieved by something I’d call an aligned AI, or perhaps in a way that can only be achieved by an uncontrollable AI; somewhere in that space.
...anyway, how about… deep learning! or large ML? or sequential token generators? depends how specific you want to be. should AlphaFold count? it’s a transformer. should WaveNet count? should Sora count? should DeepMind’s Gato count?
re: “models of what”—most naively, they’re models of the dataset.
For those who insist these aren’t intelligent, I like my extremely general term “Outcome Influencing System (OIS)”, pronounced “oh-ee” and defined as “any system composed of capabilities and preferences which uses its capabilities, informed by its preferences, to influence reality towards outcomes it prefers”. Then it becomes trivially true that these things have capabilities and that the capabilities are improving.