In any given performance, current AI can have at most 2 of the following 3 properties:
Interesting (generates outputs that are novel and useful)
Superhuman (outperforms humans)
General (reflective of understanding that is genuinely applicable cross-domain)
AlphaFold’s outputs are interesting and superhuman, but not general. Likewise for the other Alphas (AlphaGo, AlphaZero, and so on).
LLM outputs are a mix. There’s a large swath of things LLMs can do superhumanly, e.g. generating sentences very fast, or various kinds of search. Search is, we could say, weakly novel in a sense: LLMs are superhumanly fast at a form of search that is not very reflective of general understanding. Quickly generating poems in which every word starts with the letter “m”, or quickly and accurately answering stereotyped questions like analogies, is superhuman and reflects a weak sort of generality, but is not interesting.
ImageGen is superhuman and a little interesting, but not really general.
Many architectures + training setups constitute substantive generality (they can be applied to many datasets) and produce interesting outputs (trained models). However, considered as general training setups (i.e., as procedures to be applied across several contexts), they perform at a subhuman level.
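To make the sense of "general training setup" concrete, here is a minimal sketch (my illustration, not from the original text): one fixed training routine applied unchanged to two unrelated synthetic datasets. The routine transfers across datasets, while each trained model it produces remains narrow. All names and datasets here are invented for illustration.

```python
# Illustrative sketch: the same generic training setup (linear model + SGD)
# applied to two different synthetic datasets. The setup is "general" in that
# it works on many datasets; each resulting model is narrow.
import random

def make_linear_data(slope, intercept, n=200, noise=0.1, seed=0):
    """Synthetic 1-D regression data: y = slope * x + intercept + noise."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [slope * x + intercept + rng.gauss(0.0, noise) for x in xs]
    return list(zip(xs, ys))

def train(dataset, lr=0.1, epochs=50):
    """One generic training routine, reusable on any (x, y) dataset."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in dataset:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient of squared error w.r.t. b
    return w, b

# Same setup, two different contexts; a human still chooses where to apply it.
dataset_a = make_linear_data(slope=2.0, intercept=-0.5, seed=1)
dataset_b = make_linear_data(slope=-3.0, intercept=4.0, seed=2)
print("model A:", train(dataset_a))  # roughly (2.0, -0.5)
print("model B:", train(dataset_b))  # roughly (-3.0, 4.0)
```

The point of the sketch is only that the generality lives in the setup, not in the outputs: deciding which contexts the setup applies to, and adapting it when it doesn't, is still done by humans.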