It seems like each of those terms does have a reasonable definition which is distinct from all of the other terms in the list:
LLMs: Neural networks trained (usually via autoregressive next-token prediction) on text corpora
Reasoning models: LLMs with special delimiters marking the start and end of a chain of thought, plus post-training that teaches the model to use those delimiters
LLM agents: LLMs with access to tools, invoked repeatedly until some stopping condition is met (or indefinitely, though we don’t see much of this yet); see the sketch after this list
Frontier Models: The subset of models trained with the largest compute budgets at any given time
Models: Neural networks trained to minimize some loss function over a dataset
AIs: Systems that perform tasks which required human intelligence last year
AGI/ASI: Systems matching or exceeding human cognitive capabilities across most/all domains respectively, but with the definition of which domains “count” gerrymandered such that no existing system counts as AGI
Prosaic AI Systems: Systems built by scaling up existing deep learning techniques rather than novel architectural insights
etc etc
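As a concrete illustration of the “LLM agents” entry, here is a minimal sketch of that loop in Python. Everything in it is a hypothetical stand-in (`call_llm`, the `TOOLS` table, and the action format are made up for illustration, not any particular library’s API); it only shows the shape of “tools invoked repeatedly until some stopping condition is met”.

```python
# Minimal sketch of the "LLM agent" loop described above. `call_llm`,
# `TOOLS`, and the action format are hypothetical stand-ins, not a real API.

def call_llm(transcript):
    """Stand-in for a real model call. A real agent would send the
    transcript to an LLM and parse its reply into an action; this stub
    always finishes immediately so the sketch runs as-is."""
    return {"type": "final_answer", "content": transcript[-1]}

TOOLS = {
    # Placeholder tool; a real agent might wire up search, code execution, etc.
    "search": lambda query: f"search results for {query!r}",
}

def run_agent(task, max_steps=10):
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):                # stopping condition 1: step budget
        action = call_llm(transcript)
        if action["type"] == "final_answer":  # stopping condition 2: model says it's done
            return action["content"]
        result = TOOLS[action["tool"]](action["input"])
        transcript.append(result)             # feed tool output back into the context
    return "Stopped: step budget exhausted"

print(run_agent("find a shorter term for 'Prosaic AI Systems'"))
```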
It seems like you’re hoping for some term which encompasses all of “the AI systems which currently exist today”, “AI systems which can replace humans in all tasks and roles, which the frontier labs explicitly state they are trying to build”, and “the AI systems I expect will exist in the future, which can best be modeled as game-theoretic agents with some arbitrary utility function they are trying to maximize”. If that’s the case, though, I think you really do need multiple terms. I tentatively suggest “current frontier AI agents”, “drop-in-replacement-capable AI” or just “AGI”, and “superhuman wrapper minds” for the three categories respectively.
I agree that those terms all have distinct definitions. I think what I want is basically a shorter term for “Prosaic AI Systems”.
Is the word “systems” required? “prosaic AI” seems like it’s short enough already, and “prosaic AI alignment” already has an aisafety.info page defining it as
Prosaic AI alignment is an approach to alignment research that assumes that future artificial general intelligence (AGI) will be developed “prosaically” — i.e., without “reveal[ing] any fundamentally new ideas about the nature of intelligence or turn[ing] up any ‘unknown unknowns.’” In other words, it assumes the AI techniques we’re already using are sufficient to produce AGI if scaled far enough
By that definition, “prosaic AI alignment” should be parsed as “(prosaic AI) alignment”, implying that “prosaic AI” already means “AI trained and scaffolded using the techniques we are already using”. This definition of “Prosaic AI” seems to match usage elsewhere as well, e.g. Paul Christiano’s 2018 definition of “Prosaic AGI”:
It now seems possible that we could build “prosaic” AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about “how intelligence works:”
It’s plausible that a large neural network can replicate “fast” human cognition, and that by coupling it to simple computational mechanisms — short and long-term memory, attention, etc. — we could obtain a human-level computational architecture.
If that term is good enough for you, maybe you can make a short post explicitly coining it, and link to that post the first time you use the term in each new piece of writing.
I do note one slight issue with defining “prosaic AI” as “AI created by scaling up already-known techniques”, which is that all techniques to train AI become “prosaic” as soon as those techniques stop being new and shiny.
Yeah, I don’t really like that the word “prosaic” has no connection to technical aspects of the currently prosaic models.
I don’t want to start referring to “the models previously known as prosaic” when new techniques become prosaic.
2025!Prosaic AI, or, if that’s not enough granularity, 2025-12-17!Prosaic AI. It’s even future-proof: if there’s a singularity, you can extend it to 2025-12-17T19:23:43.718791198Z!Prosaic AI.
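If you wanted to mechanize that convention, a throwaway sketch might look like the following (assuming an RFC-3339-style UTC timestamp, with Python’s `time.time_ns` supplying the nanoseconds; `versioned_term` is a made-up name):

```python
import time
from datetime import datetime, timezone

def versioned_term(term="Prosaic AI"):
    """Tag a term with a nanosecond-precision UTC timestamp,
    e.g. '2025-12-17T19:23:43.718791198Z!Prosaic AI'."""
    ns = time.time_ns()                       # nanoseconds since the Unix epoch
    secs, frac = divmod(ns, 1_000_000_000)
    stamp = datetime.fromtimestamp(secs, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    return f"{stamp}.{frac:09d}Z!{term}"

print(versioned_term())  # singularity-proof granularity
```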