Is the word “systems” required? “prosaic AI” seems like it’s short enough already, and “prosaic AI alignment” already has an aisafety.info page defining it as
Prosaic AI alignment is an approach to alignment research that assumes that future artificial general intelligence (AGI) will be developed “prosaically” — i.e., without “reveal[ing] any fundamentally new ideas about the nature of intelligence or turn[ing] up any ‘unknown unknowns.’” In other words, it assumes the AI techniques we’re already using are sufficient to produce AGI if scaled far enough
By that definition, “prosaic AI alignment” should be parsed as “(prosaic AI) alignment”, implying that “prosaic AI” already means “AI trained and scaffolded using the techniques we are already using”. This definition of “prosaic AI” seems to match usage elsewhere as well, e.g. Paul Christiano’s 2018 definition of “Prosaic AGI”:
It now seems possible that we could build “prosaic” AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about “how intelligence works:”
It’s plausible that a large neural network can replicate “fast” human cognition, and that by coupling it to simple computational mechanisms — short and long-term memory, attention, etc. — we could obtain a human-level computational architecture.
If that term is good enough for you, maybe you can make a short post explicitly coining it, and then link to that post the first time you use the term in each subsequent post.
I do note one slight issue with defining “prosaic AI” as “AI created by scaling up already-known techniques”, which is that all techniques to train AI become “prosaic” as soon as those techniques stop being new and shiny.
2025!Prosaic AI, or, if that’s not enough granularity, 2025-12-17!Prosaic AI. It’s even future-proof if there’s a singularity, you can extend it to 2025-12-17T19:23:43.718791198Z!Prosaic AI
I agree that those terms all have distinct definitions. I think what I want is basically a shorter term for Prosaic AI Systems.
Yeah—I don’t really like that the word “prosaic” has no connection to technical aspects of the currently prosaic models.
I don’t want to start referring to “the models previously known as prosaic” when new techniques become prosaic.