Inspiration as a Scarce Resource

Link post

[Epistemic status: Speculation][1]

Generative AI (GenAI)[2] is getting pretty good. Using it involves a certain amount of technical skill; getting a stable diffusion model to do what you want can take all kinds of byzantine prompt engineering. It’s getting better, though, and I don’t doubt that soon we’ll have all kinds of automated or assisted methods for generating prompts. ChatGPT is still a little spotty for some use cases, and set against a human artist it isn’t as impressive as stable diffusion is, but the technology is getting a lot stronger.

Full disclosure: I read this post, and wanted to distill one particular aspect of it: the idea that novelty—particularly, “subjectively interesting novelty”, or “useful novelty”, to be distinguished from the post’s noise-like notion of novelty—constitutes a kind of scarce resource that we use to train generative models.

If all we want is noise, we can apply noise; it’s not difficult to shove a ton of random trash into the weights. The difficult part is increasing the stock of substantive concepts the generator has access to. New tropes, new ideas, new styles… things that feel novel, fresh, noteworthy. Things facilitated by inspiration, in a word.
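To make the asymmetry concrete, here’s a minimal sketch of the easy half, in PyTorch. The placeholder model and noise scale are arbitrary stand-ins of mine; the point is just that corrupting weights with noise is a one-liner, while nothing comparably simple injects a new concept.

```python
import torch
import torch.nn as nn

# Placeholder generator; any nn.Module behaves the same way here.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

# "Shoving random trash into the weights" really is this easy:
with torch.no_grad():
    for param in model.parameters():
        param.add_(torch.randn_like(param) * 0.1)  # additive Gaussian noise
```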


i. inspiration

We could spend a lot of time defining different kinds of inspiration—it doesn’t seem like aesthetic inspiration should be totally fungible with “practical” inspiration, for instance—but I primarily want to delineate how this broad class of thing comes to be.

People report having new ideas in all kinds of different ways. Some of them seem kind of combinatorial: you read or watch or hear something, connect it to something you’ve seen before, and an interesting relationship falls out. I think there’s something deeper to this. You aren’t exactly “combining ideas”; the specific, variably idiosyncratic understanding of something you have in your head, already tied or half-tied to all kinds of fuzzy intuitions and semi-processed thoughts, has a kind of encounter with this new experience you’re having. And then, hopefully, that encounter assembles something you can export, or at least use.

We can also model inspiration in a more abstract way, which might help make sense of “sudden flashes”: we harness noise to pull our ideas in different directions, and then select for coherence. Run this process a few times, and you get a new idea. This undertaking takes place inside your brain, so it gets to make use of all kinds of resources that you haven’t made individually presentable or useful yet. It’s not exactly random: it’s a result of your experiences, your cultural context, your biological condition, your media consumption, your day-to-day life. This makes our outputs more likely to be compatible with the people most similar to us.
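Read literally, that’s a stochastic search: perturb, score, keep the best, repeat. Here’s a toy sketch, with ideas as feature vectors and coherence as a black-box scorer; both representations are stand-ins of mine, not a claim about how brains implement any of this.

```python
import random

def perturb(idea, noise=0.5):
    """Pull each feature of an 'idea' in a random direction."""
    return [x + random.gauss(0.0, noise) for x in idea]

def inspire(idea, coherence, candidates=8, rounds=4):
    """Noise-then-select: spawn noisy variants of the current idea,
    keep the most coherent one, and repeat."""
    best = idea
    for _ in range(rounds):
        variants = [perturb(best) for _ in range(candidates)]
        best = max(variants, key=coherence)
    return best

# e.g. inspire([0.0] * 16, coherence=lambda v: -sum(x * x for x in v))
```

The load-bearing part is the scorer, not the noise: the same loop with a different `coherence` lands somewhere else entirely, which is the abstract version of the point about experience and cultural context.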

Using this resource requires it to be extracted. It exists potentially in the brains of current and future humans, and is actualized by transformation into communicable ideas. It exists in art, in textbooks, and in odd turns of phrase.


ii. imagination

It’s not my intent to make a sweeping claim like “AI can’t be creative”. It’s not even my intent to say current-day GenAI can’t be creative, especially since I’m not quite sure what that means. Stable diffusion and GPT can generalize from available data and create outputs that didn’t exist before, which looks a lot like “creativity”. We can theorize, in a broad sense, that GenAI learning and prompting involves similar “encounters” to the ones that happen when we have novel experiences. But unlike a human mind, it does this at a degree of indirection: it has a partial picture of our outputs that it uses to make an even more partial picture of our tastes, dreams, and faculties.

This shouldn’t be insurmountable for sufficiently advanced AI. Even ignoring the (speculative) powers of superintelligence to extrapolate wildly from small amounts of data, we can imagine that, in the future, AI will be able to “select for coherence” in the same way that we do, creating more novel data that it can use to inspire itself without becoming unapproachable to humans.
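Mechanically, one guess at what that might look like is a data flywheel: generate, filter for coherence, fold the survivors back into training. The sketch below is entirely hypothetical; `generate`, `coherence`, and `finetune` are stand-ins for capabilities we don’t have yet.

```python
def self_inspire(model, corpus, generate, coherence,
                 threshold=0.8, steps=10, batch=100):
    """Hypothetical self-inspiration loop: the model proposes candidates,
    a coherence filter keeps the promising ones, and the survivors become
    new training data. Every callable here is an assumed stand-in."""
    for _ in range(steps):
        candidates = [generate(model) for _ in range(batch)]
        keepers = [c for c in candidates if coherence(c) >= threshold]
        corpus.extend(keepers)          # novelty enters the training pool
        model = model.finetune(corpus)  # assumed retraining capability
    return model, corpus
```

The catch, as the next paragraph argues, is that nothing today fills the `coherence` slot well enough for this loop to run without humans in it.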

It’s clear that that time isn’t now, though. Some models can barely make use of unlabeled data, and it’s unclear how far off that time is. And before that leap in capabilities happens, we have a resource problem: how do we feed more inspiration to our models? We have to do it piecemeal: by interacting with models, and by feeding them human-created content.

The wealth of human artistic and linguistic expression already available for processing is nowhere near exhausted by current machine learning efforts. But even if we could make use of all of it, I think we have reason to believe some of it will be outdated or less than entirely useful, and in some sense the whole thing may be incomplete.


iii. approbation

Don’t get me wrong. Models a few generations ahead, trained on the full breadth of (recorded) human artistic achievement, could create an absurd cornucopia of amazing art. Maybe more than enough to exhaust the consumptive preferences of most people. I’m trying to make a more subtle point. The basic outline goes something like this:

  1. There existed styles that have been forgotten, and there could be styles that don’t yet exist.

  2. There are soft limits to how far a GenAI can extrapolate from existing data.

  3. Ergo, in conditions that respect these soft limits, certain styles will not be represented in outputs.

This generalizes beyond the common-language concept of “style”, but I’m going to stick with that for now. The burning question: why does this matter? I’m making an assumption here: we don’t want art (or whatever other kinds of cultural thought we get GenAI to produce) to stagnate. Some repetition and iteration on the same thing is probably fine—we seem to accept it in the form of literary tropes and themes—but we also crave things that seem genuinely novel to us.

This is all to articulate a particular non-sentimental way in which GenAI is unlikely to obviate the role of many kinds of writers and graphical artists in the short to medium term. If we want to avoid stagnation, we need people producing and extracting novelty that they can then incorporate into GenAI.

But there is also one related sentimental reason that might last into the long term.

I’ve been sloppy on one particular count (though certainly on others too): I’ve been treating novelty extraction as a search over a pre-existing space. I think the reality of novelty has aspects of a more free-flowing movement; as ideas spread, tastes change. It’s not that we’re (just) discovering more things that are aesthetically compatible with humans. The underlying thing, aesthetics, is moving along with us.

And so, there’s a sentimental argument that it should be humans guiding that process. That it would be a tragedy if we passed the torch of long-term cultural progress to AI. Perhaps even that it could cause us to diverge where the human process could help us converge; I haven’t thought about this enough, and the idea could use some hammering out, so take that with a grain of salt. Thankfully, I don’t think most artists are eagerly waiting to become prompt-technicians or retire to a life of pure consumption.


See also:
Novelty Generation—The Art of Good Ideas

  1. ^

    [Deontic status: Inadvisable]

  2. ^

    I feel like there has to be a better way to say this. Maybe if we all agreed to disambiguate between “GAI” and “AGI”? But that’s kind of confusing. Alas.
