I think this post is pointing at something very real, but doesn’t quite cleave reality at the joints...
Take “I have the idea of a story”, where “the idea” is “a vague image of one tiny scene in a story”. If you actually properly zoom in on that image, tons of concrete high-level details about the actual story would likely fall out of it. There would be the object-level features of the scene (what characters are present, what their narrative roles are); the assumptions sitting at the back of your mind regarding how these details should be interpreted (what genre conventions have been established, what’s the overall context in which this scene makes sense); the implicit vibe you want the readers to have when reading that scene (what emotional state they should be in, what they should expect to happen next, whether they should be positively/negatively disposed towards specific characters); and so on.
When you start from a vague image, these details aren’t immediately available in the form of declarative knowledge. You basically need to solve the inverse vibing problem: answer the question of “what concrete details would result in this vibe I’m aiming for?”. But solving that problem is very much possible, and gets easier with practice. Afterwards, you’d end up with a general idea of the story structure that could embed/generate your initial vague image.
Similar for “let’s build a startup that does X”, or “let’s build a theory of complex systems”, or “let’s solve alignment via Y”. You may be starting from indirect desiderata – e.g., target observables, or semi-metaphorical plans/vibes – but those still function as constraints. If you approach it correctly, you’d still be able to propagate those constraints around your world-model, all the way to (constraints over) “what is it I’m building?” or “what motor actions should I take next?”.
So the issue isn’t really that the “dreams of ideas” don’t have details. They do, just in an implicit/compressed form.
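To make the constraint framing a bit more tangible, here’s a minimal toy sketch in Python (entirely my own illustration, not anything from the post: the “world-model” is just a cartesian product of made-up story details, and the “vibes” are hand-written predicates). It only illustrates the pruning dynamic, i.e. how individually-weak indirect constraints can jointly pin down concrete details:

```python
from itertools import product

# A toy "design space": every combination of concrete details is a candidate.
tones    = ["whimsical", "ominous", "clinical"]
settings = ["space station", "small town", "royal court"]
stakes   = ["personal", "civilizational"]
candidates = list(product(tones, settings, stakes))

# Indirect desiderata, expressed as predicates over candidates.
constraints = [
    lambda tone, setting, stake: tone != "whimsical",       # "it should feel tense"
    lambda tone, setting, stake: setting != "royal court",  # "modern, not period"
    lambda tone, setting, stake: stake == "personal",       # "intimate, not epic"
]

# "Propagation" here is just conditioning: filter on each constraint in turn.
for constraint in constraints:
    candidates = [c for c in candidates if constraint(*c)]
    print(len(candidates), "candidates remain")

print(candidates)  # concrete details the vague vibe was implicitly pinning down
```

Each vibe-constraint is weak on its own, but conditioning on all of them takes the space from 18 candidates down to 4.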
My take on a similar concept is: Suppose we have some goal, such as “write an interesting story”, or “make a million-dollar startup”, or “solve alignment”, or “incentivize innovation”. An “empty” idea is an idea that seems like it solves the hard parts of achieving this goal, but actually doesn’t. For example,
- A vague image of a scene which, when you “unfold” it into the story (via the above approach), results in a story you don’t find interesting at all.
- A startup idea which, when you lock down the implementational details, just reduces to “an LLM wrapper”, incapable of attracting any investment.
- An alignment scheme which turns out not to solve any of the core difficulties at all: under it, they’re still open questions, and answering them is where the real work is.
- An innovation-incentivizing plan which amounts to “put calls for innovation into a policy document”, which will obviously not actually result in tons of innovations.
In the constraint-propagation interpretation: “empty ideas” are ideas that, once you condition your plans on implementing them, fail to prune the implementational possibility-space to only those constructions that achieve your goal. After the conditioning, you still have to figure out how to make the story interesting, how to attract investors/users, how to solve the hard problems, how to incentivize innovation.
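In the same toy style as the sketch above (again purely illustrative, with a made-up startup design space): an “empty” idea, cast as a predicate, is satisfied by essentially the whole space, so conditioning on it leaves every hard question open, whereas a substantive commitment actually narrows things down:

```python
from itertools import product

# A made-up startup design space: what you build x where the moat is.
products = ["chat wrapper", "vertical agent", "fine-tuned model"]
moats    = ["none", "proprietary data", "distribution"]
space    = list(product(products, moats))

# The empty idea prunes nothing; the substantive one does real work.
empty       = lambda prod, moat: True                       # "an AI startup!"
substantive = lambda prod, moat: moat == "proprietary data"

print(sum(empty(*x) for x in space), "of", len(space), "survive the empty idea")
print(sum(substantive(*x) for x in space), "survive the substantive one")
```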
The difference from “dreams of ideas” is that empty ideas are very much possible to translate into implementational details. It’s just that those details are unhelpful for solving the problem you care about.
I think the reason dreams of ideas are generally not that useful is that they are usually “empty”. The specific details of an idea are so important that, if you haven’t actually figured out what those details are, your prior should be that, when you try to flesh the idea out, it’s probably not going to be much better than what you would come up with after 5 seconds of wishful thinking.
Seems plausible, yep!