In this example, you’re trying to make various planning decisions; those planning decisions call on predictions; the predictions are about (other) planning decisions; and these form a loopy network. This is plausibly an intrinsic / essential problem for intelligences, because it involves the intelligence making predictions about its own actions, while those actions are still under consideration and kinda depend on those same predictions. The difficulty of predicting “what will I do” grows in tandem with the intelligence, so any sort of problem that makes a call to the whole intelligence might unavoidably make it hard to separate predictions from decisions.
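To make the loopiness concrete, here’s a toy sketch (in Python; everything in it is made up for illustration, not a claim about how a real agent would be built): the decision consults a prediction of the very action being decided, and naively iterating the loop needn’t settle anywhere.

```python
# Toy sketch of the prediction/decision loop (illustrative only).

def predict_my_action(assumed_action):
    # Stand-in self-model: the prediction is just whatever we currently
    # assume we'll do; a real agent would have some learned model of itself.
    return assumed_action

def decide(prediction):
    # The decision calls on the prediction. Here, perversely, the best
    # response to predicting "rest" is "work", and vice versa.
    return "work" if prediction == "rest" else "rest"

# Naive fixed-point search: feed the decision back into the prediction.
action = "work"
for step in range(6):
    prediction = predict_my_action(action)
    action = decide(prediction)
    print(step, prediction, "->", action)
# This oscillates forever: the prediction and the decision never agree,
# which is the "can't cleanly separate predicting from deciding" problem
# in miniature.
```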
A further wrinkle / another example: during the design process, a question like “what should I think about (in particular, what to gather information about / update about)” wants these predictions. For example, I run into problems like:
I’m doing some project X.
I could do a more ambitious version of X, or a less ambitious version of X.
If I’m doing the more ambitious version of X, I want to work on pretty different stuff right now, at the beginning, compared to if I’m doing the less ambitious version. Example 1: a programming project; should I put in the work ASAP to redo the basic ontology (datatypes, architecture), or should I just try to iterate a bit on the MVP and add epicycles? Example 2: an investigatory blog post; should I put in a bunch of work to get a deeper grounding in the domain I’m talking about, or should I just learn enough to check that the specific point I’m making probably makes sense?
The question of whether to do ambitious X vs. non-ambitious X also depends on / gets updated by those very computations that I’m considering how to prioritize (toy sketch below).
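Here’s a toy sketch of that loop (all names, numbers, and the update rule are made up; it’s only meant to show the circular dependency): which computations look worth running depends on which version of X I’m planning, and the plan gets revised by the results of exactly those computations.

```python
# Toy sketch of the prioritization loop (illustrative names and numbers).

import random

def run_computation(topic):
    # Stand-in for "gather information / update about `topic`":
    # returns a noisy estimate of how promising the current plan looks.
    return random.random()

def worthwhile_computations(plan):
    # What's worth thinking about right now depends on which plan I'm on.
    return {
        "ambitious X": ["redo the basic ontology", "get deep domain grounding"],
        "modest X": ["iterate on the MVP", "spot-check the specific claim"],
    }[plan]

def best_plan(estimates):
    # Pick ambitious vs. non-ambitious X given current estimates.
    return max(estimates, key=estimates.get)

estimates = {"ambitious X": 0.5, "modest X": 0.5}  # prior guesses
plan = best_plan(estimates)
for _ in range(3):
    # The circularity: computations chosen for the current plan update the
    # estimate for that plan, which can flip the plan, which changes which
    # computations were worth running in the first place.
    for topic in worthwhile_computations(plan):
        estimates[plan] = (estimates[plan] + run_computation(topic)) / 2
    plan = best_plan(estimates)
print("settled on:", plan)
```

In the toy version the loop just gets iterated a few times and lands somewhere; the real problem is that “what to think about next” is itself a decision inside the loop, with no guarantee of a clean place to cut it.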
Another kind of example is common knowledge. What people actually do seems to be some sort of “conjecture / leap of faith”: at some point they kinda just assume / act-as-though there is common knowledge. Even in theory, how is this supposed to work for agents of comparable complexity* to each other? Notably, Löbian handshake stuff doesn’t, AFAICT, especially look like it separates predictions from decisions.
*(Not sure what complexity should mean in this context.)
A basic issue with a lot of deliberate philanthropy is the tension between:
In many domains, many of the biggest gains are likely to come from marginal opportunities, e.g. because they have more value of information, larger upsides, and address more neglected areas (and are therefore plausibly strategically important).
Marginal opportunities are harder to evaluate.
There’s less preexisting understanding on the part of fund allocators.
The people applying would tend to be less tested.
Therefore, the evaluation is easier to game.
The kneejerk solution I’d propose is “proof of novel work”. If you want funding to do X, you should show that you’ve done something to address X that others haven’t done. That could be a detailed insightful write-up (which indicates serious thinking / fact-finding); that could be some work you did on the side, which isn’t necessarily conceptually novel but is useful work on X that others were not doing; etc.
I assume that this is an obvious / not new idea, so I’m curious where it doesn’t work. Also curious what else has been tried. (E.g. many organizations do “don’t apply, we only give to {our friends, people we find through our own searches, people who are already getting funding, …}”.)