There are many questions where verification is no easier than generation, e.g. “Is this chess move best?” is no easier than “What’s the best chess move?” Both are EXPTIME-complete (for generalized n×n chess).
Philosophy might have a similar complexity to “What’s the best chess move?”, i.e. “What argument X is such that for all counterarguments X1 there exists a counter-counterargument X2 such that for all counter-counter-counterarguments X3...”, i.e. you explore the game tree of philosophical discourse.
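For concreteness, here’s a minimal sketch of that alternating-quantifier structure, with a made-up toy discourse tree (the tree, the node names, and the function are all illustrative assumptions, not anyone’s actual proposal): an argument “wins” iff every counterargument to it has at least one rebuttal that itself wins.

```python
# Toy rendering of the "for all counterarguments, there exists a rebuttal" game tree.
# `counters_of` maps each argument to the list of direct replies to it.
# An argument "wins" iff every counterargument to it has some rebuttal that
# itself wins; an unanswered argument wins by default (vacuous `all`).

def argument_wins(argument, counters_of):
    counterarguments = counters_of(argument)
    return all(
        any(argument_wins(rebuttal, counters_of)
            for rebuttal in counters_of(counter))
        for counter in counterarguments
    )

# Hypothetical discourse tree: X has two counterarguments; X1b's only
# rebuttal X2b is itself refuted by the unanswered X3, so X loses.
toy_tree = {
    "X":   ["X1a", "X1b"],
    "X1a": ["X2a"],
    "X1b": ["X2b"],
    "X2b": ["X3"],
}
print(argument_wins("X", lambda node: toy_tree.get(node, [])))  # False
```

This is just minimax over an AND-OR tree, which is the sense in which “verify this is the best move” and “find the best move” collapse into the same exhaustive search.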
I’m not convinced by this response (incidentally, here I’ve found a LW post making a similar claim). If your only justification for “is move X best” is “because I’ve tried all the others”, that doesn’t exactly seem like usefully accumulated knowledge. You can’t generalize from it, for one thing.
And for philosophy, if we’re still only on the level of endless arguments and counterarguments, that doesn’t seem like useful philosophical progress at all, certainly not something a human or AI should use as a basis for further deductions or decisions.
What’s an example of useful existing knowledge we’ve accumulated that we can’t in retrospect verify far more easily than we acquired it?