An AI answer to a philosophical question raises a possible problem we haven’t had to face before: what if we’re too dumb to understand it? [...] What if AI comes up with a conclusion for which even the smartest human can’t understand the arguments or experiments or whatever new method the AI developed? If other AIs agree with the conclusion, I think we will have no choice but to go along. But that would mark the end of philosophy as a human activity.
One caveat here is that, in pretty much any field, verifying that an answer is correct should be far easier than coming up with that answer in the first place, so in principle that still leaves a lot of room for AI progress that humans can understand and check. It doesn’t necessarily leave a lot of time, though, if that kind of progress requires a superhuman AI to begin with.
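The intuition behind that caveat is essentially the generate-versus-verify gap familiar from complexity theory. Here’s a toy Python sketch of my own (subset-sum is just a stand-in, nothing specific to philosophy or chess): finding a subset that hits a target can take exponential brute force, while checking a proposed subset is a single linear pass.

```python
from itertools import combinations
from collections import Counter

def find_subset(nums, target):
    """Generation: brute-force every subset until one sums to target (exponential in len(nums))."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset sums to target

def verify_subset(nums, target, candidate):
    """Verification: check that candidate is drawn from nums and sums to target (linear time)."""
    return not (Counter(candidate) - Counter(nums)) and sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
answer = find_subset(nums, 9)                   # slow part: searching the space of subsets
print(answer, verify_subset(nums, 9, answer))   # fast part: re-checking the proposed answer
```

That’s the intuition, anyway.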
I’m not convinced by this response (incidentally, here is an LW post making a similar claim). If your only justification for “move X is the best” is “because I’ve tried all the others”, that doesn’t exactly seem like usefully accumulated knowledge. You can’t generalize from it, for one thing.
And for philosophy, if we’re still only at the level of endless arguments and counterarguments, that doesn’t seem like useful philosophical progress at all, and certainly not something a human or an AI should use as a basis for further deductions or decisions.
What’s an example of useful knowledge we’ve already accumulated that we can’t, in retrospect, verify far more easily than we originally acquired it?