I think our positions on this are pretty close, but I may put a bit more weight on other “plausible stories” for solving metaphilosophy relative to your “plausible story”. (I’m not sure if overall I’m more or less optimistic than you are.)
If we imagine being in a state where we believe running computation X would solve hard philosophical problem Y, then it would seem that we already have a great deal of philosophical knowledge about Y, or a more general class of problems that includes Y.
It seems quite possible that understanding the general class of problems that includes Y is easier than understanding Y itself, and that allows us to find a computation X that would solve Y without much understanding of Y itself. As an analogy, suppose Y is some complex decision problem that we have little understanding of, and X is an AI that is programmed with a good decision theory.
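To make the analogy concrete, here is a toy sketch (my own illustration, not anything proposed in the thread; it assumes, purely for simplicity, that "a good decision theory" is plain expected-utility maximization): the solver encodes an understanding of the general class of decision problems, while the specific problem Y is handed to it as data, without anyone first developing a detailed understanding of Y itself.

```python
# Toy sketch (assumption: "a good decision theory" = plain expected-utility
# maximization; real proposals are far more subtle). The point is only that
# the solver is written once for the general class of decision problems,
# while the specific problem Y is supplied as data the solver never needs
# to "understand" in any deeper sense.

def solve_decision_problem(actions, outcomes, probability, utility):
    """Return the action with the highest expected utility."""
    def expected_utility(action):
        return sum(probability(outcome, action) * utility(outcome, action)
                   for outcome in outcomes)
    return max(actions, key=expected_utility)

# A particular (hypothetical) problem Y, plugged in as data:
actions = ["take umbrella", "leave umbrella"]
outcomes = ["rain", "sun"]
probability = lambda o, a: {"rain": 0.3, "sun": 0.7}[o]
utility = lambda o, a: {("rain", "take umbrella"): 2,
                        ("rain", "leave umbrella"): -10,
                        ("sun", "take umbrella"): 3,
                        ("sun", "leave umbrella"): 5}[(o, a)]

print(solve_decision_problem(actions, outcomes, probability, utility))
# -> "take umbrella" (expected utility 2.7 vs. 0.5)
```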
More generally, we could look at the historical difficulty of solving a problem vs. the difficulty of automating it. For example: the difficulty of walking vs. the difficulty of programming a robot to walk;
This does not seem like a very strong argument for your position. My suggestion in the OP is that humans already know the equivalent of “walking” (i.e., doing philosophy), we’re just doing it very slowly. Given this, your analogies don’t seem very conclusive about the difficulty of solving metaphilosophy or whether we have to make a bunch more progress on object-level philosophical problems before we can solve metaphilosophy.
Creating an AI to solve hard philosophical problems is like passing a hot potato from the right hand to the left.
For example, suppose I want to solve the problem of qualia. I can't solve it myself, but maybe I can create a superintelligent AI that will help me solve it? Now I start working on the AI, and soon I encounter the control problem. Trying to solve the control problem, I have to specify the nature of human values, and soon I find I need to say something about the existence and nature of qualia. Now the circle is complete: I have the same problem of qualia, but packed inside the control problem. And if I make some assumption about what qualia should be, it will probably affect the AI's final answer.
However, I could still use some forms of AI to work on the qualia problem: using Google search, I could quickly find all the relevant articles, identify the most cited and the newest, and maybe create an argument map. This is where Drexler's CAIS may help.
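As a rough sketch of that kind of narrow, tool-like assistance (the article records and fields below are hypothetical placeholders, not a real search API):

```python
# Hedged sketch: narrow, CAIS-style literature triage of the sort described
# above. The article records are hypothetical placeholders; a real service
# would pull them from a search engine or citation database.

articles = [
    {"title": "Classic qualia paper",   "year": 1995, "citations": 4200},
    {"title": "Recent survey",          "year": 2023, "citations": 40},
    {"title": "Mid-range contribution", "year": 2012, "citations": 800},
]

most_cited = sorted(articles, key=lambda a: a["citations"], reverse=True)
newest = sorted(articles, key=lambda a: a["year"], reverse=True)

print("Most cited:", [a["title"] for a in most_cited])
print("Newest:   ", [a["title"] for a in newest])
```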
Maybe one AI philosophy service could work like this: it would ask you a bunch of other questions that are simpler than the problem of qualia, then show you what those answers imply about the problem of qualia under some method of reconciling them.
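A minimal sketch of that outer loop, assuming (only for illustration) that "reconciling" means a naive weighted vote over what each answer is taken to imply; the sub-questions, implied positions, and weights are hypothetical placeholders, not real philosophy:

```python
# Hedged sketch of the outer loop of such an "AI philosophy service":
# ask simpler sub-questions, then report what the answers jointly imply
# under a (here: naive weighted-vote) reconciliation method.

from collections import defaultdict

# Each sub-question maps a possible answer to an (implied position, weight) pair.
SUB_QUESTIONS = {
    "Could a physical duplicate of you lack experience?": {
        "yes": ("qualia are non-physical", 1.0),
        "no":  ("qualia are physical/functional", 1.0),
    },
    "Would a perfect functional copy of your brain in silicon feel pain?": {
        "yes": ("qualia are physical/functional", 0.8),
        "no":  ("qualia are non-physical", 0.8),
    },
}

def reconcile(answers):
    """Naively reconcile answers by summing weights per implied position."""
    scores = defaultdict(float)
    for question, answer in answers.items():
        position, weight = SUB_QUESTIONS[question][answer]
        scores[position] += weight
    return dict(scores)

def run_service():
    answers = {}
    for question in SUB_QUESTIONS:
        answers[question] = input(question + " (yes/no) ").strip().lower()
    print("Under this (naive) reconciliation method, your answers lean toward:")
    for position, score in sorted(reconcile(answers).items(), key=lambda kv: -kv[1]):
        print(f"  {position}: {score:.1f}")

if __name__ == "__main__":
    run_service()
```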
In fact, when I use Google Scholar to find new articles about, e.g., qualia, I am already using narrow AI to advance my understanding. So AI could be useful in thinking about philosophical problems. What I am afraid of is an AI making decisions based on incomprehensible AI-created philosophy.