Creating AI to solve hard philosophical problems is like passing a hot potato from the right hand to the left.
For example, I want to solve the problem of qualia. I can't solve it myself, but maybe I can create a superintelligent AI that will help me solve it? So I start working on AI, and soon I run into the control problem. To solve the control problem, I have to specify the nature of human values, and soon I find I need to say something about the existence and nature of qualia. Now the circle is complete: I face the same problem of qualia, just packed inside the control problem. And if I make some assumptions about what qualia must be, those assumptions will probably shape the AI's final answer.
However, I could still use some forms of AI to work on the qualia problem: with something like Google search, I could quickly find all the relevant articles, identify the most cited and the newest ones, and maybe build an argument map. This is where Drexler's CAIS (Comprehensive AI Services) may help; a rough sketch of that workflow is below.
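To make the "narrow services" idea concrete, here is a minimal sketch of the literature-triage step. The `search_papers` function is a hypothetical stand-in for whatever scholarly search backend one has access to (it is not a real API); the point is only the ranking logic that turns raw search results into "most cited" and "newest" reading lists.

```python
# Toy sketch of the narrow-AI literature-triage workflow described above.
# `search_papers` is a hypothetical placeholder, not a real library call.
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    year: int
    citations: int


def search_papers(query: str) -> list[Paper]:
    """Hypothetical narrow-AI search service; returns candidate papers."""
    raise NotImplementedError("plug in a real literature-search backend here")


def triage(query: str, top_n: int = 10) -> dict[str, list[Paper]]:
    """Rank search results two ways to seed an argument map."""
    papers = search_papers(query)
    return {
        "most_cited": sorted(papers, key=lambda p: p.citations, reverse=True)[:top_n],
        "newest": sorted(papers, key=lambda p: p.year, reverse=True)[:top_n],
    }


# triage("qualia") would return two short reading lists to start from.
```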
Maybe one AI philosophy service could look like this: it would ask you a bunch of questions that are simpler than the problem of qualia, then show you what your answers imply about the problem of qualia under some explicit method of reconciling those answers (a toy model of this is sketched below).
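Here is a toy model of that hypothetical service: it records answers to simpler sub-questions together with confidences, and applies an explicit reconciliation rule (here, a confidence-weighted tally) to show which positions on qualia those answers jointly favor. The sub-questions, the implications, and the weighting rule are all illustrative assumptions, not a claim about how such a service would actually work.

```python
# Toy model of the hypothetical AI philosophy service.
# Each entry: (sub-question, your answer, your confidence in [0, 1],
#              the position that a "yes" answer is taken to support).
answers = [
    ("Is a perfect functional duplicate of you conscious?", "yes", 0.7, "functionalism"),
    ("Could there be a physically identical zombie world?", "no", 0.6, "physicalism"),
    ("Is Mary surprised when she first sees red?", "yes", 0.5, "physical knowledge is incomplete"),
]


def reconcile(answers):
    """Confidence-weighted tally: 'yes' supports the implied position, 'no' counts against it."""
    scores = {}
    for _question, answer, confidence, implication in answers:
        weight = confidence if answer == "yes" else -confidence
        scores[implication] = scores.get(implication, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


print(reconcile(answers))  # shows which positions your simpler answers favor
```

The value of such a service would be that the reconciliation rule is explicit and inspectable, so the assumptions stay yours rather than the AI's.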
In fact, when I use Google Scholar to find new articles about, say, qualia, I am already using narrow AI to advance my understanding. So AI can clearly be useful in thinking about philosophical problems. What I am afraid of is an AI making decisions based on incomprehensible, AI-created philosophy.