Basically the only reason to do it would be time pressure… Do you agree with this?
In some sense I agree. If there were no time pressure, then we would want to proceed in only the very safest way possible, which would not involve AI at all. My best guess would be to do a lot of philosophical and strategic thinking as unmodified and essentially unaided humans, perhaps for a very, very long time. After that you might decide on a single, maximally inoffensive computational aid, and then repeat. But this seems like quite an alien scenario!
I am not sold that in milder cases you would be much better off with, e.g., a normative AI than with black box designs. Why would it be less error prone? A normative AI must perform well across a wide range of unanticipated environments, to a much greater extent than a black box design, and with more clearly catastrophic consequences for failure. It seems like you would want to do something that remains under the control of something as close to a human as possible, for as long as possible.
In some sense the black box approach is clearly more dangerous (ignoring time limits), since it doesn't really get you closer to your goal; we will probably have to solve these other problems eventually. The black box metaphilosophical AI is really more like a form of cognitive enhancement. But it seems like enhancement is basically the right thing to do for now, even if the time crunch were quite a bit milder.