Black-Box Metaphilosophical AI is also risky, because it’s hard to test/debug something that you don’t understand.
On the other hand, to the extent that our uncertainty about whether different BBMAI designs do philosophy correctly is independent, we can build multiple ones and see what outputs they agree on. (Or a design could do this internally, achieving the same effect.)
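To make the agreement check concrete, here is a minimal Python sketch. It assumes each BBMAI design can be wrapped as a callable from a question to an answer; the `agreed_answer` name, the `designs` argument, and the 2/3 threshold are illustrative assumptions, not anything the proposal itself specifies.

```python
# A minimal sketch of the agreement-check idea, assuming each design is a
# callable that maps a philosophical question to an answer string.
from collections import Counter
from typing import Callable, Optional, Sequence

def agreed_answer(
    designs: Sequence[Callable[[str], str]],
    question: str,
    threshold: float = 2 / 3,  # hypothetical supermajority cutoff
) -> Optional[str]:
    """Return an answer only if enough independent designs converge on it."""
    answers = Counter(design(question) for design in designs)
    answer, count = answers.most_common(1)[0]
    # Disagreement is treated as evidence that at least one design is doing
    # philosophy incorrectly, so we abstain rather than act on the output.
    if count / len(designs) >= threshold:
        return answer
    return None
```

If the designs' failure modes really are independent, requiring agreement trades some usefulness (more abstentions) for a lower chance of acting on any single design's philosophical mistake.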
It’s also unclear why such an AI won’t cause disaster in the time period before it achieves philosophical competence.

This seems to be an argument for building a hybrid of what you call metaphilosophical and normative AIs, where the normative part “only” needs to be reliable enough to prevent initial disaster, and the metaphilosophical part can take over afterward.
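A minimal sketch of that hybrid, under heavy assumptions: the two components are reduced to plain callables, and the notion that we can estimate the metaphilosophical part's competence (the `competence` callable and the 0.99 threshold below) is itself an open problem rather than anything the argument supplies.

```python
# A minimal sketch of the hybrid idea. All attribute names are hypothetical
# stand-ins for interfaces the normative and metaphilosophical parts would expose.
from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridAgent:
    propose: Callable[[str], str]     # metaphilosophical part: proposed action
    permits: Callable[[str], bool]    # normative part: safety check on a proposal
    fallback: Callable[[str], str]    # normative part: conservative default action
    competence: Callable[[], float]   # estimate of philosophical competence (assumed)
    threshold: float = 0.99

    def choose_action(self, situation: str) -> str:
        proposal = self.propose(situation)
        # Until the metaphilosophical part is judged competent, the normative
        # part only needs to be reliable enough to veto initial disasters.
        if self.competence() < self.threshold and not self.permits(proposal):
            return self.fallback(situation)
        return proposal
```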