I’m also interested in this topic, but it feels very hard to make direct progress. It seems to require solving a lot of philosophy, which has as its subject matter the entire universe and how we know about it. Solving metaphilosophy in a really satisfying way thus seems to almost require rationally apprehending all of existence and our place within it, which seems really hard, maybe even fundamentally impossible. (Perhaps there are ways of making progress in metaphilosophy without solving most of philosophy first, but finding such ways also seems hard.)
That said, I do have some more indirect/outside-view theories which make me think we could obtain a good future even if we can’t directly solve metaphilosophy before getting AGI. I think we can see philosophy as the process by which “messy”/incoherent agents, which arose from evolution or other non-agentic means, become more coherent and unified-agent-like. On this view, obtaining an AI that can do philosophy would not consist of hardcoding a ‘philosophy algorithm’, but of creating base agents which are messy in ways similar to us, who would then hopefully resolve that messiness and become more coherent in similar ways (and thereafter make use of the universe in approximately as good a way as we would have). A whole brain emulation would obviously qualify, but I think it’s also plausible that we could develop a decent high-level understanding of how human brain algorithms work and create AIs that are similar enough for philosophical purposes without literally scanning people’s brains (which seems like it would take too long relative to AI timelines). For this reason and others, I think creating sufficiently human-like AI is a promising route to a good future (though this topic also seems curiously neglected).
As to why few other people are trying to solve metaphilosophy: I think there are just very few people with the temperament to become interested in such things, and the few who do tend to decide that some other topic has a better combination of importance, neglectedness, tractability, and personal fit to invest major effort in.