> we use [philosophy] to solve science as a methodological problem (philosophy of science)
That was true when Popper actually did that in the 1930s. But I think the Popperian “philosophy of science” (i.e. hypothesis generation, falsifiability, and paradigm shifts) is now just “obvious strategy implications from the theory of approximate Bayesian reasoning” (a theory that was already being worked out in the 1930s, but wasn’t mature until about the 1950s). So IMO it has since become a matter of mathematics/logic (and, since the rise of AI as a field of engineering, also of engineering), and I see science as now resting on a stronger basis, from a Naturalism point of view, than philosophy was able to provide.
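To make the reduction concrete, here is a minimal sketch (my own illustration, not from the comment above) of how Popperian falsification falls out of Bayesian updating as a limiting case: a hypothesis that assigned (near-)zero likelihood to an observed outcome has its posterior crushed toward zero, which is just “falsification” in probabilistic dress. The hypothesis names and numbers are invented for the example.

```python
# Sketch: falsification as a limiting case of Bayes' rule.
# Each hypothesis assigns a likelihood P(E|H) to the observed evidence E;
# a hypothesis that deemed E (nearly) impossible ends up with a posterior
# near zero after the update.

def bayes_update(priors, likelihoods):
    """Return posteriors P(H|E) proportional to P(E|H) * P(H), normalized."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two rival hypotheses, equally credible a priori (illustrative values).
priors = {"H1": 0.5, "H2": 0.5}

# An experiment observes outcome E. H1 said E was all but impossible
# (a "falsifying" result for H1); H2 predicted it confidently.
likelihoods = {"H1": 0.001, "H2": 0.9}

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # H1's posterior collapses to ~0.001; H2 dominates
```

Strict Popperian falsification corresponds to the edge case `P(E|H) = 0`, where the posterior is exactly zero; with merely tiny likelihoods you get the softer, approximate-Bayesian version of the same strategic advice.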
In general, I’m a lot more optimistic about AI-assisted science and mathematics than about AI-assisted metaphilosophy. Partly this is because in some areas, such as ethics, there are reasons (e.g. from Evolutionary Moral Psychology) to think that human moral intuitions actually track successful adaptive strategies for the co-evolution of cooperation in positive-sum games, and I’m less clear why an AI would necessarily have comparably useful intuitions.