The problems with logical positivism seem to me… kinda important philosophically, but less so in practice.
Yes, most of the time they don’t matter, but then sometimes they do! I think the wrongness of logical positivism matters a lot in particular if you’re trying to solve a problem like proving that an AI is aligned with human flourishing. There’s a specific, technical answer you want to guarantee, but it requires formalizing a lot of concepts that normally squeak by because all the formal work is being done by humans who share assumptions. When you need the AI to share those assumptions, things get dicier.