So I’ve been thinking about how to assign probabilities to true/false assignments over claims in the context of a probabilistic argument mapping program. Inevitably I’ve run into the liar paradox and a host of related headaches. I have some tentative ideas for addressing these: roughly, allow sentences in the language to refer to the probabilities of truth assignments, then replace those probabilities with conditional probabilities to guarantee existence, and use entropy maximization to (hopefully) get uniqueness. Before I write this all up, though, I want to check what the current state of probabilistic logic on LessWrong is. When I search I mostly find material like http://intelligence.org/files/DefinabilityTruthDraft.pdf or, more recently, https://www.lesswrong.com/posts/KbCHcb8yyjAMFAAPJ/when-wishful-thinking-works. Are these kinds of texts the current forefront of this topic that I should put my post in conversation with? If not, what is? Thanks!
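For concreteness, here’s a toy sketch (the setup and function names are mine, just for illustration, not taken from the linked papers or my actual formalism) of the fixed-point move that motivates assigning probabilistic rather than binary truth values to self-referential sentences:

```python
# Toy example: a liar-like sentence L whose truth condition is "L is not true".
# Classically, t = 1 - t has no solution in {0, 1}; but if we let truth values
# be probabilities, p = 1 - p has the unique fixed point p = 0.5.

def liar_update(p: float) -> float:
    """Probability that L is true, given the current estimate p that L is true."""
    return 1.0 - p

def find_fixed_point(update, p0=0.9, damping=0.5, tol=1e-12, max_iter=10_000):
    """Damped fixed-point iteration: p <- (1 - damping) * p + damping * update(p).

    Undamped iteration on liar_update would oscillate between p0 and 1 - p0;
    damping makes it converge to the fixed point.
    """
    p = p0
    for _ in range(max_iter):
        nxt = (1 - damping) * p + damping * update(p)
        if abs(nxt - p) < tol:
            return nxt
        p = nxt
    return p

print(find_fixed_point(liar_update))  # 0.5
```

My hope is that the conditional-probability and entropy-maximization machinery plays an analogous role in the general case: conditioning guarantees that some consistent assignment exists, and entropy maximization picks out a canonical one when the fixed point isn’t unique.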
I’m trying to follow up on the formalism I described in my first post while incorporating the suggestions in the comments.