I find that Less Wrong is a conflation of about six topics:
Singularitarianism (e.g. discussion of SIAI, world-saving)
Topics in AI (e.g. decision theory, machine learning)
Topics in philosophy (e.g. metaethics, anthropic principle)
Epistemic rationality (e.g. dissolving questions, training mental skills, cognitive biases)
Applied rationality (e.g. how to avoid akrasia, how to acquire skills efficiently)
Rationality community (e.g. meetups, exchanging knowledge)
These don’t all seem to fit together entirely comfortably. Ideally, I’d split them into three more coherent sections (singularitarianism and AI, philosophy and epistemic rationality, and applied rationality and community), each of which could probably be more effective as its own space.
Since being introduced to Less Wrong, and to the idea that ‘truth’ is a property of beliefs measuring how accurately they let you predict the world, I’ve separated ‘validity’ from ‘truth’.
The syllogism “All cups are green; Socrates is a cup; therefore Socrates is green” is valid within the standard system of logic, but its premises are false, so it tells us nothing about the world. The reason we view logic as more than a curiosity is that valid reasoning from true premises yields true conclusions. Logic is useful because, fed true beliefs, it produces more true beliefs.
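As a minimal sketch of this point in Lean 4 (with `Cup`, `Green`, and `socrates` as hypothetical names introduced only for illustration), the syllogism can be proved from its premises alone; the proof never consults the world to check whether cups are actually green, which is exactly why validity and truth come apart:

```lean
-- Hypothetical predicates and a hypothetical individual; nothing here
-- asserts anything about the actual world.
theorem syllogism_valid {Thing : Type} (Cup Green : Thing → Prop)
    (socrates : Thing)
    (all_cups_green : ∀ x, Cup x → Green x)
    (socrates_is_cup : Cup socrates) :
    Green socrates :=
  -- The conclusion follows purely from the form of the premises.
  all_cups_green socrates socrates_is_cup
```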
Some mathematical statements follow from the rules of mathematics; we call them valid, and they would be just as valid in any other universe. Math as a system is useful because (in our universe) we can use mathematical models to arrive at predictively accurate conclusions.
Bringing ‘truth’ into it is just confusing.