I had this dialogue with Claude Opus 4.5 on vibestemics and your vision of epistemics as a whole. As far as I understand it, vibestemics is supposed to stitch together the benefits of two approaches, each of which has a characteristic failure mode:
- The scientific/rationalist ways of reasoning leave large swaths of the world unmodeled. A purely Bayesian agent could spend OOMs more compute and model even these swaths from flimsy evidence, but in practice it usually doesn't bother.
- Reasoning based on harder-to-formalize things is more error-prone, and those errors can catastrophically derail the agent.
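The first point can be made concrete with a toy sketch (my own illustration, not from the discussion): a brute-force Bayesian agent updating on one piece of flimsy evidence. The likelihoods barely differ across hypotheses, so the posterior hardly moves, yet the cost of the update still scales linearly with how many hypotheses the agent bothers to model.

```python
def bayes_update(priors, likelihoods):
    """Return normalized posteriors: P(h|e) proportional to P(e|h) * P(h)."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Modeling "large swaths" of the world costs compute proportional to n.
n = 1_000_000
priors = [1.0 / n] * n  # uniform prior over n hypotheses

# Flimsy evidence: likelihoods differ by at most 1% across hypotheses.
likelihoods = [0.50 + 0.005 * (i / (n - 1)) for i in range(n)]

posteriors = bayes_update(priors, likelihoods)

# The most-favored hypothesis gains almost nothing over its prior:
print(max(posteriors) / (1.0 / n))  # ratio just above 1
```

The point of the sketch is the asymmetry: the agent pays for every hypothesis it models, while flimsy evidence buys it almost no discrimination between them, so leaving those swaths unmodeled is usually the economical choice.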
I suspect that you meant something like attempts to reinforce the vibes-based parts of the world model with rationality, by questioning hard-to-believe results (think of Yudkowsky's derision of empiricists who would end up believing that a Ponzi scheme does produce revenue; here sceptics would point out that the revenue has to come from somewhere, and that the Ponzi scheme offers no explanation of where), or outright heuristics protecting against adversarial attacks (e.g. a Russian sociologist claimed[1] that it is the rational mind which has to be reinforced by tradition; however, such reinforcement could be achieved by a different heuristic).
Ironically, the sociologist also used a Ponzi-like scheme as an example.
Perhaps as a point of terminology: I'd say vibestemics is itself about the fact that your epistemics, whatever they are, are grounded in vibes (via care). However, this is tangled up with a consequence: to believe that this core vibestemic claim is true is to automatically imply that there is no one right epistemic process, but rather epistemic processes that are instrumentally useful depending on what you care about doing (hence the contingency on care).
The specific vibe of post-rationality, as I would frame it, is to value completeness over consistency, whereas traditional rationality makes the opposite choice (and pre-rationality doesn't even try to value either, except that it will hallucinate its way to both if pressed).