More specifically, my position is anti-reductionist, and rationalist-empiricist-reductionists dismiss anti-reductionists as cranks. As long as you are trying to assess whether I am one, and will dismiss me if you conclude I am, it is a waste of time for me to try to communicate my position to you.
The thing is, just from the conclusions it won't be obvious that the meta-level theory is better. The improvement can primarily be understood in terms of the virtues of the meta-level theory itself.
But that would probe the power of the arguments, whereas really I'm trying to probe the obviousness of the claims.
I can think of reasons why you'd like to know which theories would be smart to make using this framework, e.g. so you can make those theories instead of bothering to learn the framework. However, that's not a reason it would be good for me to share them with you, since I think that'd just distract you from the point of my theory.
This may well be true (though I think not), but what is your argument for not even linking to your original posts?
I don’t know of anyone who seems to have understood the original posts, so I kinda doubt people can understand the point of them. Plus often what I’m writing about is a couple of steps removed from the original posts.
Or how often you don’t explain yourself even in completely unrelated subjects?
Part of the probing is to see which of the claims I make will seem obviously true and which of them will just seem senseless.
Why?
It’s mainly good for deciding what phenomena to make narrow theories about.
For me though, what would get me much more on board with your thoughts is actual examples of you using these ideas to model things nobody else can model (mathematically!) in as broad a spectrum of fields as you claim. That, or a much more compact & streamlined argument.
I think this is the crux. To me, after understanding these ideas, it's retroactively obvious that they are modelling all sorts of phenomena. My best guess is that the reason you don't see it is that you don't see the phenomena that are failing to be modelled by conventional methods (or at least don't understand how those phenomena relate to the bird's-eye perspective), so you don't realize what new thing is missing. And I can't easily cure this kind of cluelessness with examples, because my theories aren't necessary if you just consider a single very narrow and homogeneous phenomenon, since then you can just make a purpose-built theory for it.
The details are in the book. I’m mainly writing the OP to inform clueless progressives who might’ve dismissed Ayn Rand for being a right-wing misogynist that despite this they might still find her book insightful.
Recently I've been starting to think things could go in many ways other than my predictions above suggest. So it's probably safer to say that the futurist/rationalist predictions are all wrong than that any particular prediction I can make is right.
I’m still mostly optimistic though.
The thing about slop effects is that my updates (which I've attempted to describe, e.g. here: https://www.lesswrong.com/s/gEvTvhr8hNRrdHC62 ) make huge fractions of LessWrong look like slop to me. Some of the increase in vagueposting is basically lazy probing for whether rationalists will get the problem if it is framed in different ways than the original longform.
I don’t think RL or other AI-centered agency constructions will ever become very agentic.
I mean basically all the conventionally conceived dangers.
I mostly don’t believe in AI x-risk anymore, but the few AI x-risks that I still consider plausible are increased by broadcasting why I don’t believe in AI x-risk, so I don’t feel like explaining myself.
Ayn Rand’s book “The Fountainhead” is an accidental deconstruction of patriarchy that shows how it is fractally terrible.
We don't know ahead of time in what qualitative way people will later make impactful posts, so this can't actually focus on rewarding the posts that would naturally be impactful. Instead it will encourage people to assume that others have good reasons for their judgement even if they can't figure out what those reasons are.
The clearest example anyone has given of futarchy is markets to replace the CEO if doing so leads to greater profit for the company. But I'd rather have the CEO replaced if doing so leads to a greater probability that they'll give LessWrong all of the money that would otherwise be profits. These two options seem anti-correlated, so it's not clear why we should support futarchy (unless futarchy advocates start advocating for markets that help direct money to LessWrong).
Point is they’re still LLMs.
This feels like a post that was made with a significant amount of AI...
You praise someone who wants to do agent-based models, but agent-based models are a reductionistic approach to the field of complexity science, so this sure seems to prove my point. (I mean, approximately all of the non-reductionistic approaches to the field of complexity science are bad too.)