I really like the definition of rationalist from https://www.lesswrong.com/posts/2Ee5DPBxowTTXZ6zf/rationalists-post-rationalists-and-rationalist-adjacents :
“A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions.”
I recently started saying that I really love Effective Curiosity:
Maximizing the total understanding of reality by building models of as many physical phenomena as possible across as many scales of the universe as possible, that are as comprehensive, unified, simple, and empirically predictive as possible.
And I see it more as a direction. And I see it from a more collective intelligence perspective. I think modelling the whole world in a fully unified way and with total accuracy is impossible, even with all of our science and all of our technology, because we're all finite, limited agents with limited computational resources and time, limited modelling capability, and we get stuck in various models, from various perspectives, and so on. And all we have are approximations that predict certain parts of reality to a certain degree, but never all of reality with perfect accuracy in all its complexity. And we have a lot of blind spots. All models are wrong, but some predictively approximate the extremely nuanced complexity of reality better than others.
And out of all of this, intelligence and fundamental physics, which are subsets of it, are the most fascinating to me.
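Here is a minimal toy sketch of the "all models are wrong, but some predictively approximate reality better than others" point, purely my own illustration rather than anything from the quoted post: two deliberately wrong models of a noisy process start with equal prior credence and get weighted by how well they predict held-out data.

```python
# Toy sketch: compare two imperfect models of the same noisy process by how
# well they predict held-out data, turning equal priors into posterior weights.
# (Illustrative only; the numbers and models are made up for this example.)

import numpy as np

rng = np.random.default_rng(0)

# "Reality": a noisy quadratic process that neither candidate model matches exactly.
x = np.linspace(0, 2, 80)
y = 3.0 * x**2 + rng.normal(scale=0.5, size=x.size)
x_fit, y_fit = x[::2], y[::2]    # data used to build each model
x_new, y_new = x[1::2], y[1::2]  # data used to test its predictions

def log_likelihood(y_pred, y_obs, sigma=0.5):
    """Gaussian log-likelihood of the observations under a model's predictions."""
    return -0.5 * np.sum(((y_obs - y_pred) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

# Two deliberately wrong models: a straight line and a cubic polynomial.
models = {
    "linear": np.poly1d(np.polyfit(x_fit, y_fit, 1)),
    "cubic": np.poly1d(np.polyfit(x_fit, y_fit, 3)),
}

# Equal prior credence in each model, updated by predictive performance on new data.
log_post = {name: log_likelihood(m(x_new), y_new) for name, m in models.items()}
shift = max(log_post.values())  # subtract the max for numerical stability
weights = {name: np.exp(lp - shift) for name, lp in log_post.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: posterior weight ≈ {w / total:.3f}")
```

Both models are approximations, but the data pushes nearly all of the weight onto the one whose predictions track reality more closely, which is the whole game: never a perfect model, just better and worse approximations.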