Hello! I’m fia, I found this place through a Substack blog.[1]
I am new to LW. I am here because I’ve realized that rationality and reasoning have been prevalent through about 75% of my life, and I want to understand them more thoroughly while engaging with like-minded people.
I study medicine and have been growing gradually disillusioned about the future of medical practice: most of our treatments merely manage patients over a chronic timescale, and the more curative approaches are wealth-gated. I am, however, hopeful about the advances we are making in medical research.
As it is one of the main foci of this forum, I will mention I am interested in learning about the internal workings of ML/AI models, particularly regarding alignment.[2]
I am currently in a situation where time is scarce to allocate due to branching interests, and self-optimization becomes increasingly difficult given limited resources and heavy demands. I welcome any comments from people who have been in similar places, and will be delighted by any advice or questions on any part of my comment, even if I may not possess the experience to answer them in a sufficiently rational way.
I am pleased to meet all of you and hope our conversations will be productive.
To make up for your time, here are a few fun tidbits in the footnotes.[3][4][5]
[1] https://ceselder.substack.com/
[2] Aspects of interest may change.
[3] I fell for the Bayesian mammogram test. (A quick worked version of the arithmetic is sketched after these footnotes.)
[4] I’ve always thought of ML research from a “green elephant in a room” standpoint (https://godescalc.wordpress.com/2012/06/24/overlooked-elephant/), but recently realized that working on these topics and understanding the perceived magic would be much more rewarding.
[5] I am currently reading Harry Potter and the Methods of Rationality, despite certain personal qualms with the setting.
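For anyone who hasn’t run into it, the mammogram test is the classic Bayes’ theorem exercise on which most people (famously including many physicians) vastly overestimate the posterior. A minimal sketch of the arithmetic, assuming the numbers from the standard version of the puzzle (1% base rate, 80% sensitivity, 9.6% false-positive rate):

```python
# Classic mammogram puzzle, standard numbers:
# P(cancer) = 1%, P(positive | cancer) = 80%, P(positive | healthy) = 9.6%.
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# Bayes' theorem: P(cancer | positive) = P(pos | cancer) * P(cancer) / P(pos)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # ~0.078
```

The intuition-breaking step is that the 9.6% false-positive rate applies to the 99% of people without cancer, so false positives swamp true positives and a positive result only raises the probability of cancer to roughly 7.8%.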