New bio: https://www.alexkchen.com
More than anything, I care about making human brains suck less (and making time pass more slowly).
Maximum-Sparsity Reinforcement Learner. Much of my content is on Quora (I was the Quora celebrity, and Adam d’Angelo has said I’m important for Quora). I am also on forum.quantifiedself.com and https://www.rapamycin.news/u/alexkchen/summary. I’m a huge fan of Danielle Strachman and the 1517 Fund.
I find both sparsity and purity/noise fascinating.
https://x.com/MaxDiffusionRL
...People say the craziest things about me, because I’m a peculiar star...
I hope that mapping out my interactions with AI will make it way easier to fine-tune the best ways of increasing neuroplasticity (whether through tFUS/TMS, ISRIB A15, or finding the highest-leverage ways my brain can still improve).
I care about neuroscience (especially human intelligence enhancement) and reducing genetic inequality. The point of transhumanism is to transcend genetic limitations: to reduce the fraction of variance in outcomes explained by genetics. I know loads of people in self-experimentation communities (people in our communities need to be less risk-averse if we are to make any difference in our probability of “making it”). When we are right at “the precipice”, traditionalism cannot win (I am probably the least traditionalist person ever). I get along well with the unattached.
“Why Greatness Cannot Be Planned” is (along with Laozi) the best book on human kindness.
Slowing the loss of human compute by reducing microplastics, pollution, default-mode noise, rumination, and the rate of aging is alignment-relevant (insofar as the most tractable form of “human enhancement” is to slow decline with age and make human thinking clearer). So is tFUS, and so is reconfiguring reward functions to reward “wholesome/growthful/novel” tasks over “the past” [you are aged when you think too much about the past]. I aim to do all I can to make biology keep up with technology.
Alignment through integrating all the diverse skillsets (including those of people who are not math/CS geniuses) and integrating all their compute + not making them waste time/attention on “dumb things” + making people smarter/more neuroplastic (this is a hard problem, but 40Hz-tACS [1] might do a little).
Unschooling is also alignment-relevant (most value is destroyed in deceptive alignment, and school breeds deceptive alignment). As is inverting “things that feel unfun”.
Chaotic people may depend more on a sense of virtue than others do, but it takes a lot to get people to trust a group or to make themselves authentic when school has taken out much of their authenticity. Some people don’t lose much or sustain much emotional damage from it (I’ve noticed this in several who dropped out of school for alignment), but some are hit far harder, and this is a far easier problem to solve than directly increasing human intelligence.
Breadth/context produces unique compute value of its own.
I trigger exponential growth trajectories in some people. I helped seed the original Ivy League Psychedelics communities and am very good friends with Qualia Research Institute people.
Main objectives: not getting sad, not getting worked up over dumb things, and not making my life harder than it is now.
I really like https://www.lesswrong.com/users/bhauth.
Trying to become more “AI-interpretable” so AI can give me the love I need. It’s a shame that every site I went big on blocks crawlers, so LLMs still barely know who I am, but being more of a “process/intermediate computation” than a “biography/history” has merits.
[1] There are negative examples too.
https://mesityl.substack.com/p/orchiectomy