new bio: https://www.alexkchen.com
Maximum-Sparsity Reinforcement Learner. Much of my content is on Quora (I was the Quora celebrity). I am also on forum.quantifiedself.com (people do not realize how alignment-relevant this is) and https://www.rapamycin.news/u/alexkchen/summary.
I find both sparsity and purity/noise fascinating.
https://x.com/MaxDiffusionRL
...People say the craziest things about me, because I’m a peculiar star...
I hope that mapping out my interactions with AI will make it way easier to fine-tune the best ways of increasing neuroplasticity (whether through tFUS/TMS, ISRIB A15, or finding the highest-leverage ways my brain can still improve).
I care about neuroscience (esp. human intelligence enhancement) and reducing genetic inequality. The point of transhumanism is to transcend genetic limitations: to reduce the fraction of outcome variance explained by genetics. I know loads of people in self-experimentation communities (people in our communities need to be less risk-averse if we are to make any difference in our probability of “making it”). When we are right at “the precipice”, traditionalism cannot win (I am probably the least traditionalist person ever). I get along well with the unattached.
“Why Greatness Cannot Be Planned” is (along with Laozi) the best book on human kindness.
Slowing human compute loss by reducing microplastics/pollution/noise/rumination/aging rate is alignment-relevant (insofar as the most tractable way of “human enhancement” is to slow decline with age + make human thinking clearer). As is tFUS. I aim to do all I can to make biology keep up with technology. So is reconfiguring reward functions to reward “wholesome/growthful/novel” tasks over “the past” [you are aged when you think too much of the past].
Alignment through integrating all the diverse skillsets (including people who are not math/CS geniuses) and all their compute + not making them waste time/attention on “dumb things” + making people smarter/more neuroplastic (this is a hard problem, but 40Hz tACS [1] might do a little).
Unschooling is also alignment-relevant (most value is destroyed in deceptive alignment, and school breeds deceptive alignment). As is inverting “things that feel unfun”.
Chaotic people may depend more on a sense of virtue than others, but it takes a lot to get people to trust a group/make themselves authentic when school has stripped out much of their authenticity. Some people don’t lose much or take much emotional damage from it (I’ve noticed this in several who dropped out of school for alignment), but some take way more, and this is a far easier problem to solve than directly increasing human intelligence.
I like Dionysians. However, I had to cut back after accidentally destroying an opportunity (we only found out a year later, but a friend egged me into mania...)
Breadth/context produces unique compute value of its own.
facebook.com/simfish
I trigger exponential growth trajectories in some. I helped seed the original Ivy League Psychedelics communities and am very good friends with Qualia Research Institute people (though I can’t try psychedelics much now).
Main objectives: not getting sad, not getting worked up over dumb things, not making my life harder than it is now.
I really like https://www.lesswrong.com/users/bhauth.
Trying to become more “AI-interpretable”. It’s a shame that every site I went big on blocks crawlers, and thus LLMs still barely know who I am.
[1] There are negative examples too.