
Alex K. Chen (parrot)

Karma: 181

Extremely neophilic. Much of my content is on Quora (I was the Quora celebrity). I am also on forum.quantifiedself.com (people do not realize how alignment-relevant this is), rapamycin.news/latest, and crsociety.org.

https://linktr.ee/simfish

...People say the craziest things about me, because I’m a peculiar star...

I care about neuroscience (esp. human intelligence enhancement) and reducing genetic inequality. The point of transhumanism is to transcend genetic limitations: to reduce the fraction of variance in outcomes explained by genetics. I know loads of people in self-experimentation communities (people in our communities need to be less risk-averse if we are to make any difference in our probability of “making it”). When we are right at “the precipice”, traditionalism cannot win (I am probably the least traditionalist person ever). I get along well with the unattached.

Slowing human compute loss (by reducing microplastics, pollution, noise, rumination, and the rate of aging) is alignment-relevant, insofar as the most tractable form of “human enhancement” is to slow decline with age and make human thinking clearer. So is tFUS. I aim to do all I can to make biology keep up with technology. Also: reconfiguring reward functions to reward “wholesome/growthful/novel” tasks over “the past” [you are aged when you think too much about the past].

Alignment through integrating all the diverse skillsets (including those of people who are not math/CS geniuses), integrating all their compute, not making them waste time/attention on “dumb things”, and making people smarter/more neuroplastic (this is a hard problem, but 40Hz-tACS [1] might help a little).

Unschooling is also alignment-relevant (most value is destroyed in deceptive alignment, and school breeds deceptive alignment). As is inverting “things that feel unfun”.

Chaotic people may depend more on a sense of virtue than others do, but it takes a lot to get people to trust a group of people/make themselves authentic when school has taken out much of their authenticity. Some people don’t lose much or sustain much emotional damage from it (I’ve noticed this in several who dropped out of school for alignment), but some get far more, and this is a much easier problem to solve than directly increasing human intelligence.

I like Dionysians. However, I had to cut back after accidentally destroying an opportunity (a friend had egged me on into being manic...).

Breadth/context produces unique compute value of its own.

https://twitter.com/InquilineKea
facebook.com/simfish

I have a Twitter alt.

I trigger exponential growth trajectories in some people. I helped seed the original Ivy League psychedelics communities and am very good friends with Qualia Research Institute people (though I cannot try psychedelics much now, they do have a lot of experimental energy that can be further reconfigured towards tACS).

Main objectives: not getting sad, not getting worked up over dumb things, not making my life harder than it is now.

I really like https://www.lesswrong.com/users/bhauth. Zvi is smart too: https://www.lesswrong.com/users/zvi

[1] There are negative examples too.

How to make food/water testing cheaper/more scalable? [eg for purity/toxin testing]

23 Mar 2024 5:28 UTC, 9 points, 2 comments