What can people who aren't smart/​technical/​"competent" enough for AI research or AI risk work do to reduce AI risk and maximize AI safety? (Which is most people?)

(i.e., the vast majority of people in the world.) I know that AI researchers can still benefit from those who take on support roles for them. The speed at which top AI researchers need to learn new AI content is just extraordinary, even by the standards of any other academic field.

I've found that people who have "taste/​context" for the most important things are still extraordinarily rare, and that when someone is in the right communities, having taste/​context more than makes up for a 1.5 SD deficit in intelligence (at least in the middle-to-high range).

First of all, it's worth noting the reading streams of AI researchers and AI risk workers (e.g., see whether they follow health nuts like Brad Stanfield or Mike Lustgarten). Fluid intelligence, brain size, memory, and neuroplasticity all decline after one's 20s (though there is very high variation in how fast people's brains shrink with age, especially given how poor the Western diet is), so anything that slows this decline in AI researchers could help. For example, EAG events already serve super-healthy vegan food, so one could hire a personal chef who continually serves researchers the healthiest food possible; several AI safety people at EAG Boston last week commented, "I never could have imagined that vegan food could taste so good." Even high-normal blood sugar levels shrink brain size by age 60 (and SGLT2 inhibitors, newly available on unitedpharmacies.md, do a lot to reduce blood sugar, especially when paired with metformin).

Reducing lead and microplastic pollution is huge (lead exposure has gone down, but microplastics have increased explosively).

There are some newly available interventions (e.g., rapamycin + SGLT2 inhibitors (like empagliflozin/​canagliflozin) + metformin) that can do a lot to reduce the rate of aging and the risk of disease later on, especially since most AI researchers (especially the ones at Anthropic) aren't getting any younger. There are also health coaches, like those at Apeiron Health, who can monitor the biometrics of AI researchers (including EEG data, Kernel data, and Health Nucleus data) to ensure that they stay at "peak performance" throughout their lives. Athletes like Ben Greenfield (bengreenfieldfitness.com) already use such services, so we need to port them to AI researchers.

AI researchers also tend to live in a monoculture, hanging around other academically successful AI researchers rather than empaths who might not be academic geniuses but who have behavioral repertoires and an "ability to get social things" that their fellow AI researchers lack.

What is the value of one extra hour of reduced discomfort for an important AI researcher like Chris Olah? (It matters more during critical intervals of their life than at others.) Or, alternatively, what is the value of 10 additional "productive years" (including years of fluid intelligence) in the healthspan of an AI researcher? In a field like AI, where tools constantly improve and "social wisdom" also accumulates over time, the "net utility" of staying sharp in one's later years may be far higher than it was for the mathematicians and physicists that Dean Simonton studied. The value of extra "high healthspan"/​"high fluid intelligence" years is extremely high whether applied to geniuses or to the general population.
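To make this question concrete, here is a back-of-envelope sketch of how one might frame the value of extra productive years when per-year value compounds (because tools and accumulated wisdom keep improving). All numbers below are hypothetical placeholders for illustration, not estimates from this post.

```python
# Back-of-envelope sketch: value of appending extra productive years to a career.
# All inputs are hypothetical placeholders, not real estimates.

def value_of_extra_years(value_per_year, extra_years, annual_growth):
    """Total value of `extra_years` added to the end of a career, assuming
    each year's value compounds at `annual_growth` (e.g., because tools and
    "social wisdom" keep improving over time)."""
    return sum(value_per_year * (1 + annual_growth) ** t for t in range(extra_years))

# Placeholder inputs: $1M/year of research value, 10 extra years, 5% compounding.
total = value_of_extra_years(1_000_000, 10, 0.05)
print(f"${total:,.0f}")
```

Under these made-up inputs, the 10 extra years are worth noticeably more than 10x a single early year, which is the intuition behind valuing late-career sharpness more in a compounding field.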

I know some empaths who are super-sensitive to people, who would always be there for you if you're feeling uncomfortable or down (and who are sensitive enough that you would NEVER mind having them around all the time). Imagine if they could be there for AI researchers during periods of discomfort or feeling down. AI research already selects for people who are low in neuroticism/​high in emotional stability (though it helps to have people be there for the remaining high-IQ ones with high neuroticism). Maintaining researchers' health and fluid intelligence over time, making sure they're the best version of themselves with as sharp a memory as possible, and helping them manage their time as well as possible is a way to reduce AI risk even if one isn't technical enough for AI work.

Also: BEING THERE FOR GENIUSES IN THEIR EARLY YEARS (there aren't enough of them, and most of them suffer through school they don't need to be in). Reducing the burden that school places on them is already a super-important cause (TKS, for example, has been successful in getting many people to devalue school/​grades and do their own entrepreneurial thing). Making sure that people's early input streams are as clean/​unpolluted as possible (especially from the noise of school) may be a way to create more people with high capacities, even if they don't have genius-level IQs.

There need to be more Danielle Strachmans (one doesn't need a genius-level IQ to be Danielle Strachman; in fact, she admitted to initially feeling imposter syndrome when surrounded by all the Thiel Fellow geniuses she mentored). One person who has Danielle Strachman vibes is https://twitter.com/shriyanevatia. [For those who don't know: Danielle Strachman managed the Thiel Fellowship and mentored Thiel Fellows for its first 3 years, so she had an important influence on many people associated with AI risk.]

Anything that increases the pool of mentors described in https://erikhoel.substack.com/p/why-we-stopped-making-einsteins?s=r would help (one doesn't have to be a genius to mentor geniuses). There aren't many people who can offer this level of intense attention to young geniuses (Laurens Gunnarsen is one of the few), but there could be way, way more.

https://twitter.com/niplav_site is an example of someone who calls themselves a "midwit" but who still has taste.