Possibly look for other skills / career paths, besides math and computer science? Glancing through 80,000 Hours’ list:
- AI governance and policy—I’m guessing that seeking out “policy people” will be a non-starter in Russia, either because it’s dangerous or because there are fewer such people (not whole graduating classes at Harvard, etc, waiting to become the next generation of DC elites).
- AI safety technical research—of course you are already thinking about this via IMO, IOI, etc. Others have mentioned trying to expand to LLM-specific competitions / clubs / etc. Alternatively, consider expanding beyond IMO to more generic super-smart-person competitions, like chess tournaments?
- Biorisk research, strategy, and policy—I’m guessing that tying HPMOR to any kind of biosecurity message would probably be a bad idea in Russia. Although HPMOR does have a very strong anti-death message, which might resonate especially well with medical students who aspire to discover cures for cancer, Alzheimer’s, aging itself, etc. So maybe giving it away to high-achieving medical students (with no biosecurity message attached; rather a general triumph-over-death message) could be somewhat impactful—obviously it’s unrelated to AI, but perhaps this idea is better than giving it away to random libraries.
- Cybersecurity—sounds like you’re already thinking about this.
- Expert in AI hardware—it’s less clear that this field needs HPMOR-pilled rationalists at the helm, and it’s my understanding that Russia’s semiconductor industry is far behind the rest of the world. But idk, maybe there’s something worth doing here?
- China-related AI safety and governance paths—this is policy-related, thus perhaps has the same problems I mentioned earlier about AI governance/policy roles. But it does seem like Russians might have a natural comparative advantage in the field of “influencing how China thinks about AI”, compared to people from countries that China perceives as rivals / enemies. I’m not sure what kind of competitions / scholarships / fellowships / study-abroad programs you could use to target giving the books—you’d be looking for technically-minded, ambitious, high-achieving Russian speakers with ties to China or interest in China, and ideally also an interest in AI—but maybe there’s something. (Go tournaments??)
- Nuclear weapons safety & security—probably a non-starter in Russia for political reasons
Yep, we’ve also been sending the books to winners of national and international olympiads in biology and chemistry.
Sending these books to policy-/foreign policy-related students seems like a bad idea: too many risks involved. In Russia, this is a career path you often choose if you’re not very value-aligned. (For context: according to Russia, there’s an extremist organization called the “international LGBT movement”.)
If you know anyone with an understanding of the context who’d want to find more people to send the books to, let me know. LLM competitions, ML hackathons, etc. all might be good.
Ideally, we’d also want to then alignment-pill these people, but no one has the ball on this.