[Question] Which battles should a young person pick?

This post is based on the common premise of having to ‘pick your battles’. I’m at an impasse in my life and believe this community could offer insights for reflection. I’m particularly interested in perspectives on my own situation, though I hope the discussion provides value for others with similar problems. In general, the question can crudely be phrased as:

‘What’s a young person’s middle ground for contributing to AI safety?’

The answers should therefore preferably not demand my life’s worth in devotion.


Which battles should a young person choose to fight in the face of AI risks? The rapid changes in the world of AI — and the seeming lack of corresponding policy — deeply concern me. I’m pursuing a Bachelor of Science in Insurance Mathematics (with ‘guaranteed’ entry to a Master’s programme in Statistics or Actuarial Science). While I’m satisfied with my field of study, I feel it doesn’t reflect my values and my need to contribute.

In Lex Fridman’s interview with Eliezer Yudkowsky, Eliezer presents no compelling path forward — and paints the future as almost non-existent.

I understand the discussion, but struggle to reconcile it with my desire to take action.

Here are some of my personal assumptions:

• The probability of doom given the development of AGI, plus the probability of solving aging given AGI, is nearly equal to 1.

• A future where aging is solved provides me (and humanity in general) with vast ‘amounts’ of utility compared to all other alternatives.

• The probability of solving aging with AGI is high enough that the scenario plays a significant role in a ‘mean’ (expected) utility calculation of my future.

I’m aware these assumptions are somewhat incomplete and ill-defined, especially since utility isn’t typically modelled as a cardinal concept. However, they are only meant as context for understanding my value judgements.
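For what it’s worth, the ‘mean’ utility calculation I have in mind can be sketched as a simple expected-value computation. The numbers below are made up purely for illustration — they are not my actual estimates — but they encode the assumptions above: the doom and aging-solved probabilities nearly sum to 1, and the aging-solved outcome carries vastly more utility than the alternatives.

```python
# Illustrative expected-utility sketch. All numbers are hypothetical
# placeholders, not real probability estimates.
p_doom = 0.5           # P(doom | AGI), assumed
p_aging_solved = 0.45  # P(aging solved | AGI), assumed; nearly sums to 1 with p_doom
p_other = 1.0 - p_doom - p_aging_solved  # residual outcomes

u_doom = 0.0            # utility of the doom outcome
u_aging_solved = 100.0  # 'vast' utility if aging is solved
u_other = 1.0           # baseline utility of the remaining outcomes

expected_utility = (p_doom * u_doom
                    + p_aging_solved * u_aging_solved
                    + p_other * u_other)
print(expected_utility)
```

Even with these toy numbers, the aging-solved term dominates the expectation, which is why that scenario weighs so heavily in my thinking.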

I live in Scandinavia and see no major political movements addressing these issues (except perhaps EA DK?). I’m eager to make an impact but feel unsure how to do so effectively without dedicating my entire life to AI risk.

Although the interview took place some time ago, I’ve only recently delved into these thoughts. I’d appreciate any context or perspectives you can offer.

Disclaimer: I’m not in a state of distress. I’m simply seeking a middle ground for making a difference in these areas. Also, the tags might be a bit off, so I’d appreciate some help with those.