I spent about 5 years assuming I wasn’t really a candidate for “major contributor to X-Risk as a cause” except as a donor and maybe community organizer.
I do think it’s important that people preserve a healthy psychological option of… just not stressing out too much about this. (One person I respect said: “I spent years not focusing on x-risk because I needed to recover and focus on my personal life. But I appreciated going to Solstice once a year and having the question asked, ‘do you want to make sure the future is okay?’”)
But, if you’re in a place where this at least seems like a reasonable question, here’s my personal approach. (If you’re more worried about other risks than AI, I think the same framework applies to those areas.)
The first two steps of an OODA-loop-like process (I’m not 100% sure how to separate “observe” and “orient”) included:
a) read lots about various x-risks
b) locate people who seemed able to contribute time toward helping me think, and who weren’t too busy.
c) spend 4 hours thinking about AI x-risk, to get a rough orientation of the situation and what options seemed available to me.
(I think it’s worth doing the above even after reading the results of other people doing the above, since your situation and theirs may be different)
The main outputs of this process were:
Decide if I had the financial and emotional slack to take on anything serious here. (I did, and if I hadn’t, I’d have focused more on getting a good job to build more financial stability. Or, potentially focused on earning-to-give and then setting up a monthly donation to orgs that seem important. “Monthly donation” being important since it lets them plan for the future better.)
Generally focus on building habits that increase the range of my abilities, opening more options up.
Change my environment and network so that I could more easily notice opportunities to help and act on them, followed by another Observe/Orient loop.
I’m getting towards the “decide” section of the next OODA cycle, and current considerations include:
1) Continuing to work on LesserWrong, after being convinced by the argument that “Less Wrong was one of the few reliable pipelines outputting more AI safety researchers, and making sure that it is still thriving is a good way to do that.”
2) Spending at least a week trying to learn machine learning tools that may enable me to help with some technical research projects. (My current understanding is that there’s room there for people who are technically competent but not exceptional, although it may require some particular skillsets that may or may not come naturally to me.)
3) Contributing organizational capacity to other new projects that look like they could use help getting off the ground, and/or gaining managerial skills that I can use to help flesh out the hierarchies of other organizations.