I run a major recruiting firm in India working with tech companies, and I wanted to use some of that access to the workforce to get highly talented AI people into alignment. The nice thing about India is that the cost of living is low enough that talented full-time people in this field can be hired at $20k–50k a year.
My question has two parts:

1. In order to qualify the applicants, what questions would be good?
2. Once I have qualified applicants ready to go, are there any companies or places actively hiring that I can help get them onboarded with? No fees, by the way; this is my donation to the alignment field.
If a business needs help with the regulatory side of hiring internationally, we have that covered too.
This is a good question, but it’s hard to answer. I’ll try to signal-boost this a little later, but I’ll give it a shot first.
This depends on question 2. If you want to funnel people into research, then you want to find people who can read an argument and figure out what it implies, who are fundamentally curious, who can think mechanistically about AI, and who, when faced with a new problem, can come up with hypotheses and tests on their own. But if you find (or help push for) some organization that’s doing scalable software work, then you want people who are good coders, understand systems, etc.
Coming up with questions for either is hard—you’d think I’d know something about the first, but I don’t really. Maybe the people to ask are the organizers of SERI? My stab would be general cognitive questions, asking about past research, science, or engineering-like projects, and maybe showing them a gridworld AI from Concrete Problems in AI Safety and asking them to explain what’s going on and why the AI does something bad, then to give one example of where the toy model seems like it would generalize to the real world, and one example of where it wouldn’t.
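To make the gridworld exercise concrete, here is a minimal sketch of the kind of toy environment a candidate could be asked to explain. The layout and names are my own invention, loosely inspired by the side-effects environments in the safety-gridworlds literature: a planner that only minimizes steps to the goal will happily walk through (and break) a vase it was never penalized for, while a planner told to avoid the vase takes a longer detour.

```python
from collections import deque

# 2x5 grid: agent starts at (0,0), goal at (0,4), a vase sits at (0,2).
# Row 1 is an empty corridor that allows a detour around the vase.
START, GOAL, VASE = (0, 0), (0, 4), (0, 2)
ROWS, COLS = 2, 5

def shortest_path(avoid_vase=False):
    """BFS for the fewest-steps path; optionally treat the vase cell as a wall."""
    queue = deque([(START, [START])])
    seen = {START}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == GOAL:
            return path
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS
                    and nxt not in seen
                    and not (avoid_vase and nxt == VASE)):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

greedy = shortest_path()                  # pure step-count minimizer
careful = shortest_path(avoid_vase=True)  # side effect excluded by hand
print(len(greedy) - 1, VASE in greedy)    # 4 True  -> breaks the vase
print(len(careful) - 1, VASE in careful)  # 6 False -> detours around it
```

The interview question would then be: why does the greedy agent break the vase, what would fixing this by hand cost in a real deployment, and where would the toy model stop resembling the real world?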
Unfortunately, the answer might be “nowhere”. AI alignment has research funding, but not many places are actually setting up top-down organizations prepared to integrate new people. The current model looks more like academic research, where people tend to have to be self-directed (which might be for good reasons). The pipelines people are building for adding new people (e.g. SERI MATS, MLAB) are also focused on this kind of self-directed research, rather than hiring people for specific jobs.
In theory, though, interpretability work has plenty of places where skilled software engineers would help, in ways that are scalable enough to justify larger organizations. Redwood Research is the org that has probably put the most thought into this, and maybe you should chat with them.