[Closed] PIBBSS is hiring in a variety of roles (alignment research and incubation program)

PIBBSS is looking to expand its team and is running work trials for new team members (primarily) in April, May, and early June. If you're interested in joining a nimble team focused on AI safety research, field-building, and incubation of new agendas, consider letting us know by filling in this form. (Applications are now closed, but you can still express general interest via the form.)

The form is meant to be a low-effort way of gauging interest. We don't guarantee getting back to everyone, but we will reach out if we think you might be a good fit for the team. We would then aim to get to know you better (e.g. via a call) before deciding whether it seems valuable (and worth our respective time) to do a trial. Work trials will look different depending on circumstances, including your interests and availability. We intend to compensate people for the work they do for us.

About PIBBSS

PIBBSS (pibbss.ai) is a research initiative aimed at extracting insights from the parallels between natural and artificial intelligent systems, with the purpose of making progress on important questions about the safety and design of superintelligent artificial systems. Since its inception in 2021, PIBBSS has supported ~50 researchers through 3-month full-time fellowships, is currently supporting 5 in-house, long-term research affiliates, and has organized 15+ AI safety research events/workshops with participants from both academia and industry. We currently have three full-time staff: Nora Ammann (Co-Founder), Lucas Teixeira (Programs), and Dušan D. Nešić (Operations).

Over the past several months, and in particular with the launch of our affiliate program at the start of 2024, we have started focusing more of our resources on identifying, testing, and developing specific research bets we find promising on our inside view. This also means we have been directionally moving away from more generic field-building or talent interventions (though we still do some of this, and might continue doing so where it appears sufficiently synergetic and counterfactually compelling). We expect to continue and potentially accelerate this trend over the course of 2024 and beyond, and will likely rebrand our efforts soon to better reflect the evolving scope and nature of our vision.

Our affiliate program selects scholars from disciplines that study intelligence through a naturalized lens, as well as independent alignment researchers with established track records, and provides them with the support they need to quickly test, develop, and iterate on high-upside research directions. The lacunae in the field we are trying to address:

  • (Field-building intervention) “Reverse-MATS”: getting established academics with deep knowledge in relevant but as-yet-neglected areas of expertise into AI safety

  • (Research intervention) Creating high-quality research output that is both theoretically ambitious and empirically grounded, ultimately leading to the counterfactual incubation of promising novel research agendas in AI safety

What we’re looking for in a new team member

We don’t have a single specific job description that we’re trying to hire for. Instead, there is a range of skill sets/profiles that we believe could valuably enhance our team, ranging from research to engineering, organizational, and management/leadership profiles. Importantly, we seek to hire someone who becomes part of the core team, which implies significant potential to co-create the vision and carve out your own niche based on your strengths and interests.

We expect to hire one or more people who fit an interesting subset of the following interests and aptitudes:

  • Ability to manage projects (people, timelines, milestones, deliverables, etc) across several time scales — from days to weeks to months to quarters and beyond

  • Ability to design and run effective research groups and spaces in terms of both formal and informal (e.g. cultural) aspects

  • Ability to identify promising talent and evaluate novel research bets in AI safety, and strong familiarity with the current AI safety research landscape

  • Excitement about approaches to AI safety research that seek strong iterative feedback loops between theory and empirics

  • Strong familiarity with one or several academic fields studying intelligent behavior in natural systems, and/or History and Philosophy of Science

  • Ability to support research engineering efforts through systems administration and general programming skills; more ambitiously, experience in ML engineering to support or contribute to our affiliates’ research and experiments

  • Ability to substantially contribute to developing and refining our strategy and/or research vision

  • Ability to work in a small and dynamic research team, including a strong generalist skill set, willingness to take on novel challenges and figure things out from first principles, and a high degree of self-management, clear communication, intellectual honesty, and teamwork

  • Ability to communicate clearly in writing and in speech, e.g. for research, strategy, or organizational purposes

  • Experience with fundraising for research and/or AI safety specifically

We’re not looking for any specific formal credentials, though we expect a strong candidate to bring at least a few years of relevant work experience.

EDIT: We’re a remote-first team, but can offer access to office space in London or Berkeley (where part of the team is located) if desired.
