Results from a survey on tool use and workflows in alignment research
- Cyborgism by (10 Feb 2023 14:47 UTC; 334 points)
- AI Tools for Existential Security by (EA Forum; 14 Mar 2025 18:37 UTC; 64 points)
- What specific thing would you do with AI Alignment Research Assistant GPT? by (8 Jan 2023 19:24 UTC; 47 points)
- Concerns about AI safety career change by (EA Forum; 13 Jan 2023 20:52 UTC; 45 points)
- Preparing for AI-assisted alignment research: we need data! by (17 Jan 2023 3:28 UTC; 31 points)
- “Unintentional AI safety research”: Why not systematically mine AI technical research for safety purposes? by (29 Mar 2023 15:56 UTC; 27 points)
- 's comment on Why Not Just… Build Weak AI Tools For AI Alignment Research? by (5 Mar 2023 2:55 UTC; 24 points)
- AI Tools for Existential Security by (14 Mar 2025 18:38 UTC; 22 points)
- 's comment on Project “MIRI as a Service” by (9 Mar 2023 7:30 UTC; 15 points)
- Preparing for AI-assisted alignment research: we need data! by (EA Forum; 17 Jan 2023 3:28 UTC; 11 points)
- 's comment on jacquesthibs’s Shortform by (5 Jan 2023 21:14 UTC; 8 points)
- 's comment on jacquesthibs’s Shortform by (23 Jan 2024 16:49 UTC; 8 points)
- 's comment on jacquesthibs’s Quick takes by (EA Forum; 23 Jan 2024 20:20 UTC; 6 points)
- 's comment on Distillation of Neurotech and Alignment Workshop January 2023 by (22 May 2023 13:14 UTC; 5 points)
- 's comment on I’m open for projects (sort of) by (23 Apr 2024 19:43 UTC; 2 points)
- 's comment on jacquesthibs’s Shortform by (12 May 2023 21:11 UTC; 2 points)
- 's comment on jacquesthibs’s Shortform by (17 Jan 2023 21:01 UTC; 2 points)
- 's comment on jacquesthibs’s Quick takes by (EA Forum; 12 May 2023 21:29 UTC; 1 point)
- 's comment on jacquesthibs’s Shortform by (6 Jan 2023 18:38 UTC; 1 point)
This seems like critical work for the most likely path to an existential win that I can see. Keep it up!
Thanks! More to come!