I’m probably too conflicted to give you advice here (I work on safety at Google DeepMind), but you might want to think through, at a gears level, what could concretely happen with your work that would lead to bad outcomes. Then you can balance that against positives (getting paid, becoming more familiar with model outputs, whatever).
You might also think about how your work compares to whoever would replace you on average, and what implications that might have as well.
Part of why I ask is that it’s difficult for me to construct a concrete gears-level picture of how (if at all) my work influences eventual transformative AI. I’m unsure about the extent to which refining current models’ coding capabilities accelerates timelines, whether some tasks might even be net-positive, whether these impacts are easily offset, etc.