My main project thread for the past 2 years has been mostly aiming at Get a Lot of Alignment Research Done Real Fast (in line with my beliefs/taste about what that requires). This is the motivator for the Feedbackloop-first Rationality project, and is also a driver for my explorations into using LLMs for research (where I'm worried specifically about phrases like "full handoff", because of the way it seems like LLM-use subtly saps/erodes agency and directs you towards dumber thoughts that more naturally 'fit' into the LLM paradigm. But I'm also excited about approaches for solving that).
But I’m focused for this year on “wake everyone up.”
Nod.