A key psychological advantage of the “modest alignment” agenda is that it’s not insanity-inducing. When I seriously contemplate the problem of selecting a utility function to determine the entire universe until the end of time, I want to die (which seems safer and more responsible).
But the problem of making language models “be honest” instead of just continuing the prompt? That’s more my speed; that, I can think about, and possibly even usefully contribute to, without wanting to die. (And if someone else in the future uses honest language models as one of many tools to help select a utility function to determine the entire universe until the end of time, that’s not my problem and not my fault.)
What’s insanity-inducing about it? (Not suggesting you dip into the insanity-tending state, just wondering if you have speculations from afar.)
The problem statement you gave does seem to have an extreme flavor. I want to distinguish “selecting the utility function” from the more general class of “real core of the problem” problems. The OP was about (the complement of) the set of research directions that are in some way aimed directly at resolving core issues in alignment. Which sounds closer to your second paragraph.
If it’s philosophical difficulty that’s insanity-inducing (e.g. “oh my god this is impossible we’re going to die aaaahh”), that’s a broader problem. But if it’s more “I can’t be responsible for making the decision, I’m not equipped to commit the lightcone one way or the other”, that seems orthogonal to some alignment issues. For example, trying to understand what it would look like to follow along with an AI’s thoughts is more difficult and philosophically fraught than your framing of engineering honesty, but also doesn’t seem responsibility-paralysis-inducing, eh?