Responding to the last line: to be clear, I’m not claiming I have one. More wondering if the AI risk community should try to find one as a desperate hail mary given they have ~0 hope for their current research directions.
aka I’m wondering whether even trying to find one is a desperate hail mary
I think we are a lot closer to solving alignment the normal way than that. The problem is that understanding the landscape requires skimming a lot of papers, which most people don’t feel like doing for various reasons (a big one being that even for researchers who write and read a lot of papers, reading papers deeply is a drag).