Also, it seems like there is not much of that in the field of alignment. I want there to be more work on unifying (previously frontier) alignment research, and more effort to construct paradigms in this pre-paradigmatic field (but maybe I just haven't looked hard enough).
I am surprised by the lack-of-distillation claim; I'd naively have expected distillation to be more neglected in physics than in alignment. Is there something in particular that you think could be more distilled?
Regarding research that tries to come up with new paradigms, here are a few reasons why you might not be observing much of it: I guess it is less funded by the big labs and is spread across all kinds of orgs and individuals. Maybe check MIRI, PIBBSS, ARC (theoretical research), and Conjecture, and check who went to ILIAD. More of these researchers didn't publish all their research, compared to AI safety researchers at AGI labs, so you may not have been aware it was going on. Some are also actively avoiding research on things that could be easily applied and tested, because of capability externalities (I think Vanessa Kosoy mentions this somewhere in the YouTube videos on Infra-Bayesianism).
Is there something in particular that you think could be more distilled?
What I had in mind is something like a more detailed explanation of recent reward hacking/misalignment results. Like, sure, we have old arguments about reward hacking and misalignment, but what I want is more gears for when a particular kind of reward hacking would show up in which model class.
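To make the underlying proxy-gaming mechanism concrete, here is a minimal, purely illustrative sketch of my own (not from this discussion, and not the gears-level account I'm asking for): the action names and reward numbers are made up, and the point is just that an agent trained only on a misspecified proxy converges to the action that games it.

```python
# Toy illustration of reward hacking via a misspecified proxy.
# Hypothetical setup: the proxy reward is "fraction of tests passed",
# the true objective is "task actually solved". Numbers are made up.
import random

# Each action gives (proxy_reward, true_reward). The "hack" scores higher
# on the proxy even though its true value is zero.
ACTIONS = {
    "genuine_fix":        (0.7, 1.0),   # passes most tests, really solves the task
    "special_case_tests": (1.0, 0.0),   # hard-codes test outputs, solves nothing
}

def run_bandit(steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit trained only on the proxy reward."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}   # estimated proxy value per action
    n = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.choice(list(ACTIONS))
        else:
            a = max(q, key=q.get)
        proxy, _true = ACTIONS[a]
        n[a] += 1
        q[a] += (proxy - q[a]) / n[a]   # incremental mean of observed proxy reward
    return q

if __name__ == "__main__":
    q = run_bandit()
    best = max(q, key=q.get)
    print("learned proxy values:", q)
    print("policy prefers:", best)                        # -> "special_case_tests"
    print("true reward of that policy:", ACTIONS[best][1])  # -> 0.0
```

The old argument is exactly this: whenever the proxy and the true objective come apart and the proxy is cheaper to max out, optimization finds the gap. What I'd like distilled is the step beyond the toy model: under what training setups and model classes the gap actually gets exploited in practice.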
Maybe check MIRI, PIBBSS, ARC (theoretical research), and Conjecture, and check who went to ILIAD.
Those are top-down approaches, where you have an idea and then do research on it, which is of course useful, but that is doing more frontier research by expanding the surface area. Trying to apply my distillation intuition to them would be like having some overarching theory unifying all the approaches, which seems super hard and maybe not even possible. But looking at the intersections of pairs of agendas might prove useful.
The neuroscience/psychology side of the alignment problem (rather than the ML side) seems quite neglected (it is harder, on the one hand, but it is also easier to avoid working on something capabilities-related if you just don't focus on the cortex). There is work on reverse-engineering human social instincts, which would in principle benefit from more high-quality experiments in mice, but those are expensive.