I share almost exactly this opinion, and I hope it’s fairly widespread.
The issue is that almost all of the “something elses” seem even less productive in expectation.
(That’s for technical approaches. The communication-minded should by all means be working on spreading the alarm, thereby slowing progress and raising the ambient level of risk-awareness.)
LLM research could and should get a lot more focused on future risks instead of current ones. But I don’t see alternatives that realistically have more EV.
It really looks like the best guess is that AGI is now quite likely to be descended from LLMs, and I see little practical hope of pausing that progress. So accepting the probabilities on the game board and researching LLMs/transformers makes sense, even when it mostly amounts to practice and yields only a little knowledge of how LLMs/transformers/networks represent knowledge and generate behaviors.
It of course comes down to individual research programs; there’s a fair amount of LLM research so irrelevant to safety that the effort would be better directed elsewhere. And devoting a little effort to unlikely scenarios in which we get a very different kind of AGI is also defensible—as long as it’s actually defended, not just hope-based.
This is of course a major outstanding debate, and needs to be had carefully. But I’d really like to see more of this type of careful thinking about the likely efficiency of different research routes.
I think there’s low-hanging fruit in steering LLM research toward anticipating the new challenges that arise when LLM-descended AGI becomes actually dangerous. My recent post LLM AGI may reason about its goals and discover misalignments by default suggests research addressing one fairly obvious new risk that appears once LLM-based systems become capable of competent reasoning and planning.