I’ve been researching coordination explosions for three years now. The rabbit hole goes deep: there’s far more empirical research on near-term applications than appears at first glance, and surprisingly little of it has anything to do with LLMs or soft takeoffs.
You can do research by yourself, in a group, or even go in bizarre, dangerous, terrible directions like Janus, dumping your entire existence into various LLMs and seeing what happens. There’s so much low-hanging fruit that it hardly matters where you go or what you do. This domain is basically the wild west of AI safety.
I’m not really comfortable talking about the details publicly on the internet, but what I can say is that there’s so much uncharted territory in coordination explosions that almost anyone who heads in this general direction will collide with game-changing discoveries.
Are you comfortable talking about the details privately on the internet? I’d appreciate a DM. You’ve piqued my curiosity with the whole “juicy secrets” aura. Also, I intend to try that “dump my entire existence into an LLM” thing at some point...