My point was that even though we already have an extremely reliable recipe for getting an LLM to understand grammar and syntax, we are nowhere near a theoretical guarantee of that. The ask for a theoretical guarantee seems impossible to satisfy, even for much easier things that we already know modern AI can do.
When someone asks for an alignment guarantee, I'd like them to demonstrate what they mean by showing a guarantee for something simpler, like a syntax guarantee for LLMs. I'm not familiar with SLT, but I'll believe it when I see it.
Wow, we have a lot of the same thinking!
I've also felt that people who think we're doomed are spending much of their effort sabotaging one of our best bets in the case that we are not doomed, with no clear path to victory in the case where they are correct (how would Anthropic slowing down lead to a global stop?).
And yeah, I'm also concerned about competition between DeepMind, Anthropic, SSI, and OpenAI: in theory they should all be aligned with each other, but as far as I can see they aren't acting like it.
As an aside, I think the extreme pro-slowdown view is held by something of a vocal minority. I met some Pause AI organizers IRL and raised the points from my original comment, expecting pushback, but they agreed, saying they were focused on neutrally enforced slowdowns, e.g. government action.