[Question] How Important is Inverting LLMs?

A recent paper shows an algorithm for inverting an LLM to recover its inputs (I think? I’m not an ML guy). Does that mean you can now turn a predictor directly into a world-steerer? That is: if you give it a desired output and it finds the input most likely to produce that output, will it find the things it needs to say so that the chosen token becomes the most likely next token, even if that next token is something a human would say in response? If that is actually how it works, this really looks like a major breakthrough, and strong agents will be here shortly.
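To pin down what I mean by “inversion,” here is a minimal toy sketch (my naive framing, not the paper’s actual algorithm): score candidate prompts by how likely the model thinks the desired output is, and keep the best one. The model, candidate prompts, and target string below are all made-up examples; a real inversion method would search the enormous prompt space far more cleverly than enumerating a hand-written list.

```python
# Toy "inversion by search": NOT the paper's algorithm, just the naive
# version of the idea. We rank candidate prompts by the log-probability
# the model assigns to a desired continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def target_logprob(prompt: str, target: str) -> float:
    """Log-probability of `target` as the continuation of `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, so drop the last position
    # and read off the predictions for the final n (target) tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    n = target_ids.shape[1]
    per_token = log_probs[-n:].gather(1, target_ids[0].unsqueeze(1))
    return per_token.sum().item()

# Hypothetical example: which prompt makes this output most likely?
desired_output = " the sky is blue"
candidates = [
    "Complete the sentence:",
    "Everyone knows that",
    "On a clear day,",
]
best = max(candidates, key=lambda p: target_logprob(p, desired_output))
print(best)
```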
