For instance, are LLMs excellent at generating malware but bad at malware detection?
At current tech levels defense seems favored for anyone who is actually trying: LLMs are much better at scanning a codebase for the smells of insecure code than they are at the multi-step process of developing an exploit, which requires chaining together multiple gadgets correctly, often with no intermediate feedback. Concretely, “there is a vulnerability in this codebase: find it and patch it” seems like an easier task for LLMs than “there is a vulnerability in this codebase: find it and develop a working exploit”.
That said, the majority of exploits target systems which do not meet the “someone is actually trying to make this system secure” standard. If offense gets easier and defense remains at zero, offense becomes increasingly favored.
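To make the asymmetry concrete, here is a minimal sketch (with a hypothetical `get_user` function and a deliberately crude regex check) of the kind of insecure-code smell that is visible from a single local pattern. Flagging and patching it is a one-line local change; exploiting it would additionally require knowledge of the schema, the surrounding auth flow, and how to chain the injection into something useful.

```python
import re

# A hypothetical vulnerable snippet: user input interpolated directly into SQL.
VULNERABLE = '''
def get_user(db, username):
    # Smell: f-string interpolation of untrusted input into a query
    return db.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

# The patch is a single local change to a parameterized query.
PATCHED = '''
def get_user(db, username):
    return db.execute("SELECT * FROM users WHERE name = ?", (username,))
'''

def has_sqli_smell(source: str) -> bool:
    # Crude single-pattern check: f-string passed straight to execute().
    # Detecting the smell needs only this local view of the code.
    return bool(re.search(r'execute\(f["\']', source))

print(has_sqli_smell(VULNERABLE))  # True: the smell is locally visible
print(has_sqli_smell(PATCHED))     # False
```

The point of the sketch is that detection succeeds with purely local information, while a working exploit is a global, multi-step task; this is the sense in which "find and patch" is the easier direction.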