Does “seem like it’s solving computer security” look like helping develop better passively secure systems, or like actively monitoring and noticing bad actions, or both or something else?
My thoughts are mostly about the latter, although better code scanning will be a big help too. A majority of financially impactful corporate breaches stem from a compromised Active Directory network, and a majority of security spending by non-tech companies goes toward preventing exactly that. The obvious application for the next generation of ML is extremely effective EDR and active monitoring. No more lateral movement or privilege escalation on a corporate domain means no more domain-wide compromise, which in turn means no more, e.g., big ransomware scares.
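To make the "active monitoring" idea concrete, here is a minimal sketch of the kind of heuristic an EDR pipeline applies to authentication logs: flag an account that fans out to many distinct hosts in a short window, a classic lateral-movement signature. The event data, account names, and thresholds are all hypothetical illustrations, not a real product's rules; the point is only that an ML-backed system would learn far subtler versions of checks like this.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical logon events: (timestamp, account, source_host, dest_host).
# In a real deployment these would come from something like
# Windows Security event logs shipped to a SIEM.
EVENTS = [
    (datetime(2023, 1, 1, 9, 0), "alice", "ws1", "fileserver"),
    (datetime(2023, 1, 1, 9, 5), "alice", "ws1", "mailserver"),
    (datetime(2023, 1, 1, 2, 0), "svc-backup", "ws7", "ws1"),
    (datetime(2023, 1, 1, 2, 1), "svc-backup", "ws7", "ws2"),
    (datetime(2023, 1, 1, 2, 2), "svc-backup", "ws7", "ws3"),
    (datetime(2023, 1, 1, 2, 3), "svc-backup", "ws7", "ws4"),
]

def flag_lateral_movement(events, window=timedelta(minutes=10), max_hosts=3):
    """Flag accounts that authenticate to more than `max_hosts`
    distinct destinations within `window` -- a crude fan-out heuristic."""
    flagged = set()
    by_account = defaultdict(list)
    for ts, account, _src, dest in events:
        by_account[account].append((ts, dest))
    for account, hits in by_account.items():
        hits.sort()
        for ts, _dest in hits:
            # Count distinct destination hosts reached in the window
            # starting at this event.
            in_window = {d for t, d in hits if ts <= t < ts + window}
            if len(in_window) > max_hosts:
                flagged.add(account)
                break
    return flagged

print(flag_lateral_movement(EVENTS))  # -> {'svc-backup'}
```

Static thresholds like this are exactly what attackers learn to stay under, which is why the hope rests on models that notice anomalous behavior rather than fixed rules.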
The problem comes if/when people then start teaching computers to do social engineering, competently fuzz applications, and perform that lateral movement intelligently enough to bypass such monitoring, after we have largely deemed the problem solved.