Traditional security tools struggle to keep up as they constantly run into threats introduced by LLMs and agentic AI systems that legacy defences werenāt designed to stop. From prompt injection to model extraction, the attack surface for AI applications is uniquely weird.
āTraditional security tools like WAFs and API gateways are largely insufficient for protecting generative AI systems mainly because they are not pointing to, reading, and intersecting with the AI interactions and do not know how to interpret them,ā said Avivah Litan, distinguished VP analyst, Gartner.
AI threats could be zero-day
AI systems and applications, while extremely capable at automating business workflows, and threat detection and response routines, bring their own problems to the mix, problems that werenāt there before. Security threats have evolved from SQL injections or cross-site scripting exploits to behavioral manipulations, where adversaries trick models into leaking data, bypassing filters, or acting in unpredictable ways.
Gartnerās Litan said that while AI threats like model extractions have been around for many years, some are very new and hard to tackle. āNation states and competitors who do not play by the rules have been reverse-engineering state-of-the-art AI models that others have created for many years.ā
