11/25/2024 | News release | Distributed by Public on 11/25/2024 17:31
As AI continues to evolve, so do the threats and vulnerabilities that surround Large Language Models (LLMs). The OWASP Top 10 for LLM Applications 2025 introduces critical updates that reflect the rapid changes in how these models are applied in real-world scenarios. While the list includes carryovers from the 2023 version, several entries have been significantly reworked or added, addressing emerging risks and community feedback.
Although these changes were finalized in late 2024, OWASP Core Team Contributors designated the list for 2025, signaling their confidence in its relevance over the coming months. The updated list emphasizes a refined understanding of existing risks and includes new vulnerabilities identified through real-world exploits and advancements in LLM usage.
Key Highlights of the 2025 Updates
As you'll see in the figure above, Prompt Injection maintained its position at the top of the list. Coming in at second and third place, respectively, Sensitive Information Disclosure and Supply Chain made fairly significant jumps up the list from 2023. Two of the previous list's entries dropped slightly, Training Data Poisoning and Improper Output Handling, though Training Data Poisoning was expanded to include Data and Model Poisoning.
Below, we go into additional detail on the new entries and those that have been reworked and expanded.
New Entries:
System Prompt Leakage
This addition highlights a critical flaw uncovered through real-world incidents. Many applications assumed that prompts were securely isolated, but recent exploits reveal that information embedded in these prompts can leak, compromising the confidentiality of sensitive data.
Vector and Embedding Weaknesses
This entry addresses community concerns by focusing on the vulnerabilities in Retrieval-Augmented Generation (RAG) and embedding-based methods, which are now integral to grounding LLM outputs. As these techniques become central to AI applications, securing them is essential.
Reworked and Expanded Entries:
Misinformation
Expanded to address Overreliance, this rework emphasizes the dangers of unquestioningly trusting LLM outputs. The updated entry recognizes the nuanced ways models can propagate misinformation, especially when their outputs are taken at face value without verification.
Unbounded Consumption
Previously known as Denial of Service, this entry now includes risks tied to resource management and unexpected operational costs. With LLMs powering large-scale deployments, the potential for runaway expenses and system strain makes this expansion timely and critical.
Excessive Agency
With the rise of agentic architectures that grant LLMs autonomy, this expanded entry highlights the risks of unchecked permissions. As AI systems take on more proactive roles, the potential for unintended or harmful actions demands greater scrutiny.
Qualys TotalAI
Qualys provides comprehensive vulnerability detection for AI threats. With over 1,200 QIDs dedicated to AI/ML-related vulnerabilities and over 1.65 million detections, we help organizations secure their AI infrastructure effectively. From assessing risks in LLM deployments to preventing model theft, Qualys delivers holistic AI security solutions to keep your systems resilient against evolving threats. Start detecting AI-related vulnerabilities today with TotalAI.
Mark your calendar for December 4th, 2024, and dive into the evolving security challenges of AI and LLM workloads. This event will shed light on emerging threats alongside practical solutions for mitigating risks.
Take advantage of this opportunity to stay ahead of the curve and fortify your AI defenses. This half-day virtual event includes a roster of AI & LLM security luminaries, such as Steve Wilson, Chief Product Officer, Exabeam, and founder and project leader of the OWASP Top 10 for Large Language Model Applications.
Don't delay. Secure your spot for the December 4th event!
Final Thoughts
The OWASP Top 10 for LLM Applications 2025 encapsulates a refined and forward-looking understanding of the risks associated with AI models. This update empowers developers and organizations to build safer and more resilient AI systems by addressing persistent vulnerabilities and newly emerging threats. As LLMs become integral to countless applications, staying ahead of these risks is not just prudent-it's essential.
Contributors
Related