OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
LLM breaches jumped 180 percent in the last year. If you ship AI features, you need a one-page map of the dangers—and the fixes. This five-minute TLDR walks through the OWASP Top 10 for LLMs, from prompt injection to unbounded consumption, with concrete mitigations you can copy-paste today.
I'm not about to explain how one can use Promptfoo to address these. We've done that already. I simply wished for a TL;DR version intended for someone far too tired to decipher cybersecurity jargon, and I found none... So here we are.
For the unfamiliar: OWASP is the Open Worldwide Application Security Project Foundation, an online community creating fantastic resources in software and security. There are SO many guides and publications. Have a gander.
In the OWASP 10 for LLM Applications specifically, these are the top ten security issues:
- Prompt injection
- Sensitive information disclosure
- Supply chain vulnerabilities
- Data and model poisoning
- Improper output handling
- Excessive agency
- System prompt leakage
- Vector and embedding weaknesses
- Misinformation
- Unbounded consumption
Let's unpack them in an orderly fashion.
The vulnerabilities
1. Prompt injection
Prompts are instructions fed to a LLM to make it do something.
'Quiz me on Gen Z lingo so the next time I go into class I know when they make fun of me. Today they kept calling me a goat.'
They can be direct (such as text typed in from a user) or indirect (an LLM accepting external input like a file). Consequences include revealing personal information, sensitive business information, providing unauthorized access to functions, and so on. Learn more about LLM prompt-injection playbook.
2. Sensitive information disclosure
Sensitive information includes data that can identify a person, financial records, health records, legal documents, source code, and so on. LLMs can leak it, and users are often far too lax with this information. Some situations feel private when they're not.
Sometimes users are simply providing information as requested, such as addresses to chat support, and an LLM gets trained on those transcripts - this is risky as well.
3. Supply chain vulnerabilities
An LLM supply chain can be massive: it comprises of everything from development to the distribution of LLM models. There are many types of issues that can affect the chain:
- Outdated models without security patches
- Third-party software with security holes
- Model merging - using multiple model services
- Devices (like mobile phones) with exploits running LLM models
4. Data and model poisoning
Data poisoning happens when the data is deliberately tainted before being fed to an LLM. Perhaps we're using public feedback to retrain a model; some jerks could flood the feedback with offensive phrases. Learn about RAG poisoning attacks and how to test for them.
5. Improper output handling
Generally, data is processed before being passed to other systems (for integrity, lack of anything harmful, etc). Improper output handling is doing a terrible job of processing it. Imagine a question that includes some SQL that gets fed to an app and uh oh-where's the data gone (props to anyone thinking of Little Bobby Tables)? Test for SQL injection vulnerabilities in your LLM applications.