📄️ Intro
LLM red teaming uses simulated adversarial inputs to find vulnerabilities in AI systems before they're deployed.
📄️ Quickstart
Promptfoo is an open-source tool for red teaming gen AI applications.
📄️ Configuration
The `redteam` section in your `promptfooconfig.yaml` file is used when generating red team tests via `promptfoo redteam run` or `promptfoo redteam generate`. It lets you specify the plugins and other parameters of your red team tests.
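For example, a minimal `redteam` block might look like the sketch below. The target model, prompt, and the specific plugin and strategy IDs shown are illustrative choices, not requirements; see the Plugins and Strategies pages for the full list of available IDs.

```yaml
# promptfooconfig.yaml -- minimal sketch (target and IDs are illustrative)
targets:
  - openai:gpt-4o-mini # the system under test

prompts:
  - 'You are a support agent. Answer: {{query}}'

redteam:
  purpose: 'Customer support bot for an online store' # guides test generation
  numTests: 5 # number of tests to generate per plugin
  plugins: # which vulnerability categories to probe
    - harmful:hate
    - pii
  strategies: # how adversarial inputs are delivered
    - jailbreak
    - prompt-injection
```

With a file like this, `promptfoo redteam run` generates the adversarial test cases and evaluates the target against them.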
📄️ Architecture
Promptfoo's automated red teaming consists of three main components: plugins, strategies, and targets.
📄️ Types of LLM vulnerabilities
This page documents categories of potential LLM vulnerabilities and failure modes.
📄️ OWASP LLM Top 10
The OWASP Top 10 for Large Language Model Applications educates developers about security risks in deploying and managing LLMs. It ranks the most critical vulnerabilities in LLM applications by impact, exploitability, and prevalence. OWASP recently released an updated version of the list for 2025.
🗃️ Plugins
5 items
🗃️ Strategies
6 items
🗃️ Tools
3 items
🗃️ Troubleshooting
9 items
🗃️ Guides
7 items
📄️ How to red team LLM applications
Promptfoo is a popular open-source evaluation framework that includes LLM red teaming and penetration testing capabilities.