📄️ Intro
Red team LLM systems through systematic adversarial testing to detect content policy violations, information leakage, and API misuse before production deployment
📄️ Quickstart
Start red teaming LLMs in minutes by scanning for 50+ vulnerabilities, including jailbreaks, prompt injection, and data exfiltration
📄️ Configuration
Red team your LLM configuration settings using automated vulnerability scanning to detect misconfigurations and prevent unauthorized access to AI system parameters
📄️ Architecture
Red team AI systems by analyzing architecture components and attack surfaces, applying systematic vulnerability assessment and threat modeling to protect LLM applications
📄️ Types of LLM vulnerabilities
Red team LLM systems for security, privacy, and criminal vulnerabilities using modular testing plugins to protect AI applications from exploitation and data breaches
📄️ Risk Scoring
Promptfoo provides a risk scoring system that quantifies the severity and likelihood of vulnerabilities in your LLM application. Each vulnerability is assigned a risk score between 0 and 10 that helps you prioritize remediation efforts.
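As a rough illustration of the idea (not promptfoo's exact formula), a 0–10 risk score can be derived by weighting a vulnerability's severity and scaling it by how often adversarial probes succeed; the severity labels, weights, and scaling below are hypothetical:

```ts
// Hypothetical sketch: severity x likelihood mapped onto a 0-10 risk scale.
// Severity labels and weights are illustrative, not promptfoo's actual values.
type Severity = 'low' | 'medium' | 'high' | 'critical';

const severityWeight: Record<Severity, number> = {
  low: 2.5,
  medium: 5,
  high: 7.5,
  critical: 10,
};

// likelihood: fraction of adversarial probes that succeeded (0..1)
function riskScore(severity: Severity, likelihood: number): number {
  const bounded = Math.min(Math.max(likelihood, 0), 1);
  const score = severityWeight[severity] * bounded;
  return Math.round(score * 10) / 10; // round to one decimal, stays within 0-10
}

// e.g. a high-severity issue reproduced by 40% of attack attempts
console.log(riskScore('high', 0.4)); // 3
```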
🗃️ Plugins
5 items
🗃️ Strategies
6 items
🗃️ Frameworks
7 items
🗃️ Tools
3 items
🗃️ Troubleshooting
11 items
🗃️ Guides
9 items