Promptfoo Raises $18.4M Series A to Build the Definitive AI Security Stack
We raised $18.4M from Insight Partners with participation from Andreessen Horowitz. The funding will accelerate development of the most widely adopted AI security testing solution.

Latest Posts

Evaluating political bias in LLMs
How right-leaning is Grok? We've released a new testing methodology alongside a dataset of 2,500 political questions.

Join Promptfoo at Hacker Summer Camp 2025
Join Promptfoo at AI Summit, Black Hat, and DEF CON for demos, workshops, and discussions on LLM security and red teaming.

AI Red Teaming for complete first-timers
A comprehensive guide to AI red teaming for beginners, covering the basics, culture building, and operational feedback loops.

System Cards Go Hard
Master LLM system cards for responsible AI deployment with transparency, safety documentation, and compliance requirements.

The Promptfoo MCP Proxy: Enterprise MCP Security
Learn about the security risks introduced by MCP servers and how to mitigate them using the Promptfoo MCP Proxy, an enterprise solution for MCP security.

OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
Master OWASP Top 10 LLM security vulnerabilities with practical mitigation strategies in this comprehensive 2025 guide.

Promptfoo Achieves SOC 2 Type II and ISO 27001 Certification: Strengthening Trust in AI Security
Promptfoo achieves SOC 2 Type II and ISO 27001 compliance, demonstrating enterprise-grade security for AI red teaming and LLM evaluation tools.

ModelAudit vs ModelScan: Comparing ML Model Security Scanners
Compare ModelAudit and ModelScan, two scanners for detecting security issues in ML model files.

Harder, Better, Prompter, Stronger: AI system prompt hardening
Learn techniques for hardening AI prompts against injection attacks and security vulnerabilities with Promptfoo examples.