
Top Open Source AI Red-Teaming and Fuzzing Tools in 2025
Compare the top open source AI red teaming tools in 2025.

Promptfoo Raises $18.4M Series A to Build the Definitive AI Security Stack
We raised $18.4M from Insight Partners with participation from Andreessen Horowitz.

Evaluating political bias in LLMs
How right-leaning is Grok? We've released a new testing methodology alongside a dataset of 2,500 political questions.

Join Promptfoo at Hacker Summer Camp 2025
Join Promptfoo at AI Summit, Black Hat, and DEF CON for demos, workshops, and discussions on LLM security and red teaming.

AI Red Teaming for complete first-timers
A comprehensive guide to AI red teaming for beginners, covering the basics, culture building, and operational feedback loops.

System Cards Go Hard
Master LLM system cards for responsible AI deployment with transparency, safety documentation, and compliance requirements.

The Promptfoo MCP Proxy: Enterprise MCP Security
Learn about the security risks introduced by MCP servers and how to mitigate them using the Promptfoo MCP Proxy, an enterprise solution for MCP security.

OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
Master OWASP Top 10 LLM security vulnerabilities with practical mitigation strategies in this comprehensive 2025 guide.

Promptfoo Achieves SOC 2 Type II and ISO 27001 Certification: Strengthening Trust in AI Security
Promptfoo achieves SOC 2 Type II and ISO 27001 compliance, demonstrating enterprise-grade security for AI red teaming and LLM evaluation tools.

ModelAudit vs ModelScan: Comparing ML Model Security Scanners
Compare ModelAudit and ModelScan for ML model security scanning.

Harder, Better, Prompter, Stronger: AI system prompt hardening
Learn techniques for hardening AI prompts against injection attacks and security vulnerabilities with Promptfoo examples.

Promptfoo vs PyRIT: A Practical Comparison of LLM Red Teaming Tools
Detailed comparison of Promptfoo and Microsoft's PyRIT for LLM security testing.

Promptfoo vs Garak: Choosing the Right LLM Red Teaming Tool
Compare Promptfoo and Garak for LLM security testing.

How to Red Team Gemini: Complete Security Testing Guide for Google's AI Models
Comprehensive guide to red teaming Google Gemini models for multimodal vulnerabilities across text, vision, and code generation.

Next Generation of Red Teaming for LLM Agents
Promptfoo introduces its next-generation red teaming agent, designed for enterprise-grade LLM agents.

Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security
Promptfoo reaches 100K users! Learn about our journey from prompt evaluation to AI red teaming and what's next for AI security.

How to Red Team GPT: Complete Security Testing Guide for OpenAI Models
OpenAI's latest GPT models are more capable but also more vulnerable.

How to Red Team Claude: Complete Security Testing Guide for Anthropic Models
Claude is known for safety, but how secure is it really? A step-by-step guide to red teaming Anthropic's models and uncovering hidden vulnerabilities.

A2A Protocol: The Universal Language for AI Agents
Dive into the A2A protocol, the standardized communication layer that enables AI agents to discover, connect, and collaborate securely.

Inside MCP: A Protocol for AI Integration
A hands-on exploration of Model Context Protocol - the standard that connects AI systems with real-world tools and data.