Older Posts

System Cards Go Hard
Master LLM system cards for responsible AI deployment with transparency, safety documentation, and compliance requirements.

The Promptfoo MCP Proxy: Enterprise MCP Security
Learn about the security risks introduced by MCP servers and how to mitigate them using the Promptfoo MCP Proxy, an enterprise solution for MCP security.

OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
Master OWASP Top 10 LLM security vulnerabilities with practical mitigation strategies in this comprehensive 2025 guide.

Promptfoo Achieves SOC 2 Type II and ISO 27001 Certification: Strengthening Trust in AI Security
Promptfoo achieves SOC 2 Type II and ISO 27001 compliance, demonstrating enterprise-grade security for AI red teaming and LLM evaluation tools.

ModelAudit vs ModelScan: Comparing ML Model Security Scanners
Compare ModelAudit and ModelScan for ML model security scanning.

Harder, Better, Prompter, Stronger: AI system prompt hardening
Learn techniques for hardening AI prompts against injection attacks and security vulnerabilities with Promptfoo examples.

Promptfoo vs PyRIT: A Practical Comparison of LLM Red Teaming Tools
Detailed comparison of Promptfoo and Microsoft's PyRIT for LLM security testing.

Promptfoo vs Garak: Choosing the Right LLM Red Teaming Tool
Compare Promptfoo and Garak for LLM security testing.

How to Red Team Gemini: Complete Security Testing Guide for Google's AI Models
Comprehensive guide to red teaming Google Gemini models for multimodal vulnerabilities across text, vision, and code generation.

Next Generation of Red Teaming for LLM Agents
Promptfoo introduces its next-generation red teaming agent, designed for enterprise-grade LLM agents.

Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security
Promptfoo reaches 100K users! Learn about our journey from prompt evaluation to AI red teaming and what's next for AI security.

How to Red Team GPT: Complete Security Testing Guide for OpenAI Models
OpenAI's latest GPT models are more capable but also more vulnerable.

How to Red Team Claude: Complete Security Testing Guide for Anthropic Models
Claude is known for safety, but how secure is it really? Step-by-step guide to red teaming Anthropic's models and uncovering hidden vulnerabilities.

A2A Protocol: The Universal Language for AI Agents
Dive into the A2A protocol - the standardized communication layer that enables AI agents to discover, connect, and collaborate securely.

Inside MCP: A Protocol for AI Integration
A hands-on exploration of the Model Context Protocol - the standard that connects AI systems with real-world tools and data.

The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code
Explore how invisible Unicode characters can be used to manipulate AI coding assistants and LLMs, potentially leading to security vulnerabilities in your code.

OWASP Red Teaming: A Practical Guide to Getting Started
OWASP released the first official red teaming guide for AI systems.

Misinformation in LLMs: Causes and Prevention Strategies
LLMs can spread false information at scale.

Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI
LLMs can leak training data, PII, and corporate secrets.

Understanding AI Agent Security
AI agents are powerful but vulnerable.