System Cards Go Hard
Research Analysis

Master LLM system cards for responsible AI deployment with transparency, safety documentation, and compliance requirements.

Tabs Fakier · Jul 15, 2025
The Promptfoo MCP Proxy: Enterprise MCP Security
Company Update

Learn about the security risks introduced by MCP servers and how to mitigate them using the Promptfoo MCP Proxy, an enterprise solution for MCP security.

Steven Klein · Jul 14, 2025
OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
Compliance Framework

Master OWASP Top 10 LLM security vulnerabilities with practical mitigation strategies in this comprehensive 2025 guide.

Tabs Fakier · Jul 14, 2025
Promptfoo Achieves SOC 2 Type II and ISO 27001 Certification: Strengthening Trust in AI Security
Company Update

Promptfoo achieves SOC 2 Type II and ISO 27001 compliance, demonstrating enterprise-grade security for AI red teaming and LLM evaluation tools.

Vanessa Sauter · Jul 11, 2025
ModelAudit vs ModelScan: Comparing ML Model Security Scanners
Tool Comparison

Compare ModelAudit and ModelScan for ML model security scanning.

Ian Webster · Jul 6, 2025
Harder, Better, Prompter, Stronger: AI system prompt hardening
Technical Guide

Learn techniques for hardening AI prompts against injection attacks and security vulnerabilities with Promptfoo examples.

Tabs Fakier · Jul 1, 2025
Promptfoo vs PyRIT: A Practical Comparison of LLM Red Teaming Tools
Tool Comparison

Detailed comparison of Promptfoo and Microsoft's PyRIT for LLM security testing.

Ian Webster · Jun 27, 2025
Promptfoo vs Garak: Choosing the Right LLM Red Teaming Tool
Tool Comparison

Compare Promptfoo and Garak for LLM security testing.

Ian Webster · Jun 26, 2025
How to Red Team Gemini: Complete Security Testing Guide for Google's AI Models
Technical Guide

Comprehensive guide to red teaming Google Gemini models for multimodal vulnerabilities across text, vision, and code generation.

Ian Webster · Jun 18, 2025
Next Generation of Red Teaming for LLM Agents
Company Update

Promptfoo is introducing our next-generation red teaming agent, designed for enterprise-grade LLM agents.

Steven Klein · Jun 15, 2025
Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security
Company Update

Promptfoo reaches 100K users! Learn about our journey from prompt evaluation to AI red teaming and what's next for AI security.

Michael D'Angelo · Jun 10, 2025
How to Red Team GPT: Complete Security Testing Guide for OpenAI Models
Technical Guide

OpenAI's latest GPT models are more capable but also more vulnerable.

Ian Webster · Jun 7, 2025
How to Red Team Claude: Complete Security Testing Guide for Anthropic Models
Technical Guide

Claude is known for safety, but how secure is it really? A step-by-step guide to red teaming Anthropic's models and uncovering hidden vulnerabilities.

Ian Webster · May 22, 2025
A2A Protocol: The Universal Language for AI Agents
Technical Guide

Dive into the A2A protocol - the standardized communication layer that enables AI agents to discover, connect, and collaborate securely.

Asmi Gulati · May 12, 2025
Inside MCP: A Protocol for AI Integration
Technical Guide

A hands-on exploration of the Model Context Protocol - the standard that connects AI systems with real-world tools and data.

Asmi Gulati · May 6, 2025
The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code
Security Vulnerability

Explore how invisible Unicode characters can be used to manipulate AI coding assistants and LLMs, potentially leading to security vulnerabilities in your code.

Asmi Gulati · Apr 10, 2025
OWASP Red Teaming: A Practical Guide to Getting Started
Compliance Framework

OWASP released the first official red teaming guide for AI systems.

Vanessa Sauter · Mar 25, 2025
Misinformation in LLMs: Causes and Prevention Strategies
Security Vulnerability

LLMs can spread false information at scale.

Vanessa Sauter · Mar 19, 2025
Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI
Security Vulnerability

LLMs can leak training data, PII, and corporate secrets.

Vanessa Sauter · Mar 11, 2025
Understanding AI Agent Security
Security Vulnerability

AI agents are powerful but vulnerable.

Vanessa Sauter · Feb 14, 2025