Featured

Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security

Promptfoo reaches 100K users! Learn about our journey from prompt evaluation to AI red teaming and what's next for AI security.

Latest Posts

How to Red Team GPT

OpenAI's GPT-4.1 and GPT-4.5 represent a significant leap in AI capabilities, especially for coding and instruction following.

How to Red Team Claude

Anthropic's Claude 4 represents a major leap in AI capabilities, especially with its extended thinking feature.

A2A Protocol: The Universal Language for AI Agents

Dive into the A2A protocol - the standardized communication layer that enables AI agents to discover, connect, and collaborate securely.

Inside MCP: A Protocol for AI Integration

A hands-on exploration of Model Context Protocol - the standard that connects AI systems with real-world tools and data.

The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code

Explore how invisible Unicode characters can be used to manipulate AI coding assistants and LLMs, potentially leading to security vulnerabilities in your code.

OWASP Red Teaming: A Practical Guide to Getting Started

While generative AI creates new opportunities for companies, it also introduces novel security risks that differ significantly from traditional cybersecurity concerns.

Misinformation in LLMs—Causes and Prevention Strategies

Misinformation in LLMs occurs when a model produces false or misleading information that is treated as credible.

Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI

Imagine deploying an LLM application only to discover it's inadvertently revealing your company's internal documents, customer data, and API keys through seemingly innocent conversations.