Archive

How to Red Team GPT: Complete Security Testing Guide for OpenAI Models

Ian Webster · 6/7/2025

OpenAI's latest GPT models are more capable but also more vulnerable.

How to Red Team Claude: Complete Security Testing Guide for Anthropic Models

Ian Webster · 5/22/2025

Claude is known for safety, but how secure is it really? A step-by-step guide to red teaming Anthropic's models and uncovering hidden vulnerabilities.

A2A Protocol: The Universal Language for AI Agents

Asmi Gulati · 5/12/2025

Dive into the A2A protocol - the standardized communication layer that enables AI agents to discover, connect, and collaborate securely.

Inside MCP: A Protocol for AI Integration

Asmi Gulati · 5/6/2025

A hands-on exploration of Model Context Protocol - the standard that connects AI systems with real-world tools and data.

The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code

Asmi Gulati · 4/10/2025

Explore how invisible Unicode characters can be used to manipulate AI coding assistants and LLMs, potentially leading to security vulnerabilities in your code.

OWASP Red Teaming: A Practical Guide to Getting Started

Vanessa Sauter · 3/25/2025

OWASP released the first official red teaming guide for AI systems.

Misinformation in LLMs: Causes and Prevention Strategies

Vanessa Sauter · 3/19/2025

LLMs can spread false information at scale.

Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI

Vanessa Sauter · 3/11/2025

LLMs can leak training data, PII, and corporate secrets.

Understanding AI Agent Security

Vanessa Sauter · 2/14/2025

AI agents are powerful but vulnerable.

What are the Security Risks of Deploying DeepSeek-R1?

Vanessa Sauter · 2/3/2025

Our red team analysis found DeepSeek-R1 fails 60%+ of harmful content tests.