
16 posts tagged with "red-teaming"


How to Red Team an Ollama Model: Complete Local LLM Security Testing Guide

Ian Webster
Engineer & OWASP Gen AI Red Teaming Contributor

Want to test the safety and security of a model hosted on Ollama? This guide shows you how to use Promptfoo to systematically probe for vulnerabilities through adversarial testing (red teaming).

We'll use Llama 3.2 3B as an example, but this guide works with any Ollama model.
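
As a rough sketch of the setup (the plugin and strategy selection below is illustrative, and the exact model tag depends on what you've pulled into Ollama), a minimal `promptfooconfig.yaml` might look like this:

```yaml
# promptfooconfig.yaml -- a minimal red team setup against a local Ollama model.
# Assumes the model has already been pulled (e.g. `ollama pull llama3.2:3b`)
# and that the Ollama server is running locally.
targets:
  - ollama:chat:llama3.2:3b

redteam:
  # Describing the application helps generate attacks that are relevant to it
  purpose: 'A general-purpose local assistant used for internal experimentation'
  plugins:
    - harmful          # harmful-content probes
    - pii              # attempts to extract personal information
  strategies:
    - jailbreak        # iterative jailbreak attempts
    - prompt-injection # instructions smuggled into user content
```

Running `npx promptfoo@latest redteam run` then generates and executes the adversarial test cases, and `npx promptfoo@latest redteam report` renders the report shown below.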

Here's an example of what the red team report looks like:

[Image: example LLM red team report]

Does Fuzzing LLMs Actually Work?

Vanessa Sauter
Principal Solutions Architect

Fuzzing has been a tried-and-true pentesting method for years. In essence, it injects malformed or unexpected inputs to identify application weaknesses. The payloads are typically static vectors pushed automatically into injection points within a system; based on the application's responses, a pentester can then infer what types of vulnerabilities may be present. While fuzzing may be tempting for testing LLM applications, it is rarely successful.

Jailbreaking Black-Box LLMs Using Promptfoo: A Complete Walkthrough

Vanessa Sauter
Principal Solutions Architect

Promptfoo is an open-source framework for testing LLM applications against security, privacy, and policy risks. It is designed to help developers easily discover and fix critical LLM failures. Promptfoo also offers red teaming tools that can be run against external endpoints. These attacks are ideal for internal red teaming exercises, third-party penetration tests, and bug bounty programs, ultimately saving security researchers dozens of hours of manual prompt engineering and adversarial testing.

In this blog, we'll demonstrate how to use Promptfoo's red teaming tools in a black-box LLM security assessment. Using Wiz's AI CTF, Prompt Airlines, we'll walk you step by step through Promptfoo's configuration and ultimately uncover the malicious queries that break the chatbot's guidelines.
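
To give a sense of the black-box setup, here is a minimal sketch of a promptfoo HTTP target; the URL, request body shape, and response path are placeholders, and the walkthrough covers the exact values for the CTF endpoint:

```yaml
# promptfooconfig.yaml -- pointing the red team at an external black-box chat endpoint.
# The URL, body fields, and response path are illustrative placeholders.
targets:
  - id: https://chatbot.example.com/api/chat
    config:
      method: POST
      headers:
        Content-Type: application/json
      body:
        message: '{{prompt}}'            # each adversarial prompt is substituted here
      transformResponse: 'json.response' # placeholder path to the reply in the JSON response

redteam:
  purpose: 'An airline customer-support chatbot (CTF target)'
  plugins:
    - harmful
    - pii
  strategies:
    - jailbreak
```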

New Red Teaming Plugins for LLM Agents: Enhancing API Security

Ian Webster
Engineer & OWASP Gen AI Red Teaming Contributor

We're excited to announce the release of three new red teaming plugins designed specifically for Large Language Model (LLM) agents with access to internal APIs. These plugins address critical security vulnerabilities outlined in the OWASP API Security Top 10 (see the configuration sketch after the list):

  1. Broken Object Level Authorization (BOLA)
  2. Broken Function Level Authorization (BFLA)
  3. Server-Side Request Forgery (SSRF)
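
As a minimal sketch of how these plugins can be enabled (the endpoint and purpose below are placeholders for your own agent):

```yaml
# promptfooconfig.yaml -- enabling the new API-security plugins for an LLM agent.
# The target endpoint and purpose are placeholders.
targets:
  - id: https://agent.internal.example.com/api/chat
    config:
      method: POST
      body:
        message: '{{prompt}}'

redteam:
  purpose: 'An internal support agent that can call order and account APIs'
  plugins:
    - bola   # Broken Object Level Authorization probes
    - bfla   # Broken Function Level Authorization probes
    - ssrf   # Server-Side Request Forgery probes
```

Each plugin generates test cases that try to coax the agent into accessing objects, invoking functions, or fetching URLs it shouldn't, and grades the responses for signs of a successful attack.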