How Do You Secure RAG Applications?

The Promptfoo panda reaching for the great orb of knowledge, otherwise known as a vector database.
Latest Posts

Prompt Injection: A Comprehensive Guide

In August 2024, security researcher Johann Rehberger uncovered a critical vulnerability in Microsoft 365 Copilot: through a sophisticated prompt injection attack, he demonstrated how sensitive company data could be secretly exfiltrated.

Understanding Excessive Agency in LLMs

Excessive agency in LLMs is a broad security risk where AI systems can do more than they should.

Preventing Bias & Toxicity in Generative AI

Preventing Bias & Toxicity in Generative AI

When asked to generate recommendation letters for 200 U.S. …

How Much Does Foundation Model Security Matter?

At the heart of every Generative AI application is the LLM foundation model (or models) it uses.

Jailbreaking Black-Box LLMs Using Promptfoo: A Walkthrough

Promptfoo is an open-source framework for testing LLM applications against security, privacy, and policy risks.

Promptfoo for Enterprise

Today, we're announcing new Enterprise features for teams developing and securing LLM applications.

New Red Teaming Plugins for LLM Agents: Enhancing API Security

We're excited to announce the release of three new red teaming plugins designed specifically for Large Language Model (LLM) agents with access to internal APIs.

Promptfoo raises $5M to fix vulnerabilities in AI applications

Today, we’re excited to announce that Promptfoo has raised a $5M seed round led by Andreessen Horowitz to help developers find and fix vulnerabilities in their AI applications.

Automated jailbreaking techniques with Dall-E

We all know that image models like OpenAI’s Dall-E can be jailbroken to generate violent, disturbing, and offensive images.