📄️ ASCII Smuggling
The ASCII Smuggling plugin tests an AI system's vulnerability to attacks that use invisible Unicode characters (such as Unicode tag characters) to smuggle hidden instructions into otherwise innocuous text.
📄️ Function-Level Authorization
The BFLA (Broken Function Level Authorization) red teaming plugin is designed to test an AI system's ability to maintain proper authorization controls for specific functions or actions (OWASP API 5).
📄️ Object-Level Authorization
The BOLA (Broken Object Level Authorization) red teaming plugin is designed to test an AI system's vulnerability to attacks that attempt to access or modify resources belonging to other users or outside the authorized scope (OWASP API 1).
📄️ Competitors
The Competitors red teaming plugin is designed to test whether an AI system can be influenced to speak positively about or recommend its competitors.
📄️ Contracts
The Contracts red teaming plugin is designed to test whether an AI system can be influenced to enter into unintended business contracts or make unauthorized commitments.
📄️ Cross-Session Leak
The Cross-Session Leak plugin tests an AI system's vulnerability to information leaks across different user sessions.
📄️ Debug Access
The Debug Access red teaming plugin is designed to test whether an AI system has an exposed debugging interface or responds to debugging commands.
📄️ Excessive Agency
The Excessive Agency red teaming plugin tests whether an AI is aware of its own capabilities and limitations by prompting it with requests that a text-based LLM cannot possibly fulfill.
📄️ Hallucination
The Hallucination red teaming plugin is designed to test whether a large language model can be led to fabricate answers or generate false information.
📄️ Harmful Content
The Harmful Content plugin is designed to test an AI system's ability to handle and respond appropriately to various types of harmful or sensitive content. This plugin covers a wide range of potentially problematic areas, from violent crimes to misinformation.
📄️ Hijacking
The Hijacking red teaming plugin is designed to test whether a large language model can be distracted from its intended system purpose.
📄️ Imitation
The Imitation red teaming plugin is designed to test whether an AI system can be influenced to imitate a specific person, brand, or organization.
📄️ Indirect Prompt Injection
The Indirect Prompt Injection red teaming plugin tests an AI system's vulnerability to attacks where untrusted content is injected into the prompt, potentially influencing the model's behavior.
📄️ Overreliance
The Overreliance red teaming plugin helps identify vulnerabilities where an AI model might accept and act upon incorrect or unrealistic user assumptions without proper verification or correction. It is part of the overall red teaming toolkit for Large Language Models (LLMs) and generative AI systems.
📄️ Custom policy
The Policy red teaming plugin is a customizable tool designed to test whether an AI system adheres to specific policies or guidelines.
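As a rough sketch of how a custom policy might be supplied, the promptfoo-style YAML below enables a policy plugin with a free-form policy statement. The plugin id `policy`, the `config.policy` key, and the example policy text are assumptions for illustration, not confirmed configuration; see the plugin's page for the exact options.

```yaml
# Hypothetical redteam configuration sketch (plugin id and config keys are assumptions).
redteam:
  plugins:
    - id: policy
      config:
        # Free-form policy statement the target system is expected to follow.
        policy: 'Never reveal internal pricing or offer discounts above 10%.'
```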
📄️ Politics
The Politics red teaming plugin is designed to test whether an AI system can be influenced to make political statements or express political opinions.
📄️ Prompt Extraction
The Prompt Extraction red teaming plugin tests an AI system's vulnerability to attacks aimed at extracting the system prompt from the model. The system prompt consists of instructions given to the model to guide its behavior.
📄️ RBAC
The RBAC (Role-Based Access Control) red teaming plugin is designed to test an AI system's ability to maintain proper authorization controls for different user roles.
📄️ Shell Injection
The Shell Injection plugin is designed to test an AI system's vulnerability to attacks that attempt to execute unauthorized shell commands.
📄️ SQL Injection
The SQL Injection red teaming plugin is designed to test an AI system's vulnerability to attacks that attempt to execute unauthorized database queries.
📄️ SSRF
The SSRF (Server-Side Request Forgery) red teaming plugin is designed to test an AI system's vulnerability to attacks that attempt to make the server-side application fetch resources from unexpected or unauthorized destinations.
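For orientation, the sketch below shows how a handful of the plugins listed above might be enabled together in a promptfoo-style redteam configuration. The plugin ids (`bola`, `bfla`, `sql-injection`, `ssrf`) are assumptions derived from the plugin names and should be checked against each plugin's page.

```yaml
# Hypothetical configuration sketch; plugin ids are assumptions, not confirmed identifiers.
redteam:
  plugins:
    - bola           # Object-level authorization probes
    - bfla           # Function-level authorization probes
    - sql-injection  # Unauthorized database query attempts
    - ssrf           # Server-side request forgery attempts
```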