📄️ Base64 Encoding
The Base64 Encoding strategy is a simple strategy that tests an AI system's ability to handle and process encoded inputs, potentially bypassing certain content filters or detection mechanisms.
📄️ Iterative Jailbreaks
The Iterative Jailbreaks strategy is a technique designed to systematically probe and potentially bypass an AI system's constraints by repeatedly refining a single-shot prompt.
📄️ Leetspeak
The Leetspeak strategy is a text obfuscation technique that replaces standard letters with numbers or special characters.
📄️ Multi-turn Jailbreaks
The Crescendo strategy is a multi-turn jailbreak technique that gradually escalates the potential harm of prompts, exploiting the fuzzy boundary between acceptable and unacceptable responses.
📄️ Multilingual
The Multilingual strategy tests an AI system's ability to handle and process inputs in multiple languages, potentially uncovering inconsistencies in behavior across different languages or bypassing language-specific content filters.
📄️ Prompt Injection
The Prompt Injection strategy tests common direct prompt injection vulnerabilities in LLMs.
📄️ ROT13 Encoding
The ROT13 Encoding strategy is a simple letter substitution technique that rotates each letter in the text by 13 positions in the alphabet.
📄️ Tree-based Jailbreaks
The Tree-based Jailbreaks strategy is an advanced technique designed to systematically explore and potentially bypass an AI system's constraints by creating a branching structure of single-shot prompts.