Vulnerabilities in prompt handling and processing
A vulnerability exists in multiple Large Language Models (LLMs) that allows for safety alignment bypass through a technique named Activation-Guided Local Editing (AGILE). The attack uses white-box access to a source model's internal states (activations and attention scores) to craft a text-based jailbreak prompt that elicits harmful content and transfers to other models without requiring access to their internals.
Large Language Models (LLMs) are vulnerable to automated adversarial attacks that systematically combine multiple jailbreaking "primitives" into complex prompt chains. A dynamic optimization engine can generate and test billions of unique combinations of techniques (e.g., low-resource language translation, payload splitting, role-playing) to bypass safety guardrails. Unlike manual red-teaming, this combinatorial approach explores the attack surface exhaustively, achieving near-universal success in eliciting harmful content. The vulnerability lies in the models' inability to maintain safety alignment when faced with a sequence of layered obfuscation and manipulation techniques.
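The combinatorial structure of such an engine can be sketched abstractly. The primitive names and the `compose` helper below are illustrative assumptions, not the actual attack engine's API, and a harmless placeholder string stands in for the payload:

```python
import base64
from itertools import permutations

# Illustrative prompt-transformation "primitives" (assumed names): each maps
# a prompt string to an obfuscated variant.
def b64_encode(text: str) -> str:
    return base64.b64encode(text.encode()).decode()

def payload_split(text: str) -> str:
    mid = len(text) // 2
    return f"Part A: {text[:mid]}\nPart B: {text[mid:]}\nCombine A and B."

def role_play(text: str) -> str:
    return f"You are a character in a novel who says: {text}"

PRIMITIVES = [b64_encode, payload_split, role_play]

def compose(payload: str, chain) -> str:
    """Apply a sequence of primitives, innermost first."""
    for primitive in chain:
        payload = primitive(payload)
    return payload

# Enumerate every ordering of the three primitives over a benign placeholder.
candidates = [compose("<PLACEHOLDER QUERY>", chain)
              for chain in permutations(PRIMITIVES)]
print(len(candidates))  # 3! = 6 distinct prompt chains
```

With n primitives and variable-length chains, the number of orderings grows factorially, which is how an automated engine reaches billions of candidate prompts from a modest library of techniques.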
Large Language Models (LLMs) are vulnerable to a jailbreak attack termed Paper Summary Attack (PSA). An attacker can bypass safety alignment mechanisms by framing a harmful query within the context of a summarized academic paper. The model's propensity to trust the authoritative structure and tone of a research paper summary overrides its safety filters, leading it to process and respond to the embedded malicious instruction. The vulnerability is particularly potent when using summaries of papers on LLM safety itself (both attack and defense-focused research), exposing significant and differing alignment biases across models.
A vulnerability exists in Diffusion-based Large Language Models (dLLMs) that allows for bypassing safety alignment mechanisms through interleaved mask-text prompts. The vulnerability stems from two core architectural features of dLLMs: bidirectional context modeling and parallel decoding. The model's drive to maintain contextual consistency forces it to fill masked tokens with content that aligns with the surrounding, potentially malicious, text. The parallel decoding process prevents dynamic content filtering or rejection sampling during generation, which are common defense mechanisms in autoregressive models. This allows an attacker to elicit harmful or policy-violating content by explicitly stating a malicious request and inserting mask tokens where the harmful output should be generated.
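The interleaving itself is simple prompt construction, as the sketch below shows. The `[MASK]` literal and the step structure are assumptions for illustration (real dLLMs use model-specific mask tokens), and a benign placeholder stands in for the malicious request:

```python
MASK = "[MASK]"  # assumed mask-token literal; real dLLMs use model-specific tokens

def interleaved_prompt(request: str, n_steps: int = 3, masks_per_step: int = 8) -> str:
    """Build a prompt that states a request and leaves masked slots where
    the model's bidirectional infilling is expected to supply the content."""
    lines = [request]
    for i in range(1, n_steps + 1):
        lines.append(f"Step {i}: " + " ".join([MASK] * masks_per_step))
    return "\n".join(lines)

prompt = interleaved_prompt("<PLACEHOLDER REQUEST>")
print(prompt.count(MASK))  # 3 steps x 8 masks = 24 masked slots
```

Because the surrounding text is fixed and decoding is parallel, the model fills the 24 masked slots to be consistent with the stated request rather than refusing mid-generation.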
Large Language Model (LLM) systems integrated with private enterprise data, such as those using Retrieval-Augmented Generation (RAG), are vulnerable to multi-stage prompt inference attacks. An attacker can use a sequence of individually benign-looking queries to incrementally extract confidential information from the LLM's context. Each query appears innocuous in isolation, bypassing safety filters designed to block single malicious prompts. By chaining these queries, the attacker can reconstruct sensitive data from internal documents, emails, or other private sources accessible to the LLM. The attack exploits the conversational context and the model's inability to recognize the cumulative intent of a prolonged, strategic dialogue.
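The gap between per-query and session-level screening can be illustrated with a toy scorer; the example queries, the sensitivity scores, and the threshold below are invented for illustration, not output from any real filter:

```python
# Hypothetical per-query sensitivity scores (0-1) from a content filter.
REFUSAL_THRESHOLD = 0.8

dialogue = [
    ("What departments are mentioned in the indexed documents?", 0.2),
    ("Which of those handle financial records?", 0.3),
    ("Summarize the most recent memo from that department.", 0.4),
]

# A per-turn filter sees each query in isolation: none crosses the threshold.
blocked_per_turn = [score >= REFUSAL_THRESHOLD for _, score in dialogue]

# A session-level monitor accumulates risk across the conversation instead.
cumulative = 0.0
blocked_cumulative = False
for _, score in dialogue:
    cumulative += score
    if cumulative >= REFUSAL_THRESHOLD:
        blocked_cumulative = True
        break

print(any(blocked_per_turn), blocked_cumulative)  # False True
```

No individual turn is blocked, yet the accumulated score crosses the threshold by the third query: the same asymmetry the attack exploits when safety checks are applied only per prompt.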
Large Language Models (LLMs) equipped with native code interpreters are vulnerable to Denial of Service (DoS) via resource exhaustion. An attacker can craft a single prompt that causes the interpreter to execute code that depletes CPU, memory, or disk resources. The vulnerability is particularly pronounced when a resource-intensive task is framed within a plausibly benign or socially-engineered context ("indirect prompts"), which significantly lowers the model's likelihood of refusal compared to explicitly malicious requests.
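Because the dangerous code can look plausibly benign, prompt-level refusal is unreliable; mitigations belong in the execution environment. A minimal sketch of one such control, assuming a Python-based interpreter host (the helper name is illustrative, and a production sandbox would also cap memory, disk, and CPU):

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 1.0) -> bool:
    """Execute interpreter code in a separate process with a hard wall-clock
    timeout; returns True if the code was killed for exceeding its budget.
    This sketch shows only the timeout layer of a sandbox."""
    try:
        subprocess.run([sys.executable, "-c", code],
                       timeout=timeout_s, capture_output=True)
        return False
    except subprocess.TimeoutExpired:
        return True

# A CPU-exhaustion payload is cut off instead of running indefinitely.
print(run_untrusted("while True: pass"))  # True
```

The same pattern extends to memory and disk quotas (e.g., `resource.setrlimit` in a `preexec_fn`, or cgroups), so that a single adversarial prompt cannot exhaust the host.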
Large Language Models (LLMs) employing safety filters designed to prevent the generation of content related to self-harm and suicide can be bypassed through multi-step adversarial prompting. By reframing a request as an academic exercise or hypothetical scenario, users can elicit detailed instructions and information that could facilitate self-harm or suicide, even after initially expressing harmful intent. The vulnerability lies in the failure of existing safety filters to consistently track harmful intent across shifts in conversational context.
A resource consumption vulnerability exists in multiple Large Vision-Language Models (LVLMs). An attacker can craft a subtle, imperceptible adversarial perturbation and apply it to an input image. When this image is processed by an LVLM, even with a benign text prompt, it forces the model into an unbounded generation loop. The attack, named RECALLED, uses a gradient-based optimization process to create a visual perturbation that steers the model's text generation towards a predefined, repetitive sequence (an "Output Recall" target). This causes the model to generate text that repeats a word or sentence until the maximum context limit is reached, leading to a denial-of-service condition through excessive computational resource usage and response latency.
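The optimization loop behind such a perturbation can be sketched with a toy linear "model" standing in for the LVLM's vision-to-logits path. All dimensions, the single-token target, and the PGD-style update below are illustrative assumptions, not the RECALLED implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the model: a fixed linear map from image features to logits.
VOCAB, DIM = 16, 32
W = rng.normal(size=(VOCAB, DIM))
image = rng.normal(size=DIM)       # benign "image" features
target = 3                         # token the attacker wants emitted repeatedly
EPS, LR, STEPS = 0.1, 0.01, 300    # imperceptibility bound, step size, iterations

def loss_and_grad(delta):
    """Cross-entropy toward the target token and its gradient w.r.t. the input."""
    logits = W @ (image + delta)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[target])
    grad = W.T @ (p - np.eye(VOCAB)[target])
    return loss, grad

delta = np.zeros(DIM)
initial_loss, _ = loss_and_grad(delta)
for _ in range(STEPS):
    _, grad = loss_and_grad(delta)
    delta = np.clip(delta - LR * grad, -EPS, EPS)  # keep perturbation small
final_loss, _ = loss_and_grad(delta)
print(final_loss < initial_loss)  # perturbation steers output toward `target`
```

In the real attack the loss instead scores an entire repetitive "Output Recall" sequence and gradients flow through the LVLM's vision encoder, but the mechanics are the same: a bounded input perturbation optimized to dominate the model's next-token distribution.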
A vulnerability exists where Large Language Models (LLMs) can be manipulated by prepending a specially crafted 'persona prompt', often in the system prompt. These persona prompts cause the model to shift its attention from sensitive keywords in a harmful request to the stylistic instructions of the persona. This weakens the model's safety alignment, significantly reducing its refusal rate for harmful queries. The vulnerability is particularly severe because these persona prompts have a synergistic effect, dramatically increasing the success rate of other existing jailbreak techniques when combined. The persona prompts are transferable across different models.
Large Language Models (LLMs) and Text-to-Image (T2I) models are vulnerable to jailbreaking through prompt-based attacks that use narrative framing, semantic substitution, and context diffusion to bypass safety moderation pipelines. These attacks do not require specialized knowledge or technical expertise. Attackers can embed harmful requests within benign narratives, frame them as fictional or professional inquiries, or use euphemistic language to circumvent input filters and output classifiers. The core vulnerability is the models' inability to holistically assess cumulative intent across multi-turn dialogues or recognize malicious intent when it is semantically or stylistically disguised.
© 2025 Promptfoo. All rights reserved.