Vulnerabilities in model fine-tuning processes
A vulnerability in the fine-tuning process of Large Language Models (LLMs) allows for the automated generation of stealthy backdoor attacks using an autonomous LLM agent. This method, termed AutoBackdoor, creates a pipeline to generate semantically coherent trigger phrases and corresponding poisoned instruction-response pairs. Unlike traditional backdoor attacks that rely on fixed, often anomalous triggers, this technique produces natural language triggers that are contextually relevant and difficult to detect. Fine-tuning a model on a small number of these agent-generated samples (as few as 1%) is sufficient to implant a persistent backdoor.
A vulnerability exists in Large Language Models (LLMs) that support fine-tuning, allowing an attacker to bypass safety alignments using a small, benign dataset. The attack, "Attack via Overfitting," is a two-stage process. In Stage 1, the model is fine-tuned on a small set of benign questions (e.g., 10) paired with identical, repetitive refusal answers. This induces an overfitted state where the model learns to refuse all prompts, creating a sharp minimum in the loss landscape and making it highly sensitive to parameter changes. In Stage 2, the overfitted model is further fine-tuned on the same benign questions, but with their standard, helpful answers. This second fine-tuning step causes catastrophic forgetting of the general refusal behavior, leading to a collapse of safety alignment and causing the model to comply with harmful and malicious instructions. The attack is highly stealthy as the fine-tuning data appears benign to content moderation systems.
Large Language Models (LLMs) that use special tokens to define conversational structure (e.g., via chat templates) are vulnerable to a jailbreak attack named MetaBreak. An attacker can inject these special tokens, or regular tokens with high semantic similarity in the embedding space, into a user prompt. This manipulation allows the attacker to bypass the model's internal safety alignment and external content moderation systems. The attack leverages four primitives:
Large Language Models (LLMs) exhibit a significantly lower safety threshold when prompted in low-resource languages, such as Singlish, Malay, and Tamil, compared to high-resource languages like English. This vulnerability allows for the generation of toxic, biased, and hateful content through simple prompts. The models are susceptible to "toxicity jailbreaks" where providing a few toxic examples in-context (few-shot prompting) causes a substantial increase in the generation of harmful outputs, bypassing their safety alignments. The vulnerability is pronounced in tasks involving conversational response, question-answering, and content composition.
A vulnerability exists in LLM-based Multi-Agent Systems (LLM-MAS) where an attacker with control over the communication network can perform a multi-round, adaptive, and stealthy message tampering attack. By intercepting and subtly modifying inter-agent messages over multiple conversational turns, an attacker can manipulate the system's collective reasoning process. The attack (named MAST in the reference paper) uses a fine-tuned policy model to generate a sequence of small, context-aware perturbations that are designed to evade detection by remaining semantically and stylistically similar to the original messages. The cumulative effect of these modifications can steer the entire system toward an attacker-defined goal, causing it to produce incorrect, malicious, or manipulated outputs.
Large Language Model (LLM) systems integrated with private enterprise data, such as those using Retrieval-Augmented Generation (RAG), are vulnerable to multi-stage prompt inference attacks. An attacker can use a sequence of individually benign-looking queries to incrementally extract confidential information from the LLM's context. Each query appears innocuous in isolation, bypassing safety filters designed to block single malicious prompts. By chaining these queries, the attacker can reconstruct sensitive data from internal documents, emails, or other private sources accessible to the LLM. The attack exploits the conversational context and the model's inability to recognize the cumulative intent of a prolonged, strategic dialogue.
A contextual priming vulnerability, termed "Response Attack," exists in certain multimodal and large language models. The vulnerability allows an attacker to bypass safety alignments by crafting a dialogue history where a prior, fabricated model response contains mildly harmful or scaffolding content. This primes the model to generate policy-violating content in response to a subsequent trigger prompt. The model's safety mechanisms, which primarily evaluate the user's current prompt, are circumvented because the harmful intent is established through the preceding, seemingly valid context. The attack is effective in two modes: Direct Response Injection (DRI), which injects a complete harmful response, and Scaffolding Response Injection (SRI), which injects a high-level outline.
A vulnerability in fine-tuning-based large language model (LLM) unlearning allows malicious actors to craft manipulated forgetting requests. By subtly increasing the frequency of common benign tokens within the forgetting data, the attacker can cause the unlearned model to exhibit unintended unlearning behaviors when these benign tokens appear in normal user prompts, leading to a degradation of model utility for legitimate users. This occurs because existing unlearning methods fail to effectively distinguish between benign tokens and those truly related to the target knowledge being unlearned.
Large Language Models (LLMs) employing alignment-based defenses against prompt injection and jailbreak attacks exhibit vulnerability to an informed white-box attack. This attack, termed Checkpoint-GCG, leverages intermediate model checkpoints from the alignment training process to initialize the Greedy Coordinate Gradient (GCG) attack. By using each checkpoint as a stepping stone, Checkpoint-GCG successfully finds adversarial suffixes that bypass defenses achieving significantly higher attack success rates than standard GCG initialized with naive methods. This is particularly impactful as Checkpoint-GCG can discover universal adversarial suffixes effective across multiple inputs.
FC-Attack leverages automatically generated flowcharts containing step-by-step descriptions derived or rephrased from harmful queries, combined with a benign textual prompt, to jailbreak Large Vision-Language Models (LVLMs). The vulnerability lies in the model's susceptibility to visual prompts containing harmful information within the flowcharts, thus bypassing safety alignment mechanisms.
© 2025 Promptfoo. All rights reserved.