Skip to main content

2 posts tagged with "data-poisoning"

View All Tags

Defending Against Data Poisoning Attacks on LLMs: A Comprehensive Guide

Vanessa Sauter
Principal Solutions Architect

Data poisoning remains a top concern on the OWASP Top 10 for 2025. However, the scope of data poisoning has expanded since the 2023 version. Data poisoning is no longer strictly a risk during the training of Large Language Models (LLMs); it now encompasses all three stages of the LLM lifecycle: pre-training, fine-tuning, and retrieval from external sources. OWASP also highlights the risk of model poisoning from shared repositories or open-source platforms, where models may contain backdoors or embedded malware.

When exploited, data poisoning can degrade model performance, produce biased or toxic content, exploit downstream systems, or tamper with the model's generation capabilities.

Understanding how these attacks work and implementing preventative measures is crucial for developers, security engineers, and technical leaders responsible for maintaining the security and reliability of these systems. This comprehensive guide delves into the nature of data poisoning attacks and offers strategies to safeguard against these threats.

RAG Data Poisoning: Key Concepts Explained

Ian Webster
Engineer & OWASP Gen AI Red Teaming Contributor

AI systems are under attack - and this time, it's their knowledge base that's being targeted. A new security threat called data poisoning lets attackers manipulate AI responses by corrupting the very documents these systems rely on for accurate information.

Retrieval-Augmented Generation (RAG) was designed to make AI smarter by connecting language models to external knowledge sources. Instead of relying solely on training data, RAG systems can pull in fresh information to provide current, accurate responses. With over 30% of enterprise AI applications now using RAG, it's become a key component of modern AI architecture.

But this powerful capability has opened a new vulnerability. Through data poisoning, attackers can inject malicious content into knowledge databases, forcing AI systems to generate harmful or incorrect outputs.

Data Poisoning

These attacks are remarkably efficient - research shows that just five carefully crafted documents in a database of millions can successfully manipulate AI responses 90% of the time.