2025
- January 7 - Jailbreaking LLMs: A Comprehensive Guide (With Examples)
- January 7 - Defending Against Data Poisoning Attacks on LLMs: A Comprehensive Guide
- January 18 - How to Red Team a LangChain Application: Complete Security Testing Guide
- January 28 - 1,156 Questions Censored by DeepSeek
- February 3 - What are the Security Risks of Deploying DeepSeek-R1?
- February 14 - Understanding AI Agent Security
- March 11 - Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI
- March 19 - Misinformation in LLMs: Causes and Prevention Strategies
- March 25 - OWASP Red Teaming: A Practical Guide to Getting Started
- April 10 - The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code
- May 6 - Inside MCP: A Protocol for AI Integration
- May 12 - A2A Protocol: The Universal Language for AI Agents
- May 22 - How to Red Team Claude: Complete Security Testing Guide for Anthropic Models
- June 7 - How to Red Team GPT: Complete Security Testing Guide for OpenAI Models
- June 10 - Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security
- June 15 - Next Generation of Red Teaming for LLM Agents
- June 18 - How to Red Team Gemini: Complete Security Testing Guide for Google's AI Models
- June 26 - Promptfoo vs Garak: Choosing the Right LLM Red Teaming Tool
- June 27 - Promptfoo vs PyRIT: A Practical Comparison of LLM Red Teaming Tools
- July 1 - Harder, Better, Prompter, Stronger: AI system prompt hardening
- July 6 - ModelAudit vs ModelScan: Comparing ML Model Security Scanners
- July 11 - Promptfoo Achieves SOC 2 Type II and ISO 27001 Certification: Strengthening Trust in AI Security
- July 14 - OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
- July 14 - The Promptfoo MCP Proxy: Enterprise MCP Security
- July 15 - System Cards Go Hard
- July 22 - AI Red Teaming for complete first-timers
- July 24 - Join Promptfoo at Hacker Summer Camp 2025
- July 24 - Evaluating political bias in LLMs
- July 29 - Promptfoo Raises $18.4M Series A to Build the Definitive AI Security Stack
2024
- July 1 - Automated Jailbreaking Techniques with DALL-E: Complete Red Team Guide
- July 23 - Promptfoo Raises $5M to Fix Vulnerabilities in AI Applications
- August 14 - New Red Teaming Plugins for LLM Agents: Enhancing API Security
- August 21 - Promptfoo for Enterprise: AI Evaluation and Red Teaming at Scale
- September 26 - Jailbreaking Black-Box LLMs Using Promptfoo: A Complete Walkthrough
- October 4 - How Much Does Foundation Model Security Matter?
- October 8 - Preventing Bias & Toxicity in Generative AI
- October 8 - Understanding Excessive Agency in LLMs
- October 9 - Prompt Injection: A Comprehensive Guide
- October 14 - How Do You Secure RAG Applications?
- October 17 - Does Fuzzing LLMs Actually Work?
- November 4 - RAG Data Poisoning: Key Concepts Explained
- November 5 - Introducing GOAT—Promptfoo's Latest Strategy
- November 20 - How to Red Team a HuggingFace Model: Complete Security Testing Guide
- November 23 - How to Red Team an Ollama Model: Complete Local LLM Security Testing Guide
- December 10 - Leveraging Promptfoo for EU AI Act Compliance
- December 21 - How to run CyberSecEval
- December 22 - Red Team Your LLM with BeaverTails
- December 31 - Beyond DoS: How Unbounded Consumption is Reshaping LLM Security