Skip to main content

6 posts tagged with "company-update"

View All Tags

The Promptfoo MCP Proxy: Enterprise MCP Security

Steven Klein
Principal Engineer

Model Context Protocol (MCP) adoption is skyrocketing, NPM installations are up to 4.7m for the week of July 7th, 2025. Today we're announcing the Promptfoo MCP Proxy to manage the security risks for enterprises using MCP servers. MCP servers aren't inherently insecure, but the way they're being used creates huge vulnerabilities. Through our work with some of the world's largest companies, we've discovered alarmingly insecure patterns.

Promptfoo Achieves SOC 2 Type II and ISO 27001 Certification: Strengthening Trust in AI Security

Vanessa Sauter
Principal Solutions Architect

We're proud to announce that Promptfoo is now SOC 2 Type II compliant and ISO 27001 certified β€” two globally recognized milestones that affirm our commitment to the highest standards in information security, privacy, and risk management.

At Promptfoo, we help organizations secure their generative AI applications by proactively identifying and fixing vulnerabilities through adversarial emulation and red teaming. These new certifications reflect our continued investment in building secure, trustworthy AI systems β€” not only in our technology, but across our entire organization.

Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security

Michael D'Angelo
CTO & Co-founder

We're thrilled to announce that Promptfoo now has over 100,000 users! This milestone reflects our incredible community of developers, enterprises, and AI enthusiasts who trust us to help build reliable and secure AI applications.

Promptfoo was born out of a simple idea: prompt engineering should be systematic and measurable. Since our first open-source release in 2023, we've focused on giving developers the tools to evaluate models and prompts with confidence. Along the way, we discovered that happy path testing wasn't enoughβ€”teams also needed a way to proactively identify vulnerabilities. That realization sparked our deep investment in AI red teaming and GenAI Application Security.

Promptfoo for Enterprise: AI Evaluation and Red Teaming at Scale

Ian Webster
Engineer & OWASP Gen AI Red Teaming Contributor

Today we're announcing new Enterprise features for teams developing and securing LLM applications.

Over the past few months, Promptfoo has launched open-source support for the latest adversarial ML research techniques, new integrations with popular LLM providers, and features for evaluating complex RAG and agent architectures.

We're now expanding to support developers in larger teams. We've developed new capabilities for companies that recognize the importance of LLM security, seek tools for collaboration across development and security teams, and want higher visibility into the security of their LLM applications.

Promptfoo Raises $5M to Fix Vulnerabilities in AI Applications

Ian Webster
Engineer & OWASP Gen AI Red Teaming Contributor

Today, we're excited to announce that Promptfoo has raised a $5M seed round led by Andreessen Horowitz to help developers find and fix vulnerabilities in their AI applications.

AI adoption is at a critical juncture. Companies racing to build with LLMs face mounting security risks, legal uncertainty, and potential brand damage from new pitfalls like training data leaks and insecure integrations.

We believe in a pragmatic approach to AI security that hinges on fortifying the application layer, where the model meets the real world. Here, design choices hold immense power to shape the security of the entire system.

Our mission: Empower every builder to systematically find and fix vulnerabilities in their LLM apps.

We are the architects of adversarial AI. We've built the first pentesting product that specifically targets AI applications. We craft malicious inputs, we simulate real-world threats, and we push LLMs to their breaking point.

We are the champions of open-source intelligence. We believe AI should be built on a culture of transparency and accountability.

How we got here​

As an engineering leader at Discord, I started the Platform Ecosystem org and spent years building Developer APIs at scale. When I switched to leading a team that built LLM-based products for millions of users, I learned firsthand that the most difficult part of shipping AI is making sure that the end result is safe, secure, and reliable. Because the surface area of LLMs is so large, traditional testing and security methods were not effective.

I designed the first version of Promptfoo for people like me β€” application developers β€” with a focus on making it as easy as possible to test, discover, and fix LLM failures.

Along the way, I was joined by my co-founder Michael, a longtime friend and engineering leader who scaled ML to hundreds of enterprises serving over 100 million people at Smile Identity. His hands-on experience in defending AI applications against real threats embodies our practical approach to security.

Every AI builder needs a red team​

The big AI companies rely on specialized "red teams" dedicated to probing models for major security and safety vulnerabilities. But they don't always care about the same things as application developers.

That's why on top of common exploits like jailbreaks and prompt injections, we see problems that occur only at the application level – like AIs promising free cars, customer service agents revealing database information, and homework tutors spouting political opinions. These issues cripple trust and pose real threats to businesses using AI.

We are building the AI red team for everyone else by empowering developers to find and fix the failures that matter most to them before they reach users. We've focused on issues that affect the application layer specifically – like context poisoning, tool misuse, use-case hijacking, and many more business-specific risks.

Today, over 25,000 software engineers at companies like Shopify, Amazon, and Anthropic are fortifying their apps with our powerful open-source tool for evaluating AI behavior.

The future of AI is open-source​

We believe that AI thrives on open-source. The best security and evaluation tools will be grounded in the open-source principles of transparency and interoperability, not opaqueness and proprietary lock-in.

With this in mind, we are developing Promptfoo as the open-source standard for performing AI pentests and red team evaluations.

We're honored to have the support of Andreessen Horowitz, who share our vision for open-source, application-focused AI security. We're also grateful for the participation of industry leaders like Tobi Lutke (CEO, Shopify), Stanislav Vishnevskiy (CTO, Discord), Frederic Kerrest (Vice-Chairman & Co-Founder, Okta), and many other top executives in the technology, security, and financial industries. Their belief in Promptfoo validates our approach, and their expertise strengthens our mission.

We also acknowledge with gratitude the collaborative efforts of the open-source community. We remain deeply committed to Promptfoo's roots as an open-source project. Our contributors continue to be a driving force behind Promptfoo's development and success.

Take control of AI security with Promptfoo​

The conversation around AI security is broken. Traditional security methods fall short for complex AI systems, and regulation misses the actual risks to consumers.

The answer lies in empowering every developer to proactively find and fix vulnerabilities in their applications.

Ready to build trustworthy, reliable AI applications? Reach out to discuss options for your company.

Ian Webster

CEO, Promptfoo