📄️ Command line
Explore promptfoo CLI commands for LLM testing: run evaluations, generate datasets, scan models for vulnerabilities, and automate testing workflows from the command line.
📄️ Node package
Integrate LLM testing into Node.js apps with promptfoo's evaluate() function. Configure providers, run test suites, and analyze results using TypeScript/JavaScript APIs, as sketched at the end of this page.
📄️ Web viewer
Compare and analyze LLM outputs side by side with promptfoo's web viewer. Share results, rate responses, and track evaluations in real time for AI testing workflows.
📄️ Sharing
Collaborate on LLM evaluations by sharing results via the cloud platform, an enterprise deployment, or self-hosted infrastructure.
📄️ Self-hosting
Learn how to self-host promptfoo using Docker, Docker Compose, or Helm. This comprehensive guide walks you through setup, configuration, and troubleshooting.
📄️ Troubleshooting
Debug and resolve common promptfoo issues with solutions for memory optimization, API configuration, Node.js errors, and native builds in your LLM testing pipeline.
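
As a rough illustration of the Node package entry above, here is a minimal sketch assuming the package's default export exposes an `evaluate()` function that accepts prompts, providers, and tests; the provider id and assertion shown are placeholders, so see the Node package page for the authoritative API.

```ts
import promptfoo from 'promptfoo';

async function main() {
  // Evaluate one prompt template against a single provider and test case.
  const results = await promptfoo.evaluate({
    prompts: ['Translate the following to French: {{text}}'],
    providers: ['openai:gpt-4o-mini'], // placeholder provider id
    tests: [
      {
        vars: { text: 'Hello, world' },
        assert: [{ type: 'contains', value: 'Bonjour' }], // placeholder assertion
      },
    ],
  });

  // Inspect the raw evaluation summary.
  console.log(JSON.stringify(results, null, 2));
}

main();
```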