Open-Sourcing ModelAudit: Security Scanner for ML Model Files
Before joining Promptfoo, I worked on model scanning at Databricks. Teams pulled models from public registries, ran torch.load(), and treated the artifact like inert data. It isn't: pickle-based model files are executable at load time.
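To make that concrete, here is a minimal sketch (not from any real model, and using plain pickle rather than torch.load, which uses pickle under the hood by default) of why unpickling a file executes code: the protocol calls an object's __reduce__ method, which can return any callable plus arguments.

```python
import pickle

class MaliciousPayload:
    """Stand-in for an object embedded in a pickled model file."""

    def __reduce__(self):
        # A real attack would return something like (os.system, ("...",)).
        # Here we harmlessly call print to show code runs during load.
        return (print, ("arbitrary code ran during load",))

# What a poisoned "model file" could contain on disk:
blob = pickle.dumps(MaliciousPayload())

# Simply loading the bytes triggers the callable.
pickle.loads(blob)
```

A static scanner can flag this without loading the file, because the pickle opcode stream names the callable (here, builtins.print) before anything executes.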
Since joining Promptfoo last September, I've been building ModelAudit, a static security scanner for ML model files. Along the way we filed seven GHSAs against existing scanners, including a CVSS 10.0 universal bypass, and validated ModelAudit against thousands of real models with zero false positives. Last week we released it as an MIT-licensed open-source project.
