
JavaScript Provider

Custom JavaScript providers let you write providers in JavaScript or TypeScript to integrate with any API or service not already built into promptfoo.

Supported File Formats and Examples

promptfoo supports multiple JavaScript module formats. Complete working examples are available on GitHub.

Provider Interface

At minimum, a custom provider must implement an id method and a callApi method.

echoProvider.mjs
// Save as echoProvider.mjs for ES6 syntax, or echoProvider.js for CommonJS
export default class EchoProvider {
  id = () => 'echo';

  callApi = async (prompt, context, options) => {
    return {
      output: `Echo: ${prompt}`,
    };
  };
}

You can optionally use a constructor to initialize the provider, for example:

openaiProvider.js
const promptfoo = require('promptfoo').default;

module.exports = class OpenAIProvider {
  constructor(options) {
    this.providerId = options.id || 'openai-custom';
    this.config = options.config;
  }

  id() {
    return this.providerId;
  }

  async callApi(prompt, context, options) {
    const { data } = await promptfoo.cache.fetchWithCache(
      'https://api.openai.com/v1/chat/completions',
      {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: this.config?.model || 'gpt-5-mini',
          messages: [{ role: 'user', content: prompt }],
          max_completion_tokens: this.config?.max_tokens || 1024,
          temperature: this.config?.temperature || 1,
        }),
      },
    );

    return {
      output: data.choices[0].message.content,
      tokenUsage: data.usage,
    };
  }
};

callApi returns a ProviderResponse object with the following shape:

{
  // main response shown to users
  output: "Model response - can be text or structured data",
  error: "Error message if applicable",
  prompt: "The actual prompt sent to the LLM", // Optional: reported prompt
  tokenUsage: {
    total: 100,
    prompt: 50,
    completion: 50,
  },
  cost: 0.002,
  cached: false,
  conversationEnded: false, // Optional: set true to stop multi-turn redteam gracefully
  conversationEndReason: 'thread_closed', // Optional reason when conversationEnded=true
  metadata: {}, // Additional data
  ...
}
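
For example, a provider can end a multi-turn redteam conversation gracefully by setting conversationEnded. A minimal sketch, where callYourApi and detectThreadClosed are hypothetical placeholders for your own API call and end-of-conversation check:

conversationEndProvider.mjs
// Sketch: signal that a multi-turn conversation is over.
// callYourApi and detectThreadClosed are hypothetical placeholders.
export default class ConversationEndProvider {
  id = () => 'conversation-end';

  callApi = async (prompt, context) => {
    const reply = await callYourApi(prompt); // your own API call

    if (detectThreadClosed(reply)) {
      return {
        output: reply,
        conversationEnded: true,
        conversationEndReason: 'thread_closed',
      };
    }
    return { output: reply };
  };
}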

Context Parameter

The context parameter provides test case information and utility objects:

{
  vars: {}, // Test case variables
  prompt: {}, // Prompt template (raw, label, config)
  test: {
    // Full test case object
    vars: {},
    metadata: {
      pluginId: '...', // Redteam plugin (e.g. "promptfoo:redteam:harmful:hate")
      strategyId: '...', // Redteam strategy (e.g. "jailbreak", "prompt-injection")
    },
  },
  originalProvider: {}, // Original provider when overridden
  logger: {}, // Winston logger instance
}

For redteam evals, use context.test.metadata.pluginId and context.test.metadata.strategyId to identify which plugin and strategy generated the test case.
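
For example, a provider can read this metadata and attach it to its response for later filtering. A minimal sketch (the tagging is illustrative):

metadataAwareProvider.mjs
// Sketch: read redteam metadata from context and pass it through.
export default class MetadataAwareProvider {
  id = () => 'metadata-aware';

  callApi = async (prompt, context) => {
    const pluginId = context?.test?.metadata?.pluginId;
    const strategyId = context?.test?.metadata?.strategyId;
    context?.logger?.debug(`plugin=${pluginId} strategy=${strategyId}`);

    return {
      output: `Received: ${prompt}`,
      // Tag the response so results can be filtered by plugin/strategy
      metadata: { pluginId, strategyId },
    };
  };
}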

Reporting the Actual Prompt

If your provider dynamically generates or modifies prompts, you can report the actual prompt sent to the LLM using the prompt field in your response. This is useful for:

  • Frameworks like GenAIScript that generate prompts dynamically
  • Agent frameworks that build multi-turn conversations
  • Providers that add system instructions or modify the prompt
dynamicPromptProvider.mjs
export default class DynamicPromptProvider {
  id = () => 'dynamic-prompt';

  callApi = async (prompt, context) => {
    // Generate a different prompt dynamically
    const generatedPrompt = `System: You are helpful.\nUser: ${prompt}`;

    // Call the LLM with the generated prompt (callLLM is your own implementation)
    const response = await callLLM(generatedPrompt);

    return {
      output: response,
      prompt: generatedPrompt, // Report what was actually sent
    };
  };
}

The reported prompt is used for:

  • Display: Shown as "Actual Prompt Sent" in the web UI
  • Assertions: Prompt-based assertions like moderation check this value
  • Debugging: Helps understand what was actually sent to the LLM

See the vercel-ai-sdk example for a complete implementation.

Two-Stage Provider

twoStageProvider.js
const promptfoo = require('promptfoo').default;

module.exports = class TwoStageProvider {
  constructor(options) {
    this.providerId = options.id || 'two-stage';
    this.config = options.config;
  }

  id() {
    return this.providerId;
  }

  async callApi(prompt) {
    // First stage: fetch additional data
    const secretData = await this.fetchSecret(this.config.secretKey);

    // Second stage: call LLM with enriched prompt
    const enrichedPrompt = `${prompt}\nContext: ${secretData}`;
    const llmResponse = await this.callLLM(enrichedPrompt);

    return {
      output: llmResponse.output,
      metadata: { secretUsed: true },
    };
  }

  async fetchSecret(key) {
    // Fetch some external data needed for processing
    return `Secret information for ${key}`;
  }

  async callLLM(prompt) {
    const { data } = await promptfoo.cache.fetchWithCache(
      'https://api.openai.com/v1/chat/completions',
      {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: 'gpt-5-mini',
          messages: [{ role: 'user', content: prompt }],
        }),
      },
    );

    return {
      output: data.choices[0].message.content,
    };
  }
};

TypeScript Implementation

typedProvider.ts
import type {
  ApiProvider,
  ProviderOptions,
  ProviderResponse,
  CallApiContextParams,
} from 'promptfoo';

export default class TypedProvider implements ApiProvider {
  protected providerId: string;
  public config: Record<string, any>;

  constructor(options: ProviderOptions) {
    this.providerId = options.id || 'typed-provider';
    this.config = options.config || {};
  }

  id(): string {
    return this.providerId;
  }

  async callApi(prompt: string, context?: CallApiContextParams): Promise<ProviderResponse> {
    const username = (context?.vars?.username as string) || 'anonymous';

    return {
      output: `Hello, ${username}! You said: "${prompt}"`,
      tokenUsage: {
        total: prompt.length,
        prompt: prompt.length,
        completion: 0,
      },
    };
  }
}

TypeScript Providers in Frontend Projects

Promptfoo loads TypeScript providers in Node.js, not through your frontend bundler. If your provider imports app code from a Vite, Next.js, or Webpack project, make sure the imports are also valid from Node.

For path aliases such as @/utils, define the alias in tsconfig.json:

{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["src/*"]
    }
  }
}

Run promptfoo eval from the project root so the TypeScript loader can find that tsconfig.json.

If your provider depends on bundler-only aliases, browser-only globals, CSS imports, or frontend plugins, compile or bundle the provider to JavaScript first and reference the built file:

providers:
  - file://dist/promptfoo-provider.js

Additional Capabilities

Embeddings API

embeddingProvider.js
async callEmbeddingApi(text) {
  const response = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-embedding-3-small',
      input: text,
    }),
  });

  const data = await response.json();

  return {
    embedding: data.data[0].embedding,
    tokenUsage: {
      total: data.usage.total_tokens,
      prompt: data.usage.prompt_tokens,
      completion: 0,
    },
  };
}

Classification API

classificationProvider.js
async callClassificationApi(text) {
  return {
    classification: {
      positive: 0.75,
      neutral: 0.2,
      negative: 0.05,
    },
  };
}
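
In practice, callClassificationApi would call a real model and map its scores into this shape. A minimal sketch, assuming a hypothetical classify endpoint that returns an array of { label, score } objects:

classificationApiProvider.js
// Sketch: the endpoint URL and response shape are assumptions, not a real API.
async callClassificationApi(text) {
  const response = await fetch('https://api.example.com/classify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ input: text }),
  });
  const data = await response.json();

  // Map [{ label, score }, ...] into promptfoo's classification shape
  const classification = {};
  for (const { label, score } of data.results) {
    classification[label] = score;
  }
  return { classification };
}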

Handling Multimodal Content

Custom providers handle multimodal content the same way whether the media comes from a standard eval or a red team strategy: read the media variable from context.vars and translate it into the target API's expected payload shape.

For standard evals, provide the media value through tests[].vars, defaultTest.vars, a dataset column, or a dynamic variable:

promptfooconfig.yaml
prompts:
  - '{{image}} {{question}}'

tests:
  - vars:
      image: 'data:image/png;base64,iVBORw0KGgo...'
      question: Describe this image.

In this case, context.vars.image contains the configured value. It may be raw base64, a data: URL, an external URL, or another representation your provider knows how to forward.

For red team runs, image, audio, and video strategies generate media and store it in the template variable named by redteam.injectVar. The rendered prompt also contains the media value, but context.vars is safer because it preserves variable boundaries and avoids parsing a very long prompt.

For each red team strategy, context.vars[redteam.injectVar] contains the following, with extra context and forwarding notes:

  • image — raw PNG base64, no data: prefix. Extra context: context.vars.image_text and context.test.metadata.originalText. Wrap the value as data:image/png;base64,... for APIs that expect data URLs.
  • audio — raw MP3 base64 from remote generation, no data: prefix. Extra context: context.test.metadata.originalText. Requires remote generation; forward with MIME type audio/mpeg or your provider's equivalent audio format.
  • video — raw MP4 base64 when local FFmpeg generation succeeds. Extra context: context.vars.video_text and context.test.metadata.originalText. Install FFmpeg and set PROMPTFOO_DISABLE_REMOTE_GENERATION=true or PROMPTFOO_DISABLE_REDTEAM_REMOTE_GENERATION=true to get real MP4 bytes. If generation falls back, the value may decode to the original text instead of an MP4.

Audio and video have opposite generation requirements today: audio requires remote generation, while real MP4 video requires the local FFmpeg path. Run separate scans if you need to verify both remote audio and local MP4 handling.

multimodalProvider.js
module.exports = class MultimodalProvider {
  id() {
    return 'multimodal-provider';
  }

  async callApi(prompt, context) {
    const apiKey = process.env.OPENAI_API_KEY;
    if (!apiKey) {
      return { error: 'OPENAI_API_KEY is required' };
    }

    const imageBase64 = context.vars.image || '';
    const question = context.vars.question || 'Describe this image';

    // Red team image runs provide raw PNG base64. Eval vars may already provide a URL.
    const imageUrl = /^(data:|https?:\/\/)/.test(imageBase64)
      ? imageBase64
      : `data:image/png;base64,${imageBase64}`;

    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-5',
        messages: [
          {
            role: 'user',
            content: [
              { type: 'image_url', image_url: { url: imageUrl } },
              { type: 'text', text: question },
            ],
          },
        ],
      }),
    });

    if (!response.ok) {
      return { error: `OpenAI API error ${response.status}: ${await response.text()}` };
    }

    const result = await response.json();
    const output = result.choices?.[0]?.message?.content;
    return output
      ? { output }
      : { error: `OpenAI API returned no output: ${JSON.stringify(result)}` };
  }
};
note

injectVar defaults to the last template variable in your prompt. With {{image}} {{question}}, it defaults to question — not image. Always set injectVar explicitly when using media strategies.

Avoid logging full media strings; screenshots, audio, and video can be large or sensitive. For debugging, log length, detected MIME type, a hash, or the first few bytes after decoding instead of the full base64 payload.
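
A minimal sketch of this kind of safe logging, using Node's built-in Buffer and crypto; the magic-byte checks cover the common PNG/MP3/MP4 cases from the table above:

mediaDebug.js
// Sketch: log a safe summary of a media variable instead of the full payload.
const crypto = require('crypto');

function summarizeMedia(value) {
  const base64 = value.replace(/^data:[^;]+;base64,/, ''); // strip any data: URL prefix
  const bytes = Buffer.from(base64, 'base64');

  // Sniff a few common signatures from the first bytes
  let kind = 'unknown';
  if (bytes.subarray(0, 4).equals(Buffer.from([0x89, 0x50, 0x4e, 0x47]))) {
    kind = 'png';
  } else if (bytes.subarray(0, 3).toString() === 'ID3' || (bytes[0] === 0xff && (bytes[1] & 0xe0) === 0xe0)) {
    kind = 'mp3';
  } else if (bytes.subarray(4, 8).toString() === 'ftyp') {
    kind = 'mp4';
  }

  return {
    length: value.length,
    kind,
    sha256: crypto.createHash('sha256').update(bytes).digest('hex').slice(0, 12),
  };
}

// Usage inside callApi:
// context.logger.debug(JSON.stringify(summarizeMedia(context.vars.image)));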

See the Python provider multimodal docs for a Python example.

Cache System

The built-in caching system helps avoid redundant API calls:

cacheExample.js
// Get the cache instance
const cache = promptfoo.cache.getCache();

// Store and retrieve data
await cache.set('my-key', 'cached-value', { ttl: 3600 }); // TTL in seconds
const value = await cache.get('my-key');

// Fetch with cache wrapper
const { data, cached } = await promptfoo.cache.fetchWithCache(
  'https://api.example.com/endpoint',
  {
    method: 'POST',
    body: JSON.stringify({ query: 'data' }),
  },
  5000, // timeout in ms
);
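
For non-HTTP work you can build your own cache keys on top of get and set. A minimal sketch (the key scheme and expensiveCall are illustrative placeholders):

cachedProviderCall.js
// Sketch: cache an expensive non-HTTP call by key. expensiveCall is hypothetical.
const cache = promptfoo.cache.getCache();

async function cachedCall(prompt, config) {
  // Include everything that affects the output in the key
  const key = `my-provider:${JSON.stringify({ prompt, model: config.model })}`;

  const hit = await cache.get(key);
  if (hit) {
    return JSON.parse(hit);
  }

  const result = await expensiveCall(prompt, config); // your own expensive work
  await cache.set(key, JSON.stringify(result), { ttl: 3600 });
  return result;
}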

Configuration

Provider Configuration

promptfooconfig.yaml
providers:
  - id: file://./myProvider.mjs # ES6 modules
    label: 'My Custom API' # Display name in UI
    config:
      model: 'gpt-5'
      temperature: 0.7
      max_tokens: 2000
      custom_parameter: 'custom value'
  # - id: file://./myProvider.js # CommonJS modules
Promptfoo Cloud Feature

Available in Promptfoo Cloud deployments.

Link your local provider configuration to a cloud target using linkedTargetId:

providers:
  - id: file://./myProvider.mjs
    config:
      linkedTargetId: 'promptfoo://provider/12345678-1234-1234-1234-123456789abc'

See Linking Local Targets to Cloud for setup instructions.

Multiple Instances

multiple-providers.yaml
providers:
  - id: file:///path/to/provider.js
    label: high-temperature
    config:
      temperature: 0.9
  - id: file:///path/to/provider.js
    label: low-temperature
    config:
      temperature: 0.1

See Also