# Echo Provider
The Echo Provider is a simple utility provider that returns the input prompt as the output. It's particularly useful for testing, debugging, and validating pre-generated outputs without making any external API calls.
## Configuration
To use the Echo Provider, set the provider ID to `echo` in your configuration file:

```yaml
providers:
  - echo
  # or
  - id: echo
    label: pass through provider
```
## Response Format
The Echo Provider returns a complete `ProviderResponse` object with the following fields:

- `output`: The original input string
- `cost`: Always 0
- `cached`: Always false
- `tokenUsage`: Set to `{ total: 0, prompt: 0, completion: 0 }`
- `isRefusal`: Always false
- `metadata`: Any additional metadata provided in the context
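For illustration, echoing the prompt "Hello, world!" produces a response shaped roughly like the object below. The values follow the field list above; this is a sketch, not a literal dump of promptfoo's internal object.

```js
// Illustrative shape of the Echo Provider's response for the prompt "Hello, world!"
// (values follow the field list above; the real object may carry extra fields)
const response = {
  output: 'Hello, world!', // the prompt, echoed back unchanged
  cost: 0, // no API call is made, so cost is always 0
  cached: false,
  tokenUsage: { total: 0, prompt: 0, completion: 0 },
  isRefusal: false,
  metadata: {}, // any metadata passed in the call context is forwarded here
};
```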
## Usage
The Echo Provider requires no additional configuration and returns the input after performing any variable substitutions.
## Example
```yaml
providers:
  - echo
  - openai:chat:gpt-4.1-mini

prompts:
  - 'Summarize this: {{text}}'

tests:
  - vars:
      text: 'The quick brown fox jumps over the lazy dog.'
    assert:
      - type: contains
        value: 'quick brown fox'
      - type: similar
        value: '{{text}}'
        threshold: 0.75
```
In this example, the Echo Provider returns the exact input after variable substitution, while the OpenAI provider generates a summary.
## Use Cases and Working with Pre-generated Outputs
The Echo Provider is useful for:

- **Debugging and Testing Prompts**: Ensure prompts and variable substitutions work correctly before using complex providers.
- **Assertion and Pre-generated Output Evaluation**: Test assertion logic on known inputs and validate pre-generated outputs without new API calls.
- **Testing Transformations**: Test how transformations affect the output without the variability of an LLM response.
- **Mocking in Test Environments**: Use as a drop-in replacement for other providers in test environments when you don't want to make actual API calls (see the sketch after this list).
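As a sketch of the mocking use case, a JavaScript config can swap the real provider for `echo` based on an environment variable. This assumes a JavaScript config file (e.g. `promptfooconfig.js`) is used; the `NODE_ENV` check is purely illustrative, not something promptfoo requires.

```js
// promptfooconfig.js — illustrative sketch; assumes a JavaScript config file is used.
// In test environments, the echo provider stands in for the real LLM provider.
const isTest = process.env.NODE_ENV === 'test';

module.exports = {
  prompts: ['Summarize this: {{text}}'],
  providers: isTest ? ['echo'] : ['openai:chat:gpt-4.1-mini'],
  tests: [
    {
      vars: { text: 'The quick brown fox jumps over the lazy dog.' },
      assert: [{ type: 'contains', value: 'quick brown fox' }],
    },
  ],
};
```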
### Evaluating Logged Production Outputs
A common pattern is evaluating LLM outputs that were already generated in production. This allows you to run assertions against real production data without making new API calls.
Use your logged output directly as the prompt:
```yaml
prompts:
  - '{{logged_output}}'

providers:
  - echo

tests:
  - vars:
      logged_output: 'Paris is the capital of France.'
    assert:
      - type: llm-rubric
        value: 'Answer is factually correct'
      - type: contains
        value: 'Paris'
```
The echo provider returns the prompt as-is, so your logged output flows directly to assertions without any API calls.
For JSON-formatted production logs, use a default transform to extract specific fields:
```yaml
prompts:
  - '{{logged_output}}'

providers:
  - echo

defaultTest:
  options:
    # Extract just the response field from all logged outputs
    transform: 'JSON.parse(output).response'

tests:
  - vars:
      # Production logs often contain JSON strings
      logged_output: '{"response": "Paris is the capital of France.", "confidence": 0.95, "model": "gpt-5"}'
    assert:
      - type: llm-rubric
        value: 'Answer is factually correct'
  - vars:
      logged_output: '{"response": "London is in England.", "confidence": 0.98, "model": "gpt-5"}'
    assert:
      - type: contains
        value: 'London'
```
This pattern is particularly useful for:
- Post-deployment evaluation of production prompts
- Regression testing against known outputs
- A/B testing assertion strategies on historical data
- Validating system behavior without API costs
For loading large volumes of logged outputs, test cases can be generated dynamically from CSV files, Python scripts, JavaScript functions, or JSON.
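For example, a JavaScript file can read logged responses and return one test case per entry. This is a minimal sketch: the file name, log path, and log format (one JSON object per line) are hypothetical, and it assumes the config can point `tests` at a JavaScript file that exports a function returning test cases.

```js
// generate_tests.js — hypothetical helper that turns logged outputs into test cases.
// Assumes production_logs.jsonl contains one JSON object per line,
// e.g. {"response": "Paris is the capital of France.", "model": "gpt-5"}
const fs = require('fs');

module.exports = async function () {
  const lines = fs.readFileSync('production_logs.jsonl', 'utf8').trim().split('\n');

  return lines.map((line) => {
    const entry = JSON.parse(line);
    return {
      vars: { logged_output: entry.response },
      assert: [{ type: 'llm-rubric', value: 'Answer is factually correct' }],
    };
  });
};
```

The config would then reference this file from `tests` (for example, `tests: file://generate_tests.js`) while keeping `echo` as the provider.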