# OpenLLM
To use OpenLLM with promptfoo, we take advantage of OpenLLM's support for OpenAI-compatible endpoints.
- Start the server using the `openllm start` command.
- Set environment variables:
  - Set `OPENAI_BASE_URL` to `http://localhost:8001/v1`.
  - Set `OPENAI_API_KEY` to a dummy value such as `foo`.
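  For example, in a POSIX shell:

  ```sh
  # Point the OpenAI-compatible provider at the local OpenLLM server
  export OPENAI_BASE_URL=http://localhost:8001/v1
  # A dummy value is sufficient for the key
  export OPENAI_API_KEY=foo
  ```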
- Depending on your use case, use the `chat` or `completion` model types (a complete configuration sketch follows this list).

  **Chat format example**: To run a Llama2 eval using chat-formatted prompts, first start the model:

  ```sh
  openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf
  ```

  Then set the promptfoo configuration:
  ```yaml
  providers:
    - openai:chat:llama2
  ```

  **Completion format example**: To run a Flan eval using completion-formatted prompts, first start the model:
  ```sh
  openllm start flan-t5 --model-id google/flan-t5-large
  ```

  Then set the promptfoo configuration:
  ```yaml
  providers:
    - openai:completion:flan-t5
  ```
- See the OpenAI provider documentation for more details.
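Putting the chat example together, a minimal `promptfooconfig.yaml` might look like the sketch below. Only the provider id comes from the example above; the prompt, test variable, and assertion are illustrative placeholders.

```yaml
# Sketch of a full config for the chat-formatted Llama2 example.
# The prompt and test case are illustrative placeholders.
prompts:
  - 'Reply concisely: {{question}}'

providers:
  - openai:chat:llama2

tests:
  - vars:
      question: 'What is the capital of France?'
    assert:
      - type: icontains
        value: 'Paris'
```

With `OPENAI_BASE_URL` and `OPENAI_API_KEY` set as described above, run the eval as usual, e.g. `npx promptfoo eval`. For the completion example, swap the provider id for `openai:completion:flan-t5`.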