## Overview

`OpenAIProvider` is Coevolved’s built-in adapter for OpenAI chat completions.
It supports:

- Non-streaming completions (`complete`)
- Streaming completions (`stream`)
- Tool calling via `ToolSpec` → OpenAI `tools`
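The `ToolSpec` → OpenAI `tools` conversion can be sketched as below. Coevolved’s actual `ToolSpec` fields are not shown in this section, so the `name`/`description`/`parameters` inputs here are assumptions; the output shape is the standard OpenAI function-tool schema.

```python
from typing import Any


def tool_spec_to_openai(name: str, description: str,
                        parameters: dict[str, Any]) -> dict[str, Any]:
    """Wrap ToolSpec-style fields in the OpenAI function tool schema.

    `parameters` is expected to be a JSON Schema object describing the
    tool's arguments.
    """
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }


spec = tool_spec_to_openai(
    "get_weather",
    "Look up current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
```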
## Provider configuration

The constructor reads extra request options from `LLMConfig.metadata["request_options"]`, which are merged into the provider defaults.
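A minimal sketch of that merge, assuming both the provider defaults and `request_options` are flat dicts (the real option shapes are not specified here):

```python
from typing import Any


def merged_request_options(defaults: dict[str, Any],
                           metadata: dict[str, Any]) -> dict[str, Any]:
    """Overlay metadata["request_options"] (if any) onto provider defaults.

    Per-request values win over defaults; a missing key leaves the
    defaults untouched.
    """
    options = dict(defaults)
    options.update(metadata.get("request_options", {}))
    return options


opts = merged_request_options(
    {"timeout": 30},
    {"request_options": {"timeout": 60, "seed": 7}},
)
```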
## Request mapping

`LLMRequest` fields map to the OpenAI request as follows:

- `request.context.model` → `model`
- `request.prompt.messages` (or `text`) → `messages`
- `request.context.tools` → `tools` (OpenAI function tool schema)
- `request.context.tool_choice` → `tool_choice`
- `temperature` and `max_tokens` map directly when set

Prompt handling:

- If `PromptPayload.messages` is present, it is used as-is (with normalization for tool-call message shapes).
- If `PromptPayload.text` is present, it becomes a single user message.
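The mapping above can be sketched as a plain function over dict-shaped inputs. The real `LLMRequest` is presumably a typed object; the field names here follow the list above, and everything else is an assumption.

```python
from typing import Any


def build_openai_kwargs(request: dict[str, Any]) -> dict[str, Any]:
    """Map an LLMRequest-like dict onto OpenAI chat-completion kwargs."""
    ctx = request["context"]
    prompt = request["prompt"]

    # messages are used as-is; bare text becomes a single user message
    if prompt.get("messages") is not None:
        messages = prompt["messages"]
    else:
        messages = [{"role": "user", "content": prompt["text"]}]

    kwargs: dict[str, Any] = {"model": ctx["model"], "messages": messages}
    if ctx.get("tools"):
        kwargs["tools"] = ctx["tools"]
    if ctx.get("tool_choice") is not None:
        kwargs["tool_choice"] = ctx["tool_choice"]
    # optional sampling parameters map directly when set
    for key in ("temperature", "max_tokens"):
        if ctx.get(key) is not None:
            kwargs[key] = ctx[key]
    return kwargs
```

Unset fields are simply omitted from the kwargs, so the OpenAI defaults apply.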
## Response mapping

The provider returns an `LLMResponse` with:

- `text`: the first choice’s message content
- `tool_calls`: parsed from OpenAI tool calls (arguments JSON is parsed when possible)
- `finish_reason`, `model`, and `usage`: mapped when present
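The “parsed when possible” behavior for tool-call arguments can be sketched as follows; the input is the OpenAI tool-call shape, and the output field names are illustrative, not Coevolved’s actual `tool_calls` type.

```python
import json
from typing import Any


def parse_tool_call(raw: dict[str, Any]) -> dict[str, Any]:
    """Parse one OpenAI tool call, decoding the arguments JSON if valid.

    OpenAI delivers arguments as a JSON string; if it does not decode,
    the raw string is kept so no information is lost.
    """
    args = raw["function"]["arguments"]
    try:
        parsed = json.loads(args)
    except (json.JSONDecodeError, TypeError):
        parsed = args  # fall back to the unparsed string
    return {
        "id": raw["id"],
        "name": raw["function"]["name"],
        "arguments": parsed,
    }
```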
Streaming yields `LLMStreamChunk` instances with:

- `text` deltas (when present)
- Optional `tool_call_delta` fields for tool-call assembly
- `finish_reason` on completion
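A consumer can reassemble the stream roughly as below. The chunk fields (`text`, `tool_call_delta` with `index`/`arguments`, `finish_reason`) follow the list above, but the exact `tool_call_delta` shape is an assumption.

```python
from typing import Any


def assemble_stream(chunks: list[dict[str, Any]]):
    """Accumulate text deltas and tool-call argument fragments.

    Returns (full_text, {tool_call_index: joined_arguments}, finish_reason).
    """
    text_parts: list[str] = []
    tool_args: dict[int, list[str]] = {}
    finish_reason = None
    for chunk in chunks:
        if chunk.get("text"):
            text_parts.append(chunk["text"])
        delta = chunk.get("tool_call_delta")
        if delta:
            # argument JSON arrives in fragments keyed by tool-call index
            tool_args.setdefault(delta["index"], []).append(delta["arguments"])
        if chunk.get("finish_reason"):
            finish_reason = chunk["finish_reason"]
    return ("".join(text_parts),
            {i: "".join(parts) for i, parts in tool_args.items()},
            finish_reason)
```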