Overview

OpenAIProvider is Coevolved’s built-in adapter for OpenAI chat completions. It supports:
  • Non-streaming (complete)
  • Streaming (stream)
  • Tool calling via ToolSpec → OpenAI tools
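
For orientation, here is a minimal end-to-end sketch. The OpenAIProvider constructor matches the configuration section below; the LLMRequest, PromptPayload, and LLMConfig constructor arguments are assumptions inferred from the request mapping further down, so check them against your installed version.

from openai import OpenAI
from coevolved.core import OpenAIProvider, LLMRequest, PromptPayload, LLMConfig

provider = OpenAIProvider(OpenAI())

# Assumed constructor shapes; field names follow the mappings below.
request = LLMRequest(
    prompt=PromptPayload(text="Say hello in one sentence."),
    context=LLMConfig(model="gpt-4o-mini"),
)

response = provider.complete(request)   # non-streaming: returns an LLMResponse
print(response.text)

for chunk in provider.stream(request):  # streaming: yields LLMStreamChunk instances
    if chunk.text:
        print(chunk.text, end="", flush=True)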

Provider configuration

Constructor:
from openai import OpenAI
from coevolved.core import OpenAIProvider

provider = OpenAIProvider(
    OpenAI(),
    request_options={"timeout": 60},  # optional defaults for every request
)
Per-request overrides can be provided via LLMConfig.metadata["request_options"] (merged into the provider defaults).
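
For example, to extend the 60-second default above to 120 seconds for a single request (a sketch assuming LLMConfig accepts model and metadata keyword arguments; only metadata["request_options"] itself is documented here):

from coevolved.core import LLMConfig

# Per-request options are merged over the provider defaults,
# so a request using this config is sent with a 120 s timeout.
config = LLMConfig(
    model="gpt-4o-mini",
    metadata={"request_options": {"timeout": 120}},
)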

Request mapping

LLMRequest mapping:
  • request.context.model → model
  • request.prompt.messages (or text) → messages
  • request.context.tools → tools (OpenAI function tool schema)
  • request.context.tool_choice → tool_choice
  • temperature and max_tokens map directly when set (see the request sketch below)
Prompt conversion:
  • If PromptPayload.messages is present, it is used as-is (with normalization for tool call message shapes).
  • If PromptPayload.text is present, it becomes a single user message.
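
Putting the two lists together, here is a hedged sketch of a request that exercises each mapping. The ToolSpec fields (name, description, parameters) and the LLMConfig keyword names are assumptions drawn from the arrows above.

from coevolved.core import LLMRequest, PromptPayload, LLMConfig, ToolSpec

# Assumed ToolSpec shape; the provider converts it to the OpenAI
# function tool schema.
weather_tool = ToolSpec(
    name="get_weather",
    description="Look up the current weather for a city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)

request = LLMRequest(
    # messages are used as-is; a bare text prompt would become one user message
    prompt=PromptPayload(messages=[{"role": "user", "content": "Weather in Oslo?"}]),
    context=LLMConfig(
        model="gpt-4o-mini",
        tools=[weather_tool],   # → tools
        tool_choice="auto",     # → tool_choice
        temperature=0.2,        # forwarded when set
        max_tokens=256,         # forwarded when set
    ),
)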

Response mapping

The provider returns LLMResponse with:
  • text: from the first choice’s message content
  • tool_calls: parsed from OpenAI tool calls (arguments JSON parsed when possible)
  • finish_reason, model, usage mapped when present
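
A hedged sketch of handling such a response; the tool-call field names (name, arguments) are assumptions, since only the parsing behavior is documented above.

response = provider.complete(request)
if response.tool_calls:
    for call in response.tool_calls:
        # arguments arrive JSON-parsed when the provider could parse them
        print(call.name, call.arguments)
else:
    print(response.text)
print(response.finish_reason, response.model, response.usage)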
Streaming yields LLMStreamChunk instances with:
  • text deltas (when present)
  • Optional tool_call_delta fields for tool call assembly
  • finish_reason on completion
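
And a minimal consumption loop for the stream, using the chunk fields listed above (tool-call assembly is left schematic, since the exact delta shape depends on your version):

parts: list[str] = []
for chunk in provider.stream(request):
    if chunk.text:              # text delta
        parts.append(chunk.text)
    if chunk.tool_call_delta:   # accumulate partial tool calls here
        pass
    if chunk.finish_reason:     # set on the final chunk
        break
print("".join(parts))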

Next steps