Overview

Coevolved ships with an OpenAI provider that maps its core types to the OpenAI Chat Completions API:
  • PromptPayload → OpenAI messages
  • ToolSpec → OpenAI tools (function calling schema)
  • OpenAI response → LLMResponse (text + tool calls + usage)
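
Concretely, the OpenAI-side shapes are the standard Chat Completions formats. As an illustration (the example tool and messages below are made up; only the surrounding schema is OpenAI's documented format):

# What the provider sends for one tool: OpenAI's documented
# function-calling schema. The example tool itself is illustrative.
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# PromptPayload maps to the familiar messages list:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Oslo?"},
]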

Setup

Set your OpenAI API key, then initialize the OpenAI client and wrap it in the provider:

export OPENAI_API_KEY="..."

from openai import OpenAI
from coevolved.core import OpenAIProvider

client = OpenAI()
provider = OpenAIProvider(client)

Models and config

Use LLMConfig to configure the model and generation parameters:
from coevolved.core import LLMConfig

config = LLMConfig(
    model="gpt-4o-mini",
    temperature=0,
)
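
Putting the provider and config together, a call might look like the sketch below. The generate method name and the PromptPayload constructor are assumptions for illustration; this page does not show the invocation API.

# Hypothetical invocation: `generate` and the PromptPayload
# constructor are assumed, not confirmed by this page.
from coevolved.core import PromptPayload

payload = PromptPayload(messages=[
    {"role": "user", "content": "Summarize Coevolved in one sentence."},
])

response = provider.generate(payload, config)  # method name assumed
print(response.text)   # generated text
print(response.usage)  # token usage, per the LLMResponse mapping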
If you need to pass additional OpenAI request options (timeouts, etc.), you can do either or both of the following (a sketch follows this list):
  • Supply request_options when creating OpenAIProvider(...)
  • Put request_options under LLMConfig.metadata["request_options"]
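
A sketch of both routes is below; the timeout value, the assumption that metadata is an LLMConfig constructor keyword, and the assumption that per-config options override provider-level defaults are illustrative, not confirmed by this page.

from openai import OpenAI
from coevolved.core import LLMConfig, OpenAIProvider

client = OpenAI()

# Route 1: provider-level defaults, applied to every request.
provider = OpenAIProvider(client, request_options={"timeout": 30.0})

# Route 2: per-config options via metadata (assumed to override
# the provider-level defaults when both are given).
config = LLMConfig(
    model="gpt-4o-mini",
    temperature=0,
    metadata={"request_options": {"timeout": 60.0}},
)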

Tool calling

Tool calling requires three pieces (see the sketch after this list):
  1. tool steps created with tool_step(...)
  2. tool specs generated from those steps
  3. an LLMConfig that includes those specs
At runtime, the provider:
  • Sends tool schemas to OpenAI as tools
  • Parses tool calls and exposes them as LLMResponse.tool_calls
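
The sketch below walks through the three pieces. This page names tool_step(...) but does not show its import path, its signature, the spec-generation helper, or the LLMConfig field that carries the specs, so those names are hypothetical.

# Sketch under assumptions: tool_step's import path and signature,
# the tool_specs helper, the `tools` field on LLMConfig, and the
# provider.generate call are all assumed, not confirmed here.
from coevolved.core import LLMConfig, tool_step, tool_specs

def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"

# 1. A tool step created with tool_step(...).
weather = tool_step(get_weather)

# 2. Tool specs generated from those steps.
specs = tool_specs([weather])

# 3. An LLMConfig that includes those specs.
config = LLMConfig(model="gpt-4o-mini", tools=specs)

# The provider sends the schemas to OpenAI as tools and parses any
# tool calls into LLMResponse.tool_calls.
response = provider.generate(payload, config)
for call in response.tool_calls:
    print(call)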

Next steps