Overview

Coevolved models each LLM interaction as a Step:
  1. build a prompt from state
  2. call a provider (LLMProvider)
  3. attach a structured response (LLMResponse) back onto state
This keeps LLM calls observable and composable, like every other step.
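
To make the three phases concrete, here is a minimal sketch of the lifecycle. The dict-based state and the run_llm_step name are illustrative assumptions, not Coevolved's actual API; the real llm_step wiring may differ.

```python
# Hypothetical sketch of the step lifecycle; run_llm_step and the
# dict-based state are illustrative, not Coevolved's real API.
def run_llm_step(state: dict, provider) -> dict:
    # 1. Build a prompt from state.
    request = {"prompt": f"Summarize: {state['topic']}"}
    # 2. Call the provider (anything exposing complete(request)).
    response = provider.complete(request)
    # 3. Attach the structured response back onto state.
    return {**state, "llm_response": response}
```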

LLMConfig, requests, and responses

Core types:
  • LLMConfig: model + generation parameters + available tools
  • LLMRequest: prompt + config
  • LLMResponse: text + tool calls + usage metadata
llm_step(...) handles the request lifecycle and emits prompt/response events.
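
As a rough illustration, the three types might fit together as sketched below. Only the fields named above (model, generation parameters, tools, prompt, text, tool calls, usage metadata) come from this page; the exact field names, types, and defaults are assumptions.

```python
# Illustrative shapes only: field names and defaults beyond those
# described above are assumptions, not Coevolved's exact definitions.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class LLMConfig:
    model: str                                  # e.g. "gpt-4o-mini"
    temperature: float = 0.7                    # generation parameter
    tools: list[dict[str, Any]] = field(default_factory=list)


@dataclass
class LLMRequest:
    prompt: str
    config: LLMConfig


@dataclass
class LLMResponse:
    text: str
    tool_calls: list[dict[str, Any]] = field(default_factory=list)
    usage: dict[str, int] = field(default_factory=dict)  # token counts
```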

Providers

A provider implements the LLMProvider protocol:
  • complete(request) -> LLMResponse
Coevolved includes an OpenAI provider that maps PromptPayload and ToolSpec into OpenAI chat completion requests.
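
Because the protocol is a single method, a custom provider is easy to stub out for tests. Continuing from the dataclass sketch above, a minimal example (EchoProvider is a made-up test double, not part of Coevolved):

```python
from typing import Protocol


class LLMProvider(Protocol):
    # Structural protocol: any object with this method satisfies it.
    def complete(self, request: LLMRequest) -> LLMResponse: ...


class EchoProvider:
    """Toy provider for tests: echoes the prompt back as the response."""

    def complete(self, request: LLMRequest) -> LLMResponse:
        return LLMResponse(text=f"echo: {request.prompt}")
```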

Streaming (optional)

If your provider supports streaming, implement StreamingLLMProvider:
  • stream(request) -> Iterator[LLMStreamChunk]
Streaming is useful for UIs (token-by-token rendering) and long outputs, but it adds complexity around tool call deltas and partial usage reporting.
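
A sketch of what producing and consuming a stream might look like, assuming each chunk carries an incremental text delta. The LLMStreamChunk fields shown here are assumptions; a real provider would also surface tool-call deltas and usage on the final chunk.

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class LLMStreamChunk:
    delta: str            # incremental text (assumed field name)
    done: bool = False    # a real final chunk might carry usage data


class CannedStreamingProvider:
    """Toy provider that streams a fixed reply word by word."""

    def stream(self, request) -> Iterator[LLMStreamChunk]:
        for word in "a canned reply from the model".split():
            yield LLMStreamChunk(delta=word + " ")
        yield LLMStreamChunk(delta="", done=True)


text = ""
for chunk in CannedStreamingProvider().stream(request=None):
    text += chunk.delta   # a UI would render each delta as it arrives
```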

Next steps