Overview
Coevolved models LLM interaction as a Step:

- build a prompt from state
- call a provider (`LLMProvider`)
- attach a structured response (`LLMResponse`) back onto state
LLMConfig, requests, and responses
Core types:

- `LLMConfig`: model + generation parameters + available tools
- `LLMRequest`: prompt + config
- `LLMResponse`: text + tool calls + usage metadata
`llm_step(...)` handles the request lifecycle and emits prompt/response events.
Providers
A provider implements the `LLMProvider` protocol:
`complete(request) -> LLMResponse`
An OpenAI provider, for example, translates `PromptPayload` and `ToolSpec` into OpenAI chat completion requests.
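Because the protocol is structural, any object with a matching `complete` method satisfies it. A toy sketch, assuming simplified `LLMRequest`/`LLMResponse` shapes (the real types carry more fields):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class LLMRequest:
    # Simplified stand-in for the library's request type.
    prompt: str
    model: str

@dataclass
class LLMResponse:
    # Simplified stand-in for the library's response type.
    text: str

class LLMProvider(Protocol):
    def complete(self, request: LLMRequest) -> LLMResponse: ...

class EchoProvider:
    """Toy provider: satisfies the protocol structurally, makes no real API call."""
    def complete(self, request: LLMRequest) -> LLMResponse:
        return LLMResponse(text=f"echo: {request.prompt}")

provider: LLMProvider = EchoProvider()
print(provider.complete(LLMRequest(prompt="hi", model="test")).text)
# → echo: hi
```

A real provider would build its vendor-specific request inside `complete` and map the vendor's response back into an `LLMResponse`.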
Streaming (optional)
If your provider supports streaming, implement `StreamingLLMProvider`:
`stream(request) -> Iterator[LLMStreamChunk]`
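A streaming provider can be sketched the same way. The `delta` field on `LLMStreamChunk` and the word-by-word chunking are assumptions for illustration; the real chunk type may differ:

```python
from dataclasses import dataclass
from typing import Iterator, Protocol

@dataclass
class LLMRequest:
    # Simplified stand-in for the library's request type.
    prompt: str

@dataclass
class LLMStreamChunk:
    # Assumed shape: an incremental piece of the response text.
    delta: str

class StreamingLLMProvider(Protocol):
    def stream(self, request: LLMRequest) -> Iterator[LLMStreamChunk]: ...

class WordStreamProvider:
    """Toy streaming provider: yields one chunk per word of the reply."""
    def stream(self, request: LLMRequest) -> Iterator[LLMStreamChunk]:
        for word in f"echo: {request.prompt}".split():
            yield LLMStreamChunk(delta=word + " ")

chunks = list(WordStreamProvider().stream(LLMRequest(prompt="hello world")))
print("".join(c.delta for c in chunks).strip())
# → echo: hello world
```

Consumers can accumulate `delta`s to reconstruct the full text, or render each chunk as it arrives.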