llm_step

llm_step(...) creates a Step that:
  1. builds a prompt from state (prompt_builder)
  2. coerces it into a PromptPayload
  3. calls provider.complete(...)
  4. emits PromptEvent and LLMEvent (when invocation context exists)
  5. attaches the result to state (default key: llm_response)

Minimal usage:
from coevolved.core import LLMConfig, llm_step

step = llm_step(
    prompt_builder=lambda s: "Say hi",
    provider=provider,
    config=LLMConfig(model="gpt-4o-mini"),
)
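The five-step pipeline above can be pictured with plain Python stand-ins. This is an illustrative mental model only, not the coevolved implementation: FakeProvider, run_llm_step, and the plain-dict state are hypothetical substitutes for the library's provider, Step, and state types.

```python
# Illustrative mental model of llm_step's pipeline.
# run_llm_step and FakeProvider are hypothetical stand-ins, NOT the coevolved API.

def run_llm_step(state, prompt_builder, provider, output_key="llm_response"):
    prompt = prompt_builder(state)          # 1. build a prompt from state
    response = provider.complete(prompt)    # 3. call the provider
    return {**state, output_key: response}  # 5. attach the result to state

class FakeProvider:
    """Stub provider: echoes the prompt instead of calling an LLM."""
    def complete(self, prompt):
        return f"echo: {prompt}"

state = run_llm_step({}, lambda s: "Say hi", FakeProvider())
print(state["llm_response"])  # echo: Say hi
```

Steps 2 and 4 (payload coercion and event emission) are omitted here; the point is only that the step reads from state and writes its result back under a configurable key.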

Prompt builders

prompt_builder can return:
  • str
  • Prompt
  • PromptPayload
  • dict that validates as PromptPayload
This makes it easy to start simple and evolve to structured prompts and message-based tool calling.
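The coercion in step 2 can be sketched as a small normalizer over the builder's return types. The payload shape below (a dict with a messages list) is an assumption for illustration; the real PromptPayload schema and validation in coevolved may differ.

```python
# Hypothetical sketch of coercing prompt_builder results into one payload
# shape; the actual PromptPayload validation in coevolved may differ.

def coerce_payload(value):
    if isinstance(value, str):
        # Bare string -> single user message.
        return {"messages": [{"role": "user", "content": value}]}
    if isinstance(value, dict) and "messages" in value:
        # Already payload-shaped: pass through.
        return value
    raise TypeError(f"cannot coerce {type(value).__name__} to a payload")

print(coerce_payload("Say hi"))
print(coerce_payload({"messages": [{"role": "system", "content": "Be brief"}]}))
```

A builder returning a plain string and one returning a structured dict both end up in the same shape, which is what lets you start simple and migrate to message-based prompts without changing the step.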

Parsing responses

If you pass a parser, Coevolved applies it to the LLMResponse before the result is returned and attached to state. Two common patterns:
  • Keep the raw LLMResponse in state for agent loops
  • Parse into a structured domain object for downstream deterministic steps
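The second pattern can be sketched as a parser that turns raw response text into a typed object before downstream deterministic steps see it. Only the parser hook itself comes from the docs above; the Sentiment dataclass, the JSON response shape, and parsing the raw text directly (rather than a real LLMResponse object) are hypothetical.

```python
import json
from dataclasses import dataclass

@dataclass
class Sentiment:
    """Hypothetical structured domain object for downstream steps."""
    label: str
    score: float

def parse_sentiment(raw_text: str) -> Sentiment:
    # Assumes the model was prompted to answer in JSON; the real parser
    # would receive an LLMResponse, not a bare string.
    data = json.loads(raw_text)
    return Sentiment(label=data["label"], score=float(data["score"]))

result = parse_sentiment('{"label": "positive", "score": 0.92}')
print(result)  # Sentiment(label='positive', score=0.92)
```

Keeping the raw LLMResponse (the first pattern) is preferable in agent loops, where later steps may need token usage or tool-call metadata rather than a flattened domain object.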

Next steps