
Goal

By the end of this page, you will:
  • Wrap Python functions as Steps
  • Compose steps into a workflow
  • Add an LLM step
  • Add tool steps
  • Run a prebuilt ReAct agent

Minimal workflow (Step → compose)

Create a step from a function, then run it in a simple sequence.

from coevolved.base import run_sequence, step

@step(name="normalize")
def normalize(state: dict) -> dict:
    text = (state.get("text") or "").strip()
    return {**state, "text": text}

@step(name="classify")
def classify(state: dict) -> dict:
    category = "empty" if not state["text"] else "non-empty"
    return {**state, "category": category}

workflow = [normalize, classify]
final_state = run_sequence(workflow, {"text": "  hello  "})
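
Assuming run_sequence feeds each step's returned dict into the next step, the final state carries both results:

final_state["text"]      # "hello"
final_state["category"]  # "non-empty"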

Add an LLM step

Use llm_step(...) to turn “prompt building + provider call” into a Step.

from openai import OpenAI

from coevolved.core import LLMConfig, OpenAIProvider, llm_step

provider = OpenAIProvider(OpenAI())
config = LLMConfig(model="gpt-4o-mini", temperature=0)

def build_prompt(state: dict) -> str:
    return f"Rewrite this in one short sentence: {state['text']}"

rewrite = llm_step(
    prompt_builder=build_prompt,
    provider=provider,
    config=config,
    name="rewrite",
)
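
Because llm_step returns an ordinary Step, it composes with the function steps above. A minimal sketch, reusing run_sequence and normalize from the first example and assuming the step writes the model's reply back into the returned state:

state = run_sequence([normalize, rewrite], {"text": "  This is a rather long sentence that could be trimmed down.  "})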

Add a tool step

Tools are just Steps with kind="tool" and optional argument schemas for function calling.

from pydantic import BaseModel

from coevolved.core import tool_step

class AddArgs(BaseModel):
    a: int
    b: int

def add_tool(state: dict) -> int:
    args = state["tool_args"]
    return int(args["a"]) + int(args["b"])

add = tool_step(
    add_tool,
    name="add",
    tool_schema=AddArgs,
    result_key="tool_result",
)
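
A tool step can be invoked like any other Step. A minimal sketch, assuming the wrapper passes the current state to the function and stores the return value under result_key:

state = add({"tool_args": {"a": 2, "b": 3}})
state["tool_result"]  # 5, under the assumption above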

Run a prebuilt agent (ReAct)

The prebuilt ReAct agent alternates between two moves (sketched below):
  1. running an LLM “planner” step
  2. executing a tool whenever the planner requests one
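
Conceptually the loop looks roughly like this; this is a plain-Python sketch, not the library's implementation, and the tool_call key and its shape are only illustrative:

def react_sketch(planner, tools, state, max_steps=5):
    for _ in range(max_steps):
        state = planner(state)             # LLM turn: answer, or ask for a tool
        call = state.get("tool_call")      # illustrative key, not the library's
        if call is None:
            return state                   # no tool requested: the answer is final
        tool = tools[call["name"]]
        state = tool({**state, "tool_args": call["args"]})
    return state

With the library, you only wire up the planner and the tool registry: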
from coevolved.core import tool_specs_from_dict
from coevolved.prebuilt import react_agent

tools = {"add": add}

planner_config = config.model_copy(update={"tools": tool_specs_from_dict(tools)})
planner = llm_step(
    prompt_builder=lambda s: s["messages"],
    provider=provider,
    config=planner_config,
    name="planner",
)

agent = react_agent(planner=planner, tools=tools, max_steps=5)

result = agent(
    {
        "messages": [
            {"role": "user", "content": "What is 2 + 3? Use the add tool."}
        ]
    }
)

The ReAct agent expects a messages list in the state if you want the loop to accumulate conversation context.
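
Assuming the loop appends each LLM and tool turn to messages, the final answer is the content of the last assistant message:

# Illustrative only: the exact message shape depends on the provider.
answer = result["messages"][-1]["content"]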

Next steps