Lesson 2 of 12

Agent Configuration Deep Dive

Every CoFounder agent starts with a configuration object that controls its behavior, model selection, and capabilities. Understanding each option lets you fine-tune agents for specific tasks, balancing quality, speed, and cost.

The AgentConfig Type

The AgentConfig type defines every parameter your agent accepts. Here is the full shape with commonly used fields:

import { AgentConfig } from '@waymakerai/aicofounder-core';

const config: AgentConfig = {
  name: 'my-agent',
  model: 'gpt-4o',
  provider: 'openai',
  systemPrompt: 'You are a helpful assistant that answers concisely.',
  temperature: 0.7,
  maxTokens: 4096,
  maxSteps: 15,
  tools: [],
  memory: { type: 'sliding-window', maxMessages: 50 },
};

Each field is optional except name and model. CoFounder applies sensible defaults for everything else.
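For example, a minimal configuration can lean entirely on those defaults; only the two required fields are spelled out here, and the actual default values are whatever the framework documents:

```typescript
import { createAgent } from '@waymakerai/aicofounder-core';

// Only name and model are required; temperature, maxTokens, maxSteps,
// tools, and memory all fall back to CoFounder's defaults.
const agent = createAgent({
  name: 'minimal-agent',
  model: 'gpt-4o',
});
```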

Model Selection and Providers

CoFounder supports multiple LLM providers out of the box. You specify the provider and model as separate fields, which makes it easy to swap models without changing your agent logic:

  • OpenAI -- gpt-4o, gpt-4o-mini, gpt-3.5-turbo
  • Anthropic -- claude-sonnet-4-20250514, claude-haiku-4-20250414
  • Google -- gemini-pro, gemini-1.5-flash

import { createAgent } from '@waymakerai/aicofounder-core';

// Switch providers by changing two fields
const agent = createAgent({
  name: 'flexible-agent',
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  systemPrompt: 'You are a technical writer.',
  temperature: 0.3,
  maxTokens: 2048,
});

System Prompts and Temperature

The systemPrompt sets the agent's persona and behavioral guidelines. Keep it focused: tell the agent what it is, what it should do, and any constraints.

The temperature parameter controls randomness. Use low values (0.0 -- 0.3) for deterministic tasks like code generation or data extraction. Use higher values (0.7 -- 1.0) for creative tasks like brainstorming or content writing.
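To make the contrast concrete, here is a sketch of two agents that differ only in temperature and prompt; the prompts and agent names are illustrative placeholders, not prescribed values:

```typescript
import { createAgent } from '@waymakerai/aicofounder-core';

// Low temperature for deterministic extraction work
const extractor = createAgent({
  name: 'extractor',
  model: 'gpt-4o',
  systemPrompt: 'Extract structured fields from invoices. Output JSON only.',
  temperature: 0.1,
});

// High temperature for creative brainstorming
const brainstormer = createAgent({
  name: 'brainstormer',
  model: 'gpt-4o',
  systemPrompt: 'You generate diverse, unconventional product ideas.',
  temperature: 0.9,
});
```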

Token Limits and Step Budgets

Two limits control how much work an agent does:

  • maxTokens -- The maximum number of tokens in each LLM response. Set this to prevent runaway generation costs.
  • maxSteps -- The maximum number of observe-think-act cycles the agent can perform. This prevents infinite loops when an agent cannot resolve its task.

For most agents, 10-20 steps is sufficient. Research agents that need to gather many sources may need 20-30 steps.
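The step budget is essentially a loop guard. A self-contained sketch of the idea (not the library's internals) shows why it guarantees termination:

```typescript
// Illustrative sketch of a step budget: the loop runs at most
// maxSteps observe-think-act cycles before giving up.
type StepResult = { done: boolean };

function runWithBudget(step: (i: number) => StepResult, maxSteps: number): number {
  for (let i = 1; i <= maxSteps; i++) {
    if (step(i).done) return i; // task resolved within budget
  }
  return maxSteps; // budget exhausted; the loop still terminates
}

// A task that finishes on its 4th cycle:
console.log(runWithBudget(i => ({ done: i === 4 }), 15)); // → 4
```

Even if the task condition never becomes true, the loop ends after maxSteps cycles, which is exactly the infinite-loop protection described above.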

Tool Definitions in Config

Tools are passed as an array in the configuration. Each tool object defines a name, description, parameter schema, and execute function. We will cover tool creation in depth in the next lesson -- for now, here is how they appear in config:

const config: AgentConfig = {
  name: 'tool-agent',
  model: 'gpt-4o',
  tools: [
    {
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string', description: 'City name' },
        },
        required: ['location'],
      },
      execute: async ({ location }) => {
        // fetchWeather is a helper you supply elsewhere (e.g. a wrapper
        // around a weather API); it is not part of CoFounder.
        const data = await fetchWeather(location);
        return JSON.stringify(data);
      },
    },
  ],
};
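To close the loop, here is a hedged sketch of invoking an agent built from this config. The run method name and its return shape are assumptions for illustration; check the CoFounder docs for the exact invocation API:

```typescript
const agent = createAgent(config);

// Hypothetical invocation; the real method name and signature may differ.
const answer = await agent.run('What is the weather in Lisbon?');
console.log(answer);
```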