Lesson 7 of 15

Agent Pipelines

Complex AI tasks are best decomposed into a sequence of focused steps. Agent pipelines chain multiple agents together, where each agent's output feeds into the next. This lesson covers how to build, compose, and debug multi-step agent pipelines with proper error handling.

Sequential Agent Chains

A pipeline is a sequence of agents where each one has a specific role. For example, a content generation pipeline might include: a research agent that gathers facts, an outline agent that structures the content, a writing agent that produces the draft, and an editing agent that refines it. Each agent is optimized for its specific task with targeted system prompts and model selection.

import { createAgent, Pipeline } from '@waymakerai/aicofounder-core';

const researchAgent = createAgent({
  model: 'gpt-4o',
  systemPrompt: 'You are a research assistant. Gather key facts and sources.',
});

const outlineAgent = createAgent({
  model: 'gpt-4o-mini',
  systemPrompt: 'You create structured outlines from research notes.',
});

const writerAgent = createAgent({
  model: 'gpt-4o',
  systemPrompt: 'You write polished articles from outlines. Use clear prose.',
});

const pipeline = new Pipeline({
  name: 'content-generator',
  steps: [
    { agent: researchAgent, name: 'research' },
    { agent: outlineAgent, name: 'outline' },
    { agent: writerAgent, name: 'write' },
  ],
});

const result = await pipeline.run('Write an article about edge computing');
console.log(result.text);              // Final article
console.log(result.steps.research);    // Intermediate research output
console.log(result.steps.outline);     // Intermediate outline
console.log(result.totalTokens);       // Aggregate token usage

Data Transformation Between Agents

Raw output from one agent is rarely the perfect input for the next. Transform functions sit between pipeline stages to reshape data, extract specific fields, validate structure, or enrich the context. This keeps each agent's prompt clean and focused.

const pipeline = new Pipeline({
  name: 'data-processor',
  steps: [
    {
      agent: extractorAgent,
      name: 'extract',
      transform: (output) => {
        // Parse structured data from the extractor
        const entities = JSON.parse(output.text);
        return `Analyze the following entities: ${entities.map(e => e.name).join(', ')}`;
      },
    },
    {
      agent: analyzerAgent,
      name: 'analyze',
      transform: (output) => {
        // Add metadata for the summarizer
        return `Based on this analysis, write a summary:\n${output.text}\n\nTarget audience: technical managers`;
      },
    },
    {
      agent: summarizerAgent,
      name: 'summarize',
    },
  ],
});

Pipeline Composition

Pipelines can be nested: a pipeline step can itself be a pipeline. This lets you build complex workflows from reusable building blocks. For example, a document processing pipeline might include a sub-pipeline for entity extraction that is also used independently elsewhere in your application.
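Independent of CoFounder's exact API, the core idea is that a pipeline composes like an ordinary step. A minimal sketch in plain TypeScript (all agent stand-ins and names below are illustrative, not CoFounder's implementation):

```typescript
// Model each step as an async function from input to output, and build
// a pipeline as a step made of other steps -- so a pipeline can be used
// anywhere a single step can.
type Step = (input: string) => Promise<string>;

function pipeline(...steps: Step[]): Step {
  return async (input) => {
    let out = input;
    for (const step of steps) out = await step(out);
    return out;
  };
}

// A sub-pipeline for entity extraction, reusable on its own...
const extractEntities = pipeline(
  async (doc) => `entities(${doc})`,  // stand-in for an extractor agent
  async (raw) => raw.toUpperCase(),   // stand-in for a normalizer agent
);

// ...and the same sub-pipeline used as one step of a larger pipeline.
const processDocument = pipeline(
  async (doc) => doc.trim(),
  extractEntities,
  async (entities) => `report: ${entities}`,
);
```

Because `extractEntities` is itself just a step, it can be invoked directly elsewhere in the application or nested inside any number of larger pipelines.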

CoFounder supports branching pipelines too. A router step examines the input and directs it to one of several sub-pipelines based on the content type, complexity, or domain. This pattern enables sophisticated workflows that adapt to the input dynamically.
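The routing pattern can be sketched without CoFounder at all: a router is just a step that inspects the input and dispatches to the first matching sub-pipeline. The `Route` type and the predicates below are illustrative assumptions, not CoFounder's API:

```typescript
// A router step: inspect the input, pick the first matching
// sub-pipeline, and run it; fall back when nothing matches.
type Route = {
  match: (input: string) => boolean;
  run: (input: string) => Promise<string>;
};

function router(
  routes: Route[],
  fallback: (input: string) => Promise<string>,
) {
  return async (input: string) => {
    const route = routes.find((r) => r.match(input));
    return (route ? route.run : fallback)(input);
  };
}

// Hypothetical content-type routing; each `run` would be a sub-pipeline.
const handleRequest = router(
  [
    { match: (s) => s.includes('invoice'), run: async (s) => `billing: ${s}` },
    { match: (s) => s.includes('bug'), run: async (s) => `triage: ${s}` },
  ],
  async (s) => `general: ${s}`,
);
```

In practice the `match` predicates would themselves often be a cheap classifier agent rather than string checks, but the dispatch structure is the same.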

Error Propagation

When a pipeline step fails, you need a clear strategy. The default behavior is to halt the pipeline and surface the error along with which step failed. But you can also configure steps with fallback behavior: retry with a different model, skip the step and pass the previous output through, or use a cached result from a previous run.
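The skip-and-pass-through fallback can be sketched generically (the `withFallback` helper below is an illustrative assumption, not part of CoFounder's API):

```typescript
// Wrap a step so it is tried up to `retries + 1` times; if every
// attempt fails, skip the step by returning the previous stage's
// output unchanged instead of halting the pipeline.
type Step = (input: string) => Promise<string>;

function withFallback(step: Step, retries: number): Step {
  return async (input) => {
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        return await step(input);
      } catch {
        // Last attempt exhausted: skip the step, pass input through.
        if (attempt === retries) return input;
      }
    }
    return input; // unreachable; satisfies the type checker
  };
}
```

The same wrapper shape works for the other fallback strategies: retrying against a different model or returning a cached result is just a different branch in the `catch`.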

CoFounder's pipeline emits events at each stage, letting you build monitoring dashboards that show which steps succeed, which fail, and which are slow. Each step records its duration, token usage, and output size, giving you the data you need to optimize the pipeline's performance.
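As a rough sketch of what step-level instrumentation records (the event shape below is an assumption for illustration, not CoFounder's actual event schema):

```typescript
// Wrap a step so it appends a timing/outcome event to a log on both
// success and failure, then re-raises any error unchanged.
type StepEvent = { step: string; durationMs: number; ok: boolean };
type Step = (input: string) => Promise<string>;

function instrumented(name: string, step: Step, events: StepEvent[]): Step {
  return async (input) => {
    const start = Date.now();
    try {
      const out = await step(input);
      events.push({ step: name, durationMs: Date.now() - start, ok: true });
      return out;
    } catch (err) {
      events.push({ step: name, durationMs: Date.now() - start, ok: false });
      throw err;
    }
  };
}
```

A dashboard then becomes a consumer of this event log: aggregate `durationMs` per step name to find the slow stages, and the `ok` ratio to find the flaky ones.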