Lesson 3 of 10

Deploying to Vercel

Vercel is the recommended deployment platform for CoFounder projects built on Next.js. This lesson covers project setup, configuration, environment variables, edge functions for low-latency agent responses, serverless limits to watch for, and preview deployments for safe iteration.

Vercel Project Setup

Connect your repository to Vercel through the dashboard or CLI. CoFounder projects use the standard Next.js build pipeline, but AI agent routes often need longer timeouts and larger payloads than typical web apps.

# Install Vercel CLI
npm i -g vercel

# Link your project
vercel link

# Deploy to preview
vercel

# Deploy to production
vercel --prod

Configuring vercel.json

The vercel.json file controls function timeouts, regions, headers, and rewrites. AI agent endpoints typically need extended timeouts because LLM calls can take several seconds, especially with multi-step tool use. The example below also schedules a weekly cost-report cron (the `0 9 * * 1` expression runs Mondays at 09:00 UTC):

{
  "framework": "nextjs",
  "regions": ["iad1"],
  "functions": {
    "app/api/agents/*/route.ts": {
      "maxDuration": 60,
      "memory": 1024
    },
    "app/api/chat/route.ts": {
      "maxDuration": 30,
      "memory": 512
    }
  },
  "headers": [
    {
      "source": "/api/(.*)",
      "headers": [
        { "key": "X-Content-Type-Options", "value": "nosniff" },
        { "key": "X-Frame-Options", "value": "DENY" }
      ]
    }
  ],
  "crons": [
    {
      "path": "/api/cron/cost-report",
      "schedule": "0 9 * * 1"
    }
  ]
}

Environment Variables on Vercel

Vercel supports environment variables scoped to Production, Preview, and Development. Always set sensitive keys at the platform level, never in code. Use the CLI for bulk operations:

# Set a secret for production only
vercel env add OPENAI_API_KEY production

# Set a variable for all environments -- the CLI takes one target
# at a time, so omit it and select environments when prompted
vercel env add NEXT_PUBLIC_SUPABASE_URL

# Pull all env vars to .env.local for local dev
vercel env pull .env.local

# List all environment variables
vercel env ls
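Missing variables are easier to catch at deploy time than mid-request. A minimal sketch of a fail-fast check (the `requireEnv` helper and file path are illustrative, not part of the Vercel CLI or Next.js):

```typescript
// lib/env.ts -- fail fast on missing configuration

export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at the top of a route handler or shared module:
//   const apiKey = requireEnv('OPENAI_API_KEY');
```

Calling this once at module load means a misconfigured deployment fails on its first request with a clear message, rather than surfacing a confusing provider error deep inside an agent run.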

Edge Functions for Low-Latency Responses

Edge functions run closer to your users and start faster than serverless functions. Use them for lightweight agent routing, authentication checks, and streaming responses. Note that the edge runtime is more limited: no Node.js filesystem access, and a much smaller code-size limit than serverless functions (on the order of a few megabytes, depending on plan).

// app/api/agent-router/route.ts
import { NextRequest } from 'next/server';

export const runtime = 'edge';

export async function POST(req: NextRequest) {
  const { agentId, message } = await req.json();

  // Route to the appropriate agent handler
  // Edge function handles routing; the actual LLM call
  // happens in a serverless function with longer timeout
  const response = await fetch(
    `${process.env.NEXT_PUBLIC_APP_URL}/api/agents/${agentId}/execute`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': req.headers.get('Authorization') ?? '',
      },
      body: JSON.stringify({ message }),
    }
  );

  // Stream the upstream body straight through to the client,
  // preserving the serverless function's status and content type
  return new Response(response.body, {
    status: response.status,
    headers: {
      'Content-Type':
        response.headers.get('Content-Type') ?? 'text/event-stream',
    },
  });
}
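The serverless side of that split might look like the following sketch. The route path and the stubbed model call are assumptions; in the App Router, `maxDuration` can also be set with a route segment config export, mirroring the vercel.json entry above:

```typescript
// app/api/agents/[agentId]/execute/route.ts -- serverless counterpart
// to the edge router; the LLM call is stubbed so the sketch is self-contained

export const maxDuration = 60; // seconds; mirrors the vercel.json entry

export async function POST(req: Request): Promise<Response> {
  const { message } = await req.json();

  // Call your LLM provider here; replaced with an echo for this sketch
  const reply = `echo: ${message}`;

  return new Response(JSON.stringify({ reply }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  });
}
```

Keeping the slow LLM call behind a plain serverless route lets the edge router stay small and fast while the heavy work runs with the extended timeout.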

Serverless Limits and Preview Deployments

Be aware of Vercel's serverless limits: a 10-second default timeout (raisable via maxDuration, up to 300 s on Pro and higher on Enterprise), a 4.5 MB request body, and a 250 MB function bundle size. For AI agents that process large documents, have clients upload directly to object storage such as S3 first and pass a reference to the function instead of the raw file.
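One way to fail fast on the 4.5 MB body limit is to check Content-Length before reading the body and return a clearer error than the platform's generic one (the helper name and constant are illustrative):

```typescript
// Reject oversized payloads before Vercel does, with a clearer error.
// 4.5 MB is Vercel's serverless request body limit.
const MAX_BODY_BYTES = 4.5 * 1024 * 1024;

export function exceedsBodyLimit(contentLength: string | null): boolean {
  if (contentLength === null) return false; // unknown size: let it through
  const bytes = Number(contentLength);
  return Number.isFinite(bytes) && bytes > MAX_BODY_BYTES;
}

// In a route handler:
//   if (exceedsBodyLimit(req.headers.get('content-length'))) {
//     return new Response('Payload too large; upload to storage first', {
//       status: 413,
//     });
//   }
```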

Preview deployments are created automatically for every pull request. Use them to test agent behavior changes safely before merging. CoFounder recommends setting a lower LLM budget for preview environments to avoid runaway costs during testing.
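Vercel exposes the deployment type as VERCEL_ENV ('production', 'preview', or 'development'), which makes an environment-scoped budget straightforward to wire up. A minimal sketch (the dollar amounts and function name are illustrative, not a CoFounder API):

```typescript
// Cap LLM spend lower on preview deployments to avoid runaway test costs
export function llmBudgetUsd(vercelEnv: string | undefined): number {
  switch (vercelEnv) {
    case 'production':
      return 100; // full budget
    case 'preview':
      return 5; // tight cap for pull request testing
    default:
      return 1; // local development
  }
}

// Usage: const budget = llmBudgetUsd(process.env.VERCEL_ENV);
```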