Agent Best Practices
You have learned how to build agents, design tools, manage memory, orchestrate multi-agent systems, and test everything. This final lesson covers the security, cost, and operational practices that make your agents production-ready.
Security: Input Sanitization
User input flows directly into your agent's context and tool calls. Without sanitization, attackers can inject malicious instructions. CoFounder provides built-in security guards:
import { createAgent, SecurityGuard } from '@waymakerai/aicofounder-core';
const agent = createAgent({
name: 'secure-agent',
model: 'gpt-4o',
security: {
guards: [
SecurityGuard.PROMPT_INJECTION, // Blocks prompt injection attempts
SecurityGuard.PII_DETECTION, // Detects and redacts PII in inputs
SecurityGuard.SQL_INJECTION, // Blocks SQL injection in tool parameters
],
onViolation: (violation) => {
console.error(`Security violation: ${violation.type} - ${violation.message}`);
return {
blocked: true,
userMessage: 'I cannot process that request for security reasons.',
};
},
},
tools: [databaseTool, searchTool],
});
Always enable prompt injection detection in production. It catches common attacks such as "ignore your instructions and..." as well as hidden instructions embedded in tool results.
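To make the idea concrete, here is a naive sketch of the kind of pattern matching an injection guard might perform. This is an illustration, not CoFounder's actual implementation; real guards combine classifiers with heuristics, and a regex list alone is easy to bypass.

```typescript
// Illustrative patterns for common injection phrasings. A hypothetical
// helper, not part of the CoFounder API.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all |your )?(previous |prior )?instructions/i,
  /disregard (the )?system prompt/i,
  /reveal (your )?(system prompt|hidden instructions)/i,
];

function looksLikeInjection(input: string): boolean {
  // Flag the input if any known injection phrasing appears.
  return INJECTION_PATTERNS.some((pattern) => pattern.test(input));
}
```

Note that this check should run on tool results as well as user messages, since attackers can plant instructions in web pages or database records the agent later reads.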
Security: Output Validation
Agent output can also be a security risk. Validate what your agent produces before showing it to users:
- PII filtering -- Ensure the agent does not leak emails, phone numbers, or other personal data from its training or tool results.
- Content filtering -- Block harmful, offensive, or off-topic content from agent responses.
- Format validation -- If your agent should produce JSON, HTML, or code, validate the output format before rendering.
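For the format-validation point, here is a minimal sketch of checking JSON output before rendering. The expected shape (a required `summary` string field) is a hypothetical example; substitute your own schema.

```typescript
// Parse and shape-check agent output that is supposed to be JSON.
// Returns a tagged result so callers can decide how to handle failures.
type ValidationResult =
  | { ok: true; value: { summary: string } }
  | { ok: false; error: string };

function validateJsonOutput(raw: string): ValidationResult {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: 'output is not valid JSON' };
  }
  if (
    typeof parsed !== 'object' ||
    parsed === null ||
    typeof (parsed as { summary?: unknown }).summary !== 'string'
  ) {
    return { ok: false, error: 'missing required "summary" string field' };
  }
  return { ok: true, value: parsed as { summary: string } };
}
```

On failure you can retry the agent with the error message appended, rather than showing malformed output to the user.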
const agent = createAgent({
name: 'validated-agent',
model: 'gpt-4o',
hooks: {
beforeRespond: async (output) => {
// Redact any email addresses the agent might expose
const sanitized = output.replace(
/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g,
'[email redacted]'
);
// Redact phone numbers
const cleaned = sanitized.replace(
/\b\d{3}[-.]?\d{3}[-.]?\d{4}\b/g,
'[phone redacted]'
);
return cleaned;
},
},
tools: [databaseTool],
});
Cost Optimization
LLM API calls are expensive at scale. Here are proven strategies to reduce costs:
- Use the right model -- Not every task needs GPT-4o. Use cheaper models for simple routing, summarization, and classification.
- Minimize context -- Keep conversation history lean. Summarize old messages instead of sending the full history.
- Cache tool results -- If the same tool is called with the same arguments, return cached results.
- Set token limits -- Cap maxTokens to prevent unnecessarily long responses.
- Monitor per-user costs -- Track spending by user or session and set budget limits.
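The tool-result caching strategy above can be sketched as a small in-memory cache keyed by tool name plus serialized arguments, with a time-to-live. The `ToolCache` class here is illustrative, not part of the CoFounder API; for multi-process deployments you would back it with something like Redis instead.

```typescript
// In-memory cache for tool results. Entries expire after ttlMs so
// stale data (e.g. old search results) is not served forever.
class ToolCache {
  private store = new Map<string, { value: unknown; expires: number }>();

  constructor(private ttlMs: number) {}

  private key(tool: string, args: unknown): string {
    // Identical tool + arguments produce the same key.
    return `${tool}:${JSON.stringify(args)}`;
  }

  get(tool: string, args: unknown): unknown | undefined {
    const entry = this.store.get(this.key(tool, args));
    if (!entry || entry.expires < Date.now()) return undefined;
    return entry.value;
  }

  set(tool: string, args: unknown, value: unknown): void {
    this.store.set(this.key(tool, args), {
      value,
      expires: Date.now() + this.ttlMs,
    });
  }
}
```

Only cache tools whose results are safe to reuse: a read-only search is a good candidate, a tool with side effects is not.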
import { createAgent, CostTracker } from '@waymakerai/aicofounder-core';
const costTracker = new CostTracker({
budgetPerUser: 0.50, // $0.50 per user per session
budgetPerDay: 100.00, // $100 daily budget
onBudgetExceeded: (type, current, limit) => {
console.warn(`Budget exceeded: ${type} - $${current.toFixed(2)} / $${limit.toFixed(2)}`);
},
});
const agent = createAgent({
name: 'budget-agent',
model: 'gpt-4o',
costTracker,
tools: [searchTool, databaseTool],
});
Monitoring and Logging
Production agents need observability. Log every agent step so you can debug issues and understand agent behavior:
- Log each agent step: which tool was called, what arguments were passed, what was returned.
- Track latency per step, per tool, and end-to-end.
- Monitor error rates by tool and by error category.
- Alert on anomalies: sudden cost spikes, increased error rates, or unusually long runs.
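The logging points above can be sketched as a structured per-step log event. The event shape here is an assumption; adapt the field names to whatever your log pipeline expects. Emitting one JSON line per step keeps logs easy to query and aggregate.

```typescript
// One structured event per agent step: what ran, with what, how long,
// and whether it succeeded. Hypothetical shape, not a CoFounder type.
interface AgentStepLog {
  step: number;
  tool: string;
  args: unknown;
  durationMs: number;
  ok: boolean;
  error?: string;
}

function logStep(event: AgentStepLog): string {
  // Prepend a timestamp and emit as a single JSON line.
  const line = JSON.stringify({ ts: new Date().toISOString(), ...event });
  console.log(line);
  return line;
}
```

With logs in this shape, error rates by tool and latency percentiles per step become simple aggregation queries, which is what the alerting bullet above depends on.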
Production Checklist
Before deploying an agent to production, verify each of these:
- All security guards are enabled (prompt injection, PII detection).
- Rate limits are configured for each external API tool.
- Fallback models are configured for resilience.
- Cost tracking and budget limits are in place.
- Error handling covers retryable, recoverable, and fatal errors.
- Logging captures every step for debugging.
- Unit tests cover all tools. Integration tests cover key agent flows.
- The system prompt is reviewed for safety and accuracy.
- Memory management prevents context overflow.
- Output validation prevents PII leakage and harmful content.
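Checklists like this can also be encoded as a programmatic pre-deploy check that fails the build when something is missing. The config shape below is hypothetical, not a CoFounder type; it covers a few of the items above as an illustration.

```typescript
// A minimal preflight check: returns a list of problems, empty if the
// config passes. Extend with the remaining checklist items as needed.
interface DeployConfig {
  securityGuards: string[];
  costTracker: boolean;
  fallbackModel?: string;
  maxTokens?: number;
}

function preflight(config: DeployConfig): string[] {
  const problems: string[] = [];
  if (!config.securityGuards.includes('PROMPT_INJECTION')) {
    problems.push('prompt injection guard is not enabled');
  }
  if (!config.costTracker) problems.push('cost tracking is not configured');
  if (!config.fallbackModel) problems.push('no fallback model configured');
  if (!config.maxTokens) problems.push('maxTokens cap is not set');
  return problems;
}
```

Running this in CI means a misconfigured agent never reaches production silently.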
Congratulations -- you have completed the Building AI Agents course. You now have the knowledge to design, build, test, and deploy production-ready AI agents with CoFounder.