API Reference

Complete API documentation for the five core CoFounder packages. Every function, class, type, and option is documented here with signatures, parameters, defaults, return types, and examples.

@waymakerai/aicofounder-guard

PII detection, prompt injection blocking, toxicity filtering, budget enforcement, rate limiting, and model gating.

npm install @waymakerai/aicofounder-guard

createGuard

createGuard(options?: GuardOptions): Guard

Create a guard instance with configurable PII, injection, toxicity, budget, rate limit, and model gating. Returns an object with check(), wrap(), middleware(), report(), and resetBudget() methods.

Parameters / Options

- pii: 'detect' | 'redact' | 'block' | false (default: 'detect')
- injection: 'block' | 'warn' | false (default: 'block')
- toxicity: 'block' | 'warn' | false (default: 'block')
- budget: BudgetConfig | false (default: false)
- rateLimit: RateLimitConfig | false (default: false)
- models: { allowed?: string[]; blocked?: string[] } | false (default: false)
- reporter: 'console' | 'json' | { webhook: string } | false (default: false)

Example

const guard = createGuard({
  pii: 'redact',
  injection: 'block',
  toxicity: 'block',
  budget: { limit: 50, period: 'day', warningAt: 0.8, action: 'block' },
  rateLimit: { maxRequests: 100, windowMs: 60000 },
  reporter: 'console',
});

guard.check

check(text: string, opts?: { model?: string; direction?: "input" | "output" }): CheckResult

Run all configured guards on the input text. Returns a CheckResult with safe/blocked status, PII findings, injection findings, toxicity findings, redacted text, warnings, violations, cost estimate, and model name.

Parameters / Options

- text: string (required)
- opts.model: string (default: undefined)
- opts.direction: 'input' | 'output' (default: undefined)

Return Type

interface CheckResult {
  safe: boolean;
  blocked: boolean;
  reason?: string;
  warnings: string[];
  piiFindings: PIIFinding[];
  injectionFindings: InjectionFinding[];
  toxicityFindings: ToxicityFinding[];
  redacted?: string;
  cost?: number;
  model?: string;
  violations: Violation[];
}

guard.wrap

wrap<T extends object>(client: T): T

Wrap an Anthropic, OpenAI, or Google client with automatic guarding. All API calls through the wrapped client are intercepted, checked, and guarded transparently.

guard.middleware

middleware(): (req, res, next) => void

Returns an Express-compatible middleware function. POST requests with a body are checked; blocked requests receive a 403 response with violation details.

guard.report

report(): GuardReport

Returns a summary report of all guard activity: total checks, blocked/warned/passed counts, PII redaction stats by type, injection attempts by category, toxicity stats, cost tracking, rate limit hits, and model denials.

detectPII

detectPII(text: string): PIIFinding[]

Standalone PII detection. Returns an array of findings with type, value, redacted label, start/end positions, and confidence score. Detects email, SSN, credit card (with Luhn validation), phone, IP address (v4 and v6), date of birth, address, medical record number, passport, and driver's license.

Parameters / Options

- text: string (required)
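The credit-card detection described above pairs a digit pattern with Luhn validation. The sketch below shows how such a checksum works; `luhnCheck` is an illustrative helper, not an export of the package:

```typescript
// Hypothetical sketch of a Luhn checksum, as used to validate
// credit-card candidates before reporting them as PII findings.
function luhnCheck(digits: string): boolean {
  const cleaned = digits.replace(/[\s-]/g, "");
  if (!/^\d{13,19}$/.test(cleaned)) return false;
  let sum = 0;
  // Walk right to left, doubling every second digit.
  for (let i = 0; i < cleaned.length; i++) {
    let d = Number(cleaned[cleaned.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}
```

This is why a random 16-digit string is usually not reported as a card number: only sequences whose checksum digit is consistent pass the Luhn test.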

redactPII

redactPII(text: string): { redacted: string; findings: PIIFinding[] }

Detect and replace all PII in the text with labeled placeholders (e.g., [REDACTED_EMAIL], [REDACTED_SSN]). Returns the redacted text and the list of findings.
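The labeled-placeholder scheme can be sketched with plain regexes. This illustrates the [REDACTED_&lt;TYPE&gt;] convention only; the package's internal pattern set covers many more PII types and positions each finding:

```typescript
// Illustrative redaction sketch (not the package's implementation):
// replace each match with a [REDACTED_<TYPE>] label.
const PATTERNS: Record<string, RegExp> = {
  EMAIL: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function redactSketch(text: string): string {
  let out = text;
  for (const [type, pattern] of Object.entries(PATTERNS)) {
    out = out.replace(pattern, `[REDACTED_${type}]`);
  }
  return out;
}
```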

detectInjection

detectInjection(text: string, sensitivity?: "low" | "medium" | "high"): { score: number; findings: InjectionFinding[]; blocked: boolean }

Score text for prompt injection risk. Returns a 0-100 score, an array of matched patterns with category/severity/weight, and a blocked boolean based on the sensitivity threshold. Sensitivity thresholds: low=70, medium=45, high=25.

Parameters / Options

- text: string (required)
- sensitivity: 'low' | 'medium' | 'high' (default: 'medium')
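The sensitivity-to-threshold mapping can be sketched as follows. The threshold values (70/45/25) come from the description above; the function name is illustrative, and the sketch assumes a score at or above the threshold counts as blocked:

```typescript
type Sensitivity = "low" | "medium" | "high";

// Thresholds as documented: lower sensitivity tolerates higher scores.
const THRESHOLDS: Record<Sensitivity, number> = { low: 70, medium: 45, high: 25 };

// Assumed semantics: block when the 0-100 risk score reaches the threshold.
function isBlocked(score: number, sensitivity: Sensitivity = "medium"): boolean {
  return score >= THRESHOLDS[sensitivity];
}
```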

detectToxicity

detectToxicity(text: string): ToxicityFinding[]

Detect toxic content across 7 categories: profanity (low), hate_speech (critical), violence (high), self_harm (critical), sexual (high), harassment (high), and spam (low). Returns matched patterns with category, severity, matched text, and surrounding context.

hasPII / hasInjection / hasToxicity

hasPII(text: string): boolean
hasInjection(text: string, sensitivity?): boolean
hasToxicity(text: string, minSeverity?): boolean

Boolean convenience functions. hasPII returns true if any PII is detected. hasInjection returns true if the injection score exceeds the threshold. hasToxicity returns true if any finding meets the minimum severity level.

BudgetEnforcer

new BudgetEnforcer(config: BudgetConfig)

Standalone budget enforcement. Track spending against per-period limits. Methods: checkBudget(additionalCost?), isExceeded(), record(cost), reset().

Parameters / Options

- limit: number (required)
- period: 'hour' | 'day' | 'week' | 'month' (required)
- warningAt: number, 0-1 (default: 0.8)
- action: 'block' | 'warn' (default: 'block')
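The interaction of limit, warningAt, and action can be sketched as below. This is a simplification under stated assumptions (spend at or above the limit counts as exceeded; the package's exact boundary behavior may differ), and `checkBudgetSketch` is an illustrative name:

```typescript
interface BudgetConfigSketch {
  limit: number;
  warningAt?: number;          // fraction of limit, 0-1 (default 0.8)
  action?: "block" | "warn";   // default "block"
}

// Assumed semantics: warn once spend crosses warningAt * limit,
// block only when the limit is exceeded AND action is "block".
function checkBudgetSketch(spent: number, cfg: BudgetConfigSketch) {
  const warningAt = cfg.warningAt ?? 0.8;
  const exceeded = spent >= cfg.limit;
  return {
    exceeded,
    warning: !exceeded && spent >= cfg.limit * warningAt,
    blocked: exceeded && (cfg.action ?? "block") === "block",
  };
}
```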

RateLimiter

new RateLimiter(config: RateLimitConfig)

Standalone rate limiter using a sliding window. Methods: check() returns { allowed, remaining, resetMs }, record() increments the counter.

Parameters / Options

- maxRequests: number (required)
- windowMs: number (required)
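A sliding-window limiter with the documented check()/record() shape can be sketched like this. The class name and internals are illustrative; the real implementation may store state differently:

```typescript
// Sketch: keep timestamps of recent requests and prune anything
// older than windowMs on each check.
class SlidingWindowLimiter {
  private timestamps: number[] = [];
  constructor(private maxRequests: number, private windowMs: number) {}

  check(now = Date.now()) {
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    const remaining = this.maxRequests - this.timestamps.length;
    const resetMs = this.timestamps.length
      ? this.windowMs - (now - this.timestamps[0])
      : 0;
    return { allowed: remaining > 0, remaining: Math.max(remaining, 0), resetMs };
  }

  record(now = Date.now()) {
    this.timestamps.push(now);
  }
}
```

Unlike a fixed-window counter, this never allows a burst of 2x maxRequests straddling a window boundary.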

ModelGate

new ModelGate(config: { allowed?: string[]; blocked?: string[] })

Restrict which models can be used. check(model) returns { allowed: boolean, reason?: string }. Supports exact names and glob patterns (e.g., "*-preview").
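Glob matching of the "*-preview" kind can be sketched by compiling the pattern to a regex. This covers simple "*" wildcards only; `matchesPattern` is an illustrative helper, and the package may support richer patterns:

```typescript
// Escape regex metacharacters in the pattern, then turn "*" into ".*"
// and anchor the whole expression.
function matchesPattern(model: string, pattern: string): boolean {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`^${escaped.replace(/\*/g, ".*")}$`).test(model);
}
```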

@waymakerai/aicofounder-compliance

Enterprise compliance enforcement with 9 preset rules for HIPAA, SEC/FINRA, GDPR, CCPA, legal, safety, and security.

npm install @waymakerai/aicofounder-compliance

ComplianceEnforcer

new ComplianceEnforcer(config?: ComplianceEnforcerConfig)

Main compliance engine. Add rules, enforce them on AI input/output pairs, and track violation history.

Parameters / Options

- rules: ComplianceRule[] (default: [])
- enableAllPresets: boolean (default: false)
- strictMode: boolean (default: false)
- logViolations: boolean (default: true)
- storeViolations: boolean (default: true)
- onViolation: (violation) => void (default: no-op)
- onEnforcement: (result) => void (default: no-op)

Example

const enforcer = new ComplianceEnforcer({
  rules: [
    PresetRules.hipaaNoMedicalAdvice(),
    PresetRules.secFinancialDisclaimer(),
    PresetRules.gdprPIIProtection(),
  ],
  strictMode: true,
});

enforcer.enforce

enforce(input: string, output: string, context?: ComplianceContext): Promise<ComplianceEnforcementResult>

Run all active rules against the AI output. Each rule can allow, block, redact, replace, or append content. Returns the final output, action taken, list of violations, and compliance status.

Return Type

interface ComplianceEnforcementResult {
  compliant: boolean;
  action: 'allow' | 'block' | 'redact' | 'replace' | 'append';
  finalOutput?: string;
  violations: ComplianceViolation[];
}

enforcer.addRule / removeRule

addRule(rule: ComplianceRule): void
removeRule(id: string): void

Dynamically add or remove compliance rules at runtime.

enforcer.getViolations

getViolations(): ComplianceViolation[]

Returns the full violation history since the enforcer was created.

createComplianceEnforcer

createComplianceEnforcer(config?: ComplianceEnforcerConfig): ComplianceEnforcer

Factory function that creates and returns a ComplianceEnforcer instance.

PresetRules

PresetRules.hipaaNoMedicalAdvice(): ComplianceRule ...

Factory object with 9 preset compliance rule generators. Each returns a fully configured ComplianceRule ready for use with ComplianceEnforcer.

Presets

Each preset returns a ComplianceRule:

- hipaaNoMedicalAdvice(): blocks medical diagnoses and treatment advice
- hipaaPIIProtection(): redacts PHI (SSN, MRN, DOB) from output
- secFinancialDisclaimer(): appends a financial disclaimer to investment content
- secNoInvestmentAdvice(): blocks specific buy/sell recommendations
- noLegalAdvice(): appends a legal disclaimer to legal content
- gdprPIIProtection(): redacts PII per GDPR (email, phone, address, IP)
- ccpaPrivacy(): redacts sensitive data per CCPA (SSN, credit card, passport)
- ageAppropriate(minAge?): blocks mature content for underage users (default: 13+)
- noPasswordRequest(): blocks the AI from requesting passwords or credentials

createComplianceRule

createComplianceRule(options: CreateRuleOptions): ComplianceRule

Create a custom compliance rule with a check function that receives (input, output, context) and returns a compliance result.

Parameters / Options

- id: string (required)
- name: string (required)
- description: string (required)
- category: 'healthcare' | 'finance' | 'legal' | 'privacy' | 'safety' | 'security' (required)
- severity: 'low' | 'medium' | 'high' | 'critical' (required)
- check: ComplianceCheckFn (required)
- tags: string[] (default: [])
- enabled: boolean (default: true)

Example

const customRule = createComplianceRule({
  id: 'no-competitor-mention',
  name: 'No Competitor Mentions',
  description: 'Prevent mentioning competitor products',
  category: 'safety',
  severity: 'medium',
  tags: ['brand', 'marketing'],
  check: async (input, output, context) => {
    const competitors = ['CompetitorA', 'CompetitorB'];
    const mentioned = competitors.filter(c =>
      output.toLowerCase().includes(c.toLowerCase())
    );
    if (mentioned.length > 0) {
      return {
        compliant: false,
        action: 'replace',
        message: `Competitor mentioned: ${mentioned.join(', ')}`,
        replacement: 'I can help you with our product features.',
        issues: mentioned.map(m => `competitor_${m}`),
        confidence: 0.9,
      };
    }
    return { compliant: true, action: 'allow' };
  },
});

detectPII / redactPII

detectPII(text: string, types?: PIIType[]): PIIMatch[]
redactPII(text: string, types?: PIIType[], replacement?: string): string

Compliance-focused PII detection and redaction. Supports filtering by type: email, phone, ssn, credit_card, ip_address, medical_record, passport, address, name, date_of_birth.

@waymakerai/aicofounder-agent-sdk

Guardrail wrapper for the Anthropic Agent SDK. Adds PII, injection, compliance, cost tracking, content filtering, audit logging, and rate limiting as interceptors.

npm install @waymakerai/aicofounder-agent-sdk

createGuardedAgent

createGuardedAgent(config: GuardedAgentConfig): GuardedAgent

Create an agent with a full guard pipeline. The pipeline processes input through interceptors (rate limit, injection, PII, compliance, content, cost, audit), calls the LLM, then guards the output. Returns an agent with run() and getGuardReport() methods.

Parameters / Options

- model: string (required)
- instructions: string (default: 'You are a helpful assistant.')
- guards: GuardConfig | boolean (required)

Example

const agent = createGuardedAgent({
  model: 'claude-sonnet-4-20250514',
  instructions: 'You are a helpful customer service agent.',
  guards: {
    pii: { mode: 'redact', onDetection: 'redact' },
    injection: { sensitivity: 'medium', onDetection: 'block' },
    compliance: { frameworks: ['hipaa', 'gdpr'] },
    cost: { budgetPeriod: 'day', warningThreshold: 0.8 },
    contentFilter: true,
    audit: { destination: 'file', filePath: './audit.log' },
    rateLimit: { maxRequests: 100, windowMs: 60000 },
  },
});

const result = await agent.run('Help me with my account');
console.log(result.output);
console.log(result.blocked);
console.log(result.violations);
console.log(result.cost);
console.log(result.tokensUsed);
console.log(result.guardsApplied);

GuardedAgent.run

run(input: string, context?: Record<string, unknown>): Promise<GuardedAgentResult>

Send a message through the guard pipeline, to the LLM, and back through the output guards. Returns the output, block status, violations, cost, token usage, and list of applied guards.

Return Type

interface GuardedAgentResult {
  output: string;
  blocked: boolean;
  violations: Violation[];
  cost: number;
  tokensUsed: { input: number; output: number };
  guardsApplied: string[];
}

GuardedAgent.getGuardReport

getGuardReport(): GuardReport

Returns a comprehensive report: total requests, total cost, PII detections by type, injection attempts, compliance violations by framework, content filtered count, rate limit hits, audit event count, and timestamps.

Pre-built Factories

createHIPAAAgent(config): GuardedAgent
createFinancialAgent(config): GuardedAgent
createGDPRAgent(config): GuardedAgent
createSafeAgent(config): GuardedAgent

Pre-configured agent factories with industry-specific guard settings. Each factory creates a GuardedAgent with the appropriate interceptors enabled and configured for its compliance domain.

Factories

- createHIPAAAgent: PII redaction + HIPAA compliance + audit logging
- createFinancialAgent: SEC/FINRA compliance + cost controls + audit
- createGDPRAgent: GDPR PII protection + data minimization + audit
- createSafeAgent: full guard stack (PII + injection + toxicity + rate limit)

Example

import { createHIPAAAgent } from '@waymakerai/aicofounder-agent-sdk';

const agent = createHIPAAAgent({
  model: 'claude-sonnet-4-20250514',
  instructions: 'You are a healthcare information assistant.',
});

const result = await agent.run('Tell me about diabetes management');

Interceptors (7 total)

PIIInterceptor | InjectionInterceptor | CostInterceptor | ComplianceInterceptor | ContentInterceptor | AuditInterceptor | RateLimitInterceptor

Individual interceptor classes that implement the Interceptor interface with processInput() and processOutput() methods. Can be composed into a custom GuardPipeline for advanced use cases.

GuardPipeline

new GuardPipeline()
pipeline.use(interceptor: Interceptor): void
pipeline.processInput(text, context): Promise<PipelineResult>
pipeline.processOutput(text, context): Promise<PipelineResult>

Low-level pipeline that chains interceptors. Add interceptors with use(), then process input/output text through the full chain. Order matters: rate limit, injection, PII, compliance, content, cost, audit.
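The chaining contract can be sketched as follows. The `SketchInterceptor` shape is a simplification of the package's Interceptor interface, and the function is illustrative: each interceptor sees the previous interceptor's (possibly transformed) text, and a block short-circuits the rest of the chain:

```typescript
interface SketchInterceptor {
  name: string;
  processInput(text: string): { text: string; blocked?: boolean };
}

function runPipeline(interceptors: SketchInterceptor[], input: string) {
  let text = input;
  const applied: string[] = [];
  for (const i of interceptors) {
    const result = i.processInput(text);
    applied.push(i.name);
    // Short-circuit: later interceptors never run on blocked input.
    if (result.blocked) return { text, blocked: true, applied };
    // Next interceptor sees the transformed text (e.g. after redaction).
    text = result.text;
  }
  return { text, blocked: false, applied };
}
```

This ordering is why, for example, the PII interceptor runs before cost tracking: downstream stages only ever see redacted text.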

guardTool

guardTool(tool: ToolDefinition, guards: GuardConfig): ToolDefinition

Wrap an individual tool definition with guards. The tool's handler is intercepted and guarded before and after execution.

Reporting

generateCostReport(agent): CostReportData
formatCostReport(data): string
generateComplianceReport(agent): ComplianceReportData
formatComplianceReport(data): string

Generate structured or human-readable reports from a GuardedAgent's activity. Cost reports include per-model token usage and spending. Compliance reports include violation counts by framework.

@waymakerai/aicofounder-policies

Declarative policy engine for PII rules, content rules, model rules, cost rules, and data retention. Includes 9 industry presets and a composable policy builder.

npm install @waymakerai/aicofounder-policies

PolicyEngine

new PolicyEngine(policies: Policy[])

Load one or more policies and evaluate text/context against all active rules. Supports PII patterns, content patterns, model restrictions, cost limits, and data retention rules.

Example

import { PolicyEngine, hipaaPolicy, gdprPolicy } from '@waymakerai/aicofounder-policies';

const engine = new PolicyEngine([hipaaPolicy, gdprPolicy]);
const result = engine.evaluate('Patient SSN: 123-45-6789', {
  model: 'claude-sonnet-4-20250514',
  direction: 'output',
});

console.log(result.allowed);    // false
console.log(result.violations); // [{ rule: 'pii', pattern: 'ssn', ... }]

compose

compose(policies: Policy[], strategy?: CompositionStrategy, conflictResolution?: ConflictResolution): Policy

Merge multiple policies into a single composite policy. Strategies: "merge" (union of all rules), "override" (last policy wins), "strict" (most restrictive rule wins). Conflict resolution: "most-restrictive", "least-restrictive", "first-wins", "last-wins".

Parameters / Options

- policies: Policy[] (required)
- strategy: 'merge' | 'override' | 'strict' (default: 'merge')
- conflictResolution: 'most-restrictive' | 'least-restrictive' | 'first-wins' | 'last-wins' (default: 'most-restrictive')
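For a single numeric rule such as a cost limit, the four conflict-resolution modes reduce to the sketch below. This illustrates the idea only; `resolveLimit` is not a package export, and the real merge algorithm handles full rule objects:

```typescript
type ConflictResolutionSketch =
  | "most-restrictive"
  | "least-restrictive"
  | "first-wins"
  | "last-wins";

// For a numeric limit, "most restrictive" means the smallest value.
function resolveLimit(
  limits: number[],
  conflictResolution: ConflictResolutionSketch = "most-restrictive",
): number {
  if (conflictResolution === "least-restrictive") return Math.max(...limits);
  if (conflictResolution === "first-wins") return limits[0];
  if (conflictResolution === "last-wins") return limits[limits.length - 1];
  return Math.min(...limits); // most-restrictive (default)
}
```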

evaluatePolicy / evaluatePolicies

evaluatePolicy(policy: Policy, text: string, context: EvaluationContext): EvaluationResult
evaluatePolicies(policies: Policy[], text: string, context: EvaluationContext): EvaluationResult

Evaluate text against one or multiple policies. Returns allowed/blocked status, violations with rule details, and the applicable action.

parsePolicy

parsePolicy(input: string | object): Policy

Parse a policy from a YAML string or a JavaScript object. Useful for loading policies from configuration files.

validatePolicy

validatePolicy(policy: Policy): ValidationResult

Validate a policy structure. Returns { valid: boolean, errors: ValidationError[] } with details about any structural issues.

Policy Presets (9 total)

hipaaPolicy | gdprPolicy | ccpaPolicy | secPolicy | pciPolicy | ferpaPolicy | soxPolicy | safetyPolicy | enterprisePolicy

Pre-built policies for common regulatory frameworks. Each includes appropriate PII rules, content rules, model restrictions, cost controls, and data retention settings.

Presets

- hipaaPolicy: HIPAA healthcare compliance
- gdprPolicy: EU General Data Protection Regulation
- ccpaPolicy: California Consumer Privacy Act
- secPolicy: SEC/FINRA financial regulations
- pciPolicy: PCI-DSS payment card security
- ferpaPolicy: FERPA student data protection
- soxPolicy: Sarbanes-Oxley financial reporting
- safetyPolicy: general AI safety (harmful content, jailbreaks)
- enterprisePolicy: combined enterprise baseline

PII Pattern Constants

CORE_PII_PATTERNS | EXTENDED_PII_PATTERNS | ALL_PII_PATTERNS | EMAIL_PATTERN | PHONE_PATTERN | SSN_PATTERN | CREDIT_CARD_PATTERN | ...

Pre-defined regex patterns for 20+ PII types. Use these to build custom policy rules. Includes: EMAIL, PHONE, SSN, CREDIT_CARD, CREDIT_CARD_FORMATTED, IPV4, IPV6, DOB, ADDRESS, MEDICAL_RECORD, PASSPORT, DRIVERS_LICENSE, BANK_ACCOUNT, ZIP_CODE, FULL_NAME, AGE, VIN, DEA, NPI.

Content Pattern Constants

SAFETY_PROHIBITED_PATTERNS | FINANCIAL_REQUIRED_PATTERNS | MEDICAL_REQUIRED_PATTERNS | JAILBREAK_ATTEMPT | PROMPT_INJECTION | ...

Pre-defined content detection patterns for harmful instructions, suicide/self-harm, child exploitation, violence threats, jailbreak attempts, prompt injection, and required disclaimers (investment, medical, legal, AI disclosure).

Cost Rule Presets

FREE_TIER_COST_RULES | STANDARD_COST_RULES | ENTERPRISE_COST_RULES | UNLIMITED_COST_RULES | createCostRules(config)

Pre-configured cost limits by tier. FREE_TIER: $1/day, STANDARD: $50/day, ENTERPRISE: $500/day. Use createCostRules() to define custom limits.

Model Rule Presets

OPENAI_ONLY | ANTHROPIC_ONLY | MAJOR_PROVIDERS_ONLY | NO_DEPRECATED | createModelRules(config)

Model restriction presets. OPENAI_ONLY and ANTHROPIC_ONLY lock to a single provider. MAJOR_PROVIDERS_ONLY allows OpenAI, Anthropic, and Google. NO_DEPRECATED blocks known deprecated models.

PolicyBuilder

new PolicyBuilder(name: string)

Fluent builder for constructing policies programmatically. Chain methods like .pii(patterns).content(rules).model(rules).cost(rules).data(rules).build().

@waymakerai/aicofounder-core

Core SDK with the main CoFounder client, cost tracking, provider management, rate limiting, retry logic, and fallback system.

npm install @waymakerai/aicofounder-core

createCoFounder

createCoFounder(config: CoFounderConfig): CoFounderClient

Create the main CoFounder client. Configure providers, default model, caching, optimization strategy, and plugins. Supports fluent API chaining.

Parameters / Options

- providers: Record<string, string> (required; provider API keys)
- defaultModel: string (default: 'claude-sonnet-4-20250514')
- cache: boolean | CacheConfig (default: false)
- optimize: 'cost' | 'speed' | 'quality' (default: 'cost')
- budget: BudgetConfig (default: undefined)
- plugins: RanaPlugin[] (default: [])

Example

import { createCoFounder } from '@waymakerai/aicofounder-core';

const cofounder = createCoFounder({
  providers: {
    anthropic: process.env.ANTHROPIC_API_KEY!,
    openai: process.env.OPENAI_API_KEY!,
  },
  defaultModel: 'claude-sonnet-4-20250514',
  cache: true,
  optimize: 'cost',
});

// Simple usage
const response = await cofounder.chat('Hello!');

// Fluent API
const response2 = await cofounder
  .provider('anthropic')
  .model('claude-sonnet-4-20250514')
  .optimize('quality')
  .cache(true)
  .chat({ messages: [{ role: 'user', content: 'Hello!' }] });

CostTracker

new CostTracker(config?: CostTrackingConfig)

Track spending across all providers and models. Supports budget limits with configurable periods and warning thresholds. Methods: record(model, inputTokens, outputTokens), getStats(), getCostBreakdown(), isOverBudget(), reset().

Parameters / Options

- budget: BudgetConfig (default: undefined)
- onBudgetWarning: (stats: CostStats) => void (default: undefined)
- onBudgetExceeded: (stats: CostStats) => void (default: undefined)
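The token-to-dollar accounting behind record() can be sketched as below. The class, the "example-model" entry, and its per-million-token prices are all illustrative assumptions, not the package's real pricing table:

```typescript
// Assumed pricing shape: USD per million input/output tokens.
const PRICE_PER_MTOK: Record<string, { input: number; output: number }> = {
  "example-model": { input: 3, output: 15 }, // hypothetical prices
};

class CostTrackerSketch {
  private total = 0;

  // Convert token counts to dollars using the model's per-Mtok rates.
  record(model: string, inputTokens: number, outputTokens: number): void {
    const p = PRICE_PER_MTOK[model];
    if (!p) return; // unknown model: skip (real tracker may estimate)
    this.total += (inputTokens / 1e6) * p.input + (outputTokens / 1e6) * p.output;
  }

  getTotal(): number {
    return this.total;
  }
}
```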

withRetry

withRetry<T>(fn: () => Promise<T>, config?: RetryConfig): Promise<RetryResult<T>>

Retry a function with exponential backoff and jitter. Classifies errors (rate_limit, server_error, network_error, timeout) and only retries retryable failures. Returns the result with retry metadata.

Parameters / Options

- maxRetries: number (default: 3)
- baseDelay: number, ms (default: 1000)
- maxDelay: number, ms (default: 30000)
- backoffMultiplier: number (default: 2)
- jitter: boolean (default: true)
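With these defaults, the delay schedule works out as sketched below. `backoffDelay` is an illustrative helper, and the "full jitter" variant (a uniform draw up to the computed delay) is an assumption; the package may jitter differently:

```typescript
// Exponential backoff: baseDelay * multiplier^attempt, capped at maxDelay.
// With jitter, a random fraction of that delay is used instead.
function backoffDelay(
  attempt: number,             // 0-based retry attempt
  baseDelay = 1000,
  maxDelay = 30000,
  backoffMultiplier = 2,
  jitter = true,
): number {
  const raw = Math.min(baseDelay * backoffMultiplier ** attempt, maxDelay);
  return jitter ? Math.random() * raw : raw;
}
```

So without jitter the first four waits are 1s, 2s, 4s, 8s, and the cap keeps any single wait under 30s.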

createRateLimiter

createRateLimiter(config: RateLimiterOptions): RateLimiter

Provider-aware rate limiter that respects API rate limit headers. Configure per-provider limits and automatic backoff.

Error Classes

RanaError | RanaAuthError | RanaRateLimitError | RanaNetworkError | RanaBudgetExceededError | RanaBudgetWarningError

Typed error classes for different failure modes. Each extends RanaError with provider, model, and context information for debugging.