Backends Overview

Four backends implement IAgentService. CLI backends (Copilot, Claude) spawn a subprocess — the CLI drives the tool loop. API backends (Vercel AI) make HTTP calls — the SDK drives the tool loop. Mock LLM is built-in for testing.

| Feature | Copilot | Claude | Vercel AI | Mock LLM |
| --- | --- | --- | --- | --- |
| run() | ✓ | ✓ | ✓ | ✓ |
| stream() | ✓ | ✓ | ✓ | ✓ |
| runStructured() | ✓ (text extraction) | ✓ (text extraction) | ✓ (generateObject) | ✓ (configurable) |
| Persistent sessions | | | | |
| Tool execution | External (CLI) | External (CLI) | Internal (SDK) | Simulated |
| Permission callbacks | ✓ | ✓ | | ✓ (configurable) |
| Ask user | ✓ | ✗ | ✓ (injected tool) | |
| Auth | GitHub Device Flow | OAuth + PKCE | API key | None |
| listModels() | ✓ (GitHub API) | ✓ (Anthropic API) | ✓ (provider API) | ✓ (static list) |
| Retry on transient errors | | | | ✓ |
| Heartbeat | | | | ✓ |
| External dependency | @github/copilot-sdk | @anthropic-ai/claude-agent-sdk | ai + @ai-sdk/openai-compatible | None |

Copilot

Wraps @github/copilot-sdk — spawns a Node.js subprocess running the Copilot CLI agent.

```sh
npm install @github/copilot-sdk
```

```typescript
import { createCopilotService } from "@witqq/agent-sdk/copilot";

const service = createCopilotService({
  useLoggedInUser: true,            // use GitHub CLI auth (gh auth)
  // OR:
  // githubToken: "ghp_...",        // explicit token
  workingDirectory: process.cwd(),  // optional
  cliPath: "/path/to/copilot",      // optional custom CLI path
  cliArgs: ["--allow-all"],         // optional extra CLI flags
  env: { PATH: "/custom/bin" },     // optional env vars for subprocess
});
```
  • System requirements: @github/copilot-sdk includes a native binary requiring glibc. Alpine Linux (musl) is not supported — use node:20-bookworm-slim or similar.
  • Headless mode: Without supervisor.onPermission / supervisor.onAskUser, the backend auto-approves permissions and auto-answers user questions to prevent hanging.
  • System prompt mode: The default systemMessageMode: "append" adds your prompt to Copilot's built-in prompt. Use systemMessageMode: "replace" to replace it entirely (this removes the built-in tool instructions).
  • Available tools filter: Restrict Copilot built-in tools with availableTools: ["web_search", "web_fetch"] in AgentConfig.
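Putting those options together, a Copilot agent with a permission callback and a restricted tool set might be wired up like this. This is a sketch: the supervisor callback signature and the boolean return value are assumptions for illustration, not confirmed API.

```typescript
import { createCopilotService } from "@witqq/agent-sdk/copilot";

const service = createCopilotService({ useLoggedInUser: true });

const agent = service.createAgent({
  systemPrompt: "You are a research assistant.",
  systemMessageMode: "replace",                 // drop Copilot's built-in prompt
  availableTools: ["web_search", "web_fetch"],  // restrict built-in tools
  supervisor: {
    // Without this callback the backend runs headless and auto-approves.
    // The request shape and boolean return here are assumptions.
    onPermission: async (request) => request.toolName !== "bash",
  },
});

const result = await agent.run("Find the latest Node.js LTS version.");
```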

Claude

Wraps @anthropic-ai/claude-agent-sdk — spawns a subprocess running the Claude CLI agent.

```sh
npm install @anthropic-ai/claude-agent-sdk
```

```typescript
import { createClaudeService } from "@witqq/agent-sdk/claude";

const service = createClaudeService({
  workingDirectory: process.cwd(),
  cliPath: "/path/to/claude",  // optional
  maxTurns: 10,                // optional turn limit
  env: { CLAUDE_CONFIG_DIR: "/custom/config" },
});
```
  • supervisor.onAskUser is not supported — a warning is emitted if set.
  • When supervisor.onPermission is set, the Claude backend automatically sets permissionMode: "default" so the CLI invokes the callback instead of using built-in rules.
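For example, a Claude agent that routes tool permissions through a callback (switching the CLI to permissionMode: "default", per the note above) could look like the following sketch; the callback's request shape and return type are assumptions, not confirmed API.

```typescript
import { createClaudeService } from "@witqq/agent-sdk/claude";

const service = createClaudeService({
  workingDirectory: process.cwd(),
  maxTurns: 10,
});

const agent = service.createAgent({
  systemPrompt: "You are a code reviewer.",
  supervisor: {
    // Setting onPermission makes the backend use permissionMode: "default",
    // so the CLI consults this callback instead of its built-in rules.
    // The request/return shapes here are assumptions for illustration.
    onPermission: async (request) => request.toolName !== "rm",
  },
  // Note: supervisor.onAskUser is NOT supported by this backend.
});

const result = await agent.run("Review the changes in this repository.");
```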

Vercel AI

Wraps the ai SDK with @ai-sdk/openai-compatible for OpenRouter, OpenAI, and other compatible providers.

```sh
npm install ai @ai-sdk/openai-compatible
```

```typescript
import { createVercelAIService } from "@witqq/agent-sdk/vercel-ai";

const service = createVercelAIService({
  apiKey: process.env.OPENROUTER_API_KEY!,
  baseUrl: "https://openrouter.ai/api/v1",  // default
  provider: "openrouter",                   // default
});
```

Pass provider options via providerOptions on AgentConfig:

```typescript
const agent = service.createAgent({
  model: "google/gemini-2.0-flash",
  systemPrompt: "Think step by step.",
  providerOptions: {
    google: { thinkingConfig: { thinkingBudget: 1024 } },
  },
});
```
  • Uses generateText() for runs, generateObject() for structured output, streamText() for streaming.
  • Supports supervisor.onAskUser via an injected ask_user tool.
  • finishReason from the stream finish part is propagated to the done event.
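Because finishReason is propagated to the done event, a streaming consumer can inspect it when the stream ends. The following sketch assumes stream() yields an async iterable of typed events (type: "text" | "done"); the actual event names and shapes are assumptions, not documented here.

```typescript
import { createVercelAIService } from "@witqq/agent-sdk/vercel-ai";

const service = createVercelAIService({ apiKey: process.env.OPENROUTER_API_KEY! });
const agent = service.createAgent({ systemPrompt: "Be concise." });

// Hypothetical event-by-event consumption of the stream.
for await (const event of agent.stream("Explain SSE in one paragraph", {
  model: "anthropic/claude-sonnet-4-5",
})) {
  if (event.type === "text") process.stdout.write(event.text);
  if (event.type === "done") console.log("\nfinishReason:", event.finishReason);
}
```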

Mock LLM

Built-in backend for automated testing. No external dependencies — no API keys, no CLI tools, no network calls. Extends BaseAgent for full lifecycle support (retry, heartbeat, middleware, usage enrichment).

```typescript
import { createMockLLMService } from "@witqq/agent-sdk/mock-llm";

const service = createMockLLMService({ mode: { type: "echo" } });
const agent = service.createAgent({ systemPrompt: "Test" });
```
| Mode | Configuration | Behavior |
| --- | --- | --- |
| Echo | { type: "echo" } | Returns the user's prompt as the response |
| Static | { type: "static", response: "text" } | Always returns the specified response |
| Scripted | { type: "scripted", responses: [...], loop?: true } | Returns responses in sequence. With loop: true, cycles back to the start; without it, repeats the last response |
| Error | { type: "error", error: "msg", code?: "TIMEOUT", recoverable?: true } | Throws AgentSDKError. Set recoverable: true to allow BaseAgent retry |
  • Latency simulation: latency: { type: "fixed", ms: 100 } or latency: { type: "random", minMs, maxMs }
  • Streaming control: streaming: { chunkSize: 5, chunkDelayMs: 10 }
  • Permission simulation: permissions: { toolNames: ["bash"], autoApprove: true } or permissions: { toolNames: ["rm"], denyTools: ["rm"] }
  • Tool call simulation: toolCalls: [{ toolName: "search", args: {...}, result: {...} }]
  • Structured output: structuredOutput: { city: "Paris", country: "France" }
  • Configurable finishReason: finishReason: "stop" | "length" | "tool-calls"
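These options compose. For instance, a scripted mock with fixed latency and chunked streaming might be configured as below; the option names come from the table and list above, while the shape of the value returned by run() is an assumption.

```typescript
import { createMockLLMService } from "@witqq/agent-sdk/mock-llm";

const service = createMockLLMService({
  mode: {
    type: "scripted",
    responses: ["First reply", "Second reply"],
    loop: true, // a third call cycles back to "First reply"
  },
  latency: { type: "fixed", ms: 50 },            // 50 ms before each response
  streaming: { chunkSize: 5, chunkDelayMs: 10 }, // 5-char chunks, 10 ms apart
});

const agent = service.createAgent({ systemPrompt: "Test" });

const first = await agent.run("hi");  // scripted response #1
const second = await agent.run("hi"); // scripted response #2
```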

See Mock LLM Guide for testing patterns and integration with createMockAgentService.

All backends share AgentConfig and return the same AgentResult. Switch by changing only the service creation:

```typescript
import { createAgentService } from "@witqq/agent-sdk";

const config = {
  systemPrompt: "You are a helpful assistant.",
  tools: [searchTool],
};

// Switch backend:
const service = await createAgentService("copilot", { useLoggedInUser: true });
// const service = await createAgentService("claude", { workingDirectory: "." });
// const service = await createAgentService("vercel-ai", { apiKey: "..." });

// Mock LLM — use a direct import (not registered in createAgentService):
// import { createMockLLMService } from "@witqq/agent-sdk/mock-llm";
// const service = createMockLLMService({ mode: { type: "echo" } });

const agent = service.createAgent(config);
const result = await agent.run("Hello", { model: "gpt-5-mini" });
```

Or use direct backend imports:

```typescript
import { createCopilotService } from "@witqq/agent-sdk/copilot";
import { createClaudeService } from "@witqq/agent-sdk/claude";
import { createVercelAIService } from "@witqq/agent-sdk/vercel-ai";
import { createMockLLMService } from "@witqq/agent-sdk/mock-llm";
```
| Backend | Model ID example | Short name |
| --- | --- | --- |
| Copilot | gpt-4o | (same) |
| Claude | claude-sonnet-4-5-20250514 | sonnet |
| Vercel AI | anthropic/claude-sonnet-4-5 | (provider-specific) |
| Mock LLM | mock-model | (any string) |

Use service.listModels() to get available models per backend.
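For example, against the Mock LLM backend (whose list is static, per the feature table); the exact shape of the returned model descriptors is an assumption.

```typescript
import { createMockLLMService } from "@witqq/agent-sdk/mock-llm";

const service = createMockLLMService({ mode: { type: "echo" } });
const models = await service.listModels(); // static list for the mock backend
console.log(models);
```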


API Reference: Core Exports · Copilot · Claude · Vercel AI · Mock LLM