Mock LLM Backend

MockLLMBackendOptions

Defined in: types/backends.ts:35

Options for the Mock LLM backend

optional finishReason: string

Defined in: types/backends.ts:45

Override finishReason in done events (default: "stop")

optional latency: MockLLMLatency

Defined in: types/backends.ts:41

Latency simulation — delay before each response

optional mode: MockLLMResponseMode

Defined in: types/backends.ts:37

Response mode configuration

optional models: object[]

Defined in: types/backends.ts:39

Models to advertise from listModels(). Each entry has the following fields:

optional description: string

id: string

optional name: string

optional permissions: MockLLMPermissionOptions

Defined in: types/backends.ts:47

Permission simulation for tool calls

optional streaming: MockLLMStreamingOptions

Defined in: types/backends.ts:43

Streaming behavior control

optional structuredOutput: unknown

Defined in: types/backends.ts:51

Structured output — return specific JSON from runStructured()

optional toolCalls: MockLLMToolCall[]

Defined in: types/backends.ts:49

Tool call simulation — emit tool_call_start/end events during streaming


MockLLMPermissionOptions

Defined in: types/backends.ts:75

Permission simulation options

optional autoApprove: boolean

Defined in: types/backends.ts:79

Auto-approve all permission requests (default: false — uses supervisor callback)

optional denyTools: string[]

Defined in: types/backends.ts:81

Tool names to always deny

toolNames: string[]

Defined in: types/backends.ts:77

Tool names to simulate permission requests for
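As a sketch, a permission-simulation config using the three fields above might look like this. The interface is a local copy assembled from the documented fields, so no SDK import is needed; how autoApprove and denyTools interact when both are set is not specified here, so the comment only restates each field's documented behavior.

```typescript
// Local copy of the documented shape (for illustration only).
interface MockLLMPermissionOptions {
  /** Tool names to simulate permission requests for */
  toolNames: string[];
  /** Auto-approve all permission requests (default: false) */
  autoApprove?: boolean;
  /** Tool names to always deny */
  denyTools?: string[];
}

// Simulate permission requests for bash and file_write,
// auto-approving them instead of calling the supervisor callback,
// while file_write is listed as always denied.
const perms: MockLLMPermissionOptions = {
  toolNames: ["bash", "file_write"],
  autoApprove: true,
  denyTools: ["file_write"],
};
```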


MockLLMStreamingOptions

Defined in: types/backends.ts:67


Streaming chunk control

optional chunkDelayMs: number

Defined in: types/backends.ts:71

Delay in ms between chunks (default: 0)

optional chunkSize: number

Defined in: types/backends.ts:69

Characters per chunk (default: word-boundary splitting)
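The two knobs above can be combined or used independently; a minimal sketch, with the interface copied locally from the fields documented here:

```typescript
// Local copy of the documented shape (for illustration only).
interface MockLLMStreamingOptions {
  /** Characters per chunk (default: word-boundary splitting) */
  chunkSize?: number;
  /** Delay in ms between chunks (default: 0) */
  chunkDelayMs?: number;
}

// Stream five characters at a time, 10 ms apart.
const fixedChunks: MockLLMStreamingOptions = { chunkSize: 5, chunkDelayMs: 10 };

// Omit chunkSize to fall back to word-boundary splitting.
const wordChunks: MockLLMStreamingOptions = { chunkDelayMs: 25 };
```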


MockLLMToolCall

Defined in: types/backends.ts:85

Tool call simulation — emitted as tool_call_start/end events in stream

optional args: Record<string, unknown>

Defined in: types/backends.ts:89

Tool call arguments

optional result: unknown

Defined in: types/backends.ts:91

Tool execution result

optional toolCallId: string

Defined in: types/backends.ts:93

Tool call ID (auto-generated if not provided)

toolName: string

Defined in: types/backends.ts:87

Tool name (e.g. "bash", "file_write")
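A simulated tool call assembled from the fields above might look like this. The interface is a local copy of the documented shape, and the args/result contents are purely illustrative (the mock carries whatever you supply into the tool_call_start/end events):

```typescript
// Local copy of the documented shape (for illustration only).
interface MockLLMToolCall {
  /** Tool name */
  toolName: string;
  /** Tool call arguments */
  args?: Record<string, unknown>;
  /** Tool execution result */
  result?: unknown;
  /** Tool call ID (auto-generated if not provided) */
  toolCallId?: string;
}

// A simulated bash invocation; toolCallId is omitted so the
// backend generates one automatically.
const call: MockLLMToolCall = {
  toolName: "bash",
  args: { command: "echo hi" },
  result: { stdout: "hi\n", exitCode: 0 },
};
```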

MockLLMLatency = { ms: number; type: "fixed"; } | { maxMs: number; minMs: number; type: "random"; }

Defined in: types/backends.ts:62

Latency simulation configuration
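Both variants of the union can be sketched as follows (the type is reproduced verbatim from the definition above, so no SDK import is needed):

```typescript
// Union reproduced from the definition above.
type MockLLMLatency =
  | { ms: number; type: "fixed" }
  | { maxMs: number; minMs: number; type: "random" };

// Every response delayed by exactly 100 ms.
const fixed: MockLLMLatency = { type: "fixed", ms: 100 };

// Each response delayed by a random duration between 50 and 250 ms,
// useful for surfacing race conditions in consumers.
const jittered: MockLLMLatency = { type: "random", minMs: 50, maxMs: 250 };
```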


MockLLMResponseMode = { type: "echo"; } | { response: string; type: "static"; } | { loop?: boolean; responses: string[]; type: "scripted"; } | { code?: string; error: string; recoverable?: boolean; type: "error"; }

Defined in: types/backends.ts:55

Response mode — determines how the mock agent generates responses
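One value per variant of the union, as a sketch (the type is reproduced verbatim from the definition above; the response strings and error details are illustrative):

```typescript
// Union reproduced from the definition above.
type MockLLMResponseMode =
  | { type: "echo" }
  | { response: string; type: "static" }
  | { loop?: boolean; responses: string[]; type: "scripted" }
  | { code?: string; error: string; recoverable?: boolean; type: "error" };

// Echo the user's input back.
const echo: MockLLMResponseMode = { type: "echo" };

// Always return the same string.
const fixed: MockLLMResponseMode = { type: "static", response: "Hello!" };

// Walk through a list of canned responses; loop controls whether
// the list wraps around once exhausted.
const scripted: MockLLMResponseMode = {
  type: "scripted",
  responses: ["step 1", "step 2", "done"],
  loop: true,
};

// Simulate a provider error, marked recoverable.
const failure: MockLLMResponseMode = {
  type: "error",
  error: "rate limit exceeded",
  code: "429",
  recoverable: true,
};
```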

createMockLLMService(options?): IAgentService

Defined in: backends/mock-llm.ts:418

Create a mock LLM backend service for automated testing.

Unlike the lightweight createMockAgentService (from @witqq/agent-sdk/testing), this backend extends BaseAgent and participates in the full agent lifecycle: retry, heartbeat, activity timeout, middleware pipeline, and usage enrichment.

Parameters

options: MockLLMBackendOptions = {}

Returns

IAgentService

import { createMockLLMService } from "@witqq/agent-sdk/mock-llm";

// Basic echo mode
const service = createMockLLMService({ mode: { type: "echo" } });

// With latency simulation and streaming control
const realisticService = createMockLLMService({
  mode: { type: "static", response: "Hello!" },
  latency: { type: "fixed", ms: 100 },
  streaming: { chunkSize: 5, chunkDelayMs: 10 },
  finishReason: "stop",
});

// With permission simulation
const permService = createMockLLMService({
  mode: { type: "echo" },
  permissions: { toolNames: ["bash", "file_write"], autoApprove: true },
});