// rowboat/apps/x/packages/shared/src/runs.ts

import { LlmStepStreamEvent } from "./llm-step-events.js";
import { Message, ToolCallPart } from "./message.js";
import z from "zod";
const BaseRunEvent = z.object({
    runId: z.string(),
    ts: z.iso.datetime().optional(),
    subflow: z.array(z.string()),
});
export const RunProcessingStartEvent = BaseRunEvent.extend({
    type: z.literal("run-processing-start"),
});
export const RunProcessingEndEvent = BaseRunEvent.extend({
    type: z.literal("run-processing-end"),
});
export const StartEvent = BaseRunEvent.extend({
    type: z.literal("start"),
    agentName: z.string(),
    // model + provider are resolved once at runs:create (opts ?? agent ??
    // defaults) and frozen for the run's lifetime. `provider` is a name
    // (e.g. "rowboat", "openai"), not a full LlmProvider object, so API
    // keys are never written into the JSONL log, and rotating a key in
    // models.json applies to existing runs without re-creation.
    model: z.string(),
    provider: z.string(),
    // useCase/subUseCase tag the run for analytics. Optional on read so legacy
    // run files written before these fields existed still parse cleanly.
    useCase: z.enum([
        "copilot_chat",
        "track_block",
        "meeting_note",
        "knowledge_sync",
    ]).optional(),
    subUseCase: z.string().optional(),
});
export const SpawnSubFlowEvent = BaseRunEvent.extend({
    type: z.literal("spawn-subflow"),
    agentName: z.string(),
    toolCallId: z.string(),
});
export const LlmStreamEvent = BaseRunEvent.extend({
    type: z.literal("llm-stream-event"),
    event: LlmStepStreamEvent,
});
export const MessageEvent = BaseRunEvent.extend({
    type: z.literal("message"),
    messageId: z.string(),
    message: Message,
});
export const ToolInvocationEvent = BaseRunEvent.extend({
    type: z.literal("tool-invocation"),
    toolCallId: z.string().optional(),
    toolName: z.string(),
    input: z.string(),
});
export const ToolResultEvent = BaseRunEvent.extend({
    type: z.literal("tool-result"),
    toolCallId: z.string().optional(),
    toolName: z.string(),
    result: z.any(),
});
export const AskHumanRequestEvent = BaseRunEvent.extend({
    type: z.literal("ask-human-request"),
    toolCallId: z.string(),
    query: z.string(),
});
export const AskHumanResponseEvent = BaseRunEvent.extend({
    type: z.literal("ask-human-response"),
    toolCallId: z.string(),
    response: z.string(),
});
export const ToolPermissionRequestEvent = BaseRunEvent.extend({
    type: z.literal("tool-permission-request"),
    toolCall: ToolCallPart,
});
export const ToolPermissionResponseEvent = BaseRunEvent.extend({
    type: z.literal("tool-permission-response"),
    toolCallId: z.string(),
    response: z.enum(["approve", "deny"]),
    scope: z.enum(["once", "session", "always"]).optional(),
});
export const RunErrorEvent = BaseRunEvent.extend({
    type: z.literal("error"),
    error: z.string(),
});
export const RunStoppedEvent = BaseRunEvent.extend({
    type: z.literal("run-stopped"),
    reason: z.enum(["user-requested", "force-stopped"]).optional(),
});
export const RunEvent = z.union([
    RunProcessingStartEvent,
    RunProcessingEndEvent,
    StartEvent,
    SpawnSubFlowEvent,
    LlmStreamEvent,
    MessageEvent,
    ToolInvocationEvent,
    ToolResultEvent,
    AskHumanRequestEvent,
    AskHumanResponseEvent,
    ToolPermissionRequestEvent,
    ToolPermissionResponseEvent,
    RunErrorEvent,
    RunStoppedEvent,
]);
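Every member of the union carries a distinct `z.literal` type tag, so consumers can narrow on `type` when replaying a log. A minimal sketch of that pattern — the event shapes below are simplified stand-ins for illustration, not the real types (real code would use `z.infer<typeof RunEvent>`):

```typescript
// Simplified local event shapes — an assumption for this sketch only.
type DemoRunEvent =
    | { type: "message"; messageId: string }
    | { type: "error"; error: string }
    | { type: "run-stopped"; reason?: "user-requested" | "force-stopped" };

// A log replayer narrows on the `type` discriminant; TypeScript checks the
// switch is exhaustive because every branch returns.
function describeEvent(e: DemoRunEvent): string {
    switch (e.type) {
        case "message":
            return `message ${e.messageId}`;
        case "error":
            return `error: ${e.error}`;
        case "run-stopped":
            return `stopped (${e.reason ?? "unknown"})`;
    }
}
```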
export const ToolPermissionAuthorizePayload = ToolPermissionResponseEvent.pick({
    subflow: true,
    toolCallId: true,
    response: true,
    scope: true,
});
export const AskHumanResponsePayload = AskHumanResponseEvent.pick({
    subflow: true,
    toolCallId: true,
    response: true,
});
export const UseCase = z.enum([
    "copilot_chat",
    "track_block",
    "meeting_note",
    "knowledge_sync",
]);
export const Run = z.object({
    id: z.string(),
    title: z.string().optional(),
    createdAt: z.iso.datetime(),
    agentId: z.string(),
    // Frozen at run creation; stored as names, not full provider configs.
    model: z.string(),
    provider: z.string(),
    useCase: UseCase.optional(),
    subUseCase: z.string().optional(),
    log: z.array(RunEvent),
});
export const ListRunsResponse = z.object({
    runs: z.array(Run.pick({
        id: true,
        title: true,
        createdAt: true,
        agentId: true,
    })),
    nextCursor: z.string().optional(),
});
export const CreateRunOptions = z.object({
    agentId: z.string(),
    // When omitted, model/provider fall through to the agent's preference,
    // then to the global defaults, at runs:create time.
    model: z.string().optional(),
    provider: z.string().optional(),
    useCase: UseCase.optional(),
    subUseCase: z.string().optional(),
});
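A hedged sketch of how the frozen model/provider pair might be resolved at run creation — this is not the real `runsCore.createRun`; the function name and shapes below are assumptions for illustration:

```typescript
// Illustrative types only; the real run/agent types live elsewhere.
type ModelPair = { model: string; provider: string };
type PartialPair = { model?: string; provider?: string };

// Per-field fallthrough: explicit option → agent preference → global default.
// Hypothetical helper, not part of the shared package's API.
function resolveModelAndProvider(
    opts: PartialPair,    // caller-supplied CreateRunOptions fields
    agent: PartialPair,   // agent's preferred model, if it declares one
    defaults: ModelPair,  // signed-in-state-dependent defaults
): ModelPair {
    return {
        model: opts.model ?? agent.model ?? defaults.model,
        provider: opts.provider ?? agent.provider ?? defaults.provider,
    };
}
```

Note that each field falls through independently: an agent that declares only a model still picks up the default provider.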