Compare commits


17 commits
v0.3.0 ... main

Author SHA1 Message Date
Ramnique Singh
17afc935bf
Merge pull request #524 from rowboatlabs/dev
identify signed-in users on every app startup
2026-04-28 20:22:26 +05:30
Ramnique Singh
de176ec458 identify signed-in users on every app startup
Previously identify() only fired during the OAuth completion flow, so
existing installs (signed in before analytics shipped) and every cold
start of v0.3.4+ would emit main-process events under the anonymous
installation_id until the user happened to re-sign-in.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 20:21:37 +05:30
Ramnique Singh
0dff57e8f7
Merge pull request #523 from rowboatlabs/dev
add posthog analytics for llm usage and auth events
2026-04-28 20:10:13 +05:30
Ramnique Singh
43c1ba719f add posthog analytics for llm usage and auth events
Captures per-LLM-call token usage tagged by feature (copilot chat,
track block, meeting note, knowledge sync), plus sign-in / sign-out
and identity events. Renderer and main share one PostHog identity so
events from either process resolve to the same user.

See apps/x/ANALYTICS.md for the event catalog, person properties,
use-case taxonomy, and how to add new events.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 19:53:40 +05:30
arkml
f14f3b0347
Merge pull request #520 from rowboatlabs/dev
Dev
2026-04-24 18:44:24 +05:30
Ramnique Singh
d42fb26bcc allow per-track model + provider overrides
Track block YAML gains optional `model` and `provider` fields. When set,
the track runner passes them through to `createRun` so this specific
track runs on the chosen model/provider; when unset the global default
flows through (`getTrackBlockModel()` + the resolved provider).

The track skill picks up the new fields automatically via the embedded
`z.toJSONSchema(TrackBlockSchema)` and adds an explicit "Do Not Set"
section: copilot leaves them omitted unless the user named a specific
model or provider for the track. Common bad reasons ("might be faster",
"in case it matters", complex instruction) are called out so the
defaults stay the path of least resistance.

Track modal Details tab shows the values when set, in the same
conditional `<dt>/<dd>` style as the lastRun fields.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 16:58:18 +05:30
Ramnique Singh
caf00fae0c configurable kg / meeting / track-block model overrides
Bring back per-category model selection that 5c4aa772 dropped, plus add a
new track-block category. Each is a BYOK-only override on `LlmModelConfig`
(`knowledgeGraphModel`, `meetingNotesModel`, `trackBlockModel`); signed-in
users always get the curated gateway default and never hit the on-disk
config.

Three helpers in core/models/defaults.ts — `getKgModel`,
`getTrackBlockModel`, `getMeetingNotesModel` — each check `isSignedIn`
first (fast path) and fall through to `cfg.<field> ?? cfg.model` for BYOK.
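The fallthrough shape of those helpers might look like this sketch (synchronous, with `isSignedIn` and the config passed in as stubs; the real helpers are async and read state internally, and the gateway default name here is a placeholder):

```typescript
// Hedged sketch of the per-category helper shape described above.
// Field names come from the commit message; everything else is assumed.
type LlmModelConfig = {
  model: string;                  // global BYOK default
  knowledgeGraphModel?: string;   // per-category overrides
  meetingNotesModel?: string;
  trackBlockModel?: string;
};

const GATEWAY_DEFAULT = 'gateway-default'; // placeholder for the curated signed-in model

function makeModelGetter(field: keyof LlmModelConfig) {
  return (isSignedIn: boolean, cfg: LlmModelConfig): string => {
    if (isSignedIn) return GATEWAY_DEFAULT;                 // fast path: never touch on-disk config
    return (cfg[field] as string | undefined) ?? cfg.model; // BYOK: cfg.<field> ?? cfg.model
  };
}

const getKgModel = makeModelGetter('knowledgeGraphModel');
const getTrackBlockModel = makeModelGetter('trackBlockModel');
const getMeetingNotesModel = makeModelGetter('meetingNotesModel');
```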

The model is now picked at the invocation site rather than via runtime
agent-name branching: each top-level `createRun` for a polling KG agent
or a track-block update passes `model: await getXxxModel()`. The `model:`
declarations on the affected agent YAMLs are dropped — they were dead
code under the per-call override. Standalone (non-run) callers
`track/routing` and `summarize_meeting` use the helpers inline.

Settings dialog and the two onboarding flows surface the two new fields
("Meeting Notes Model", "Track Block Model") next to the existing
"Knowledge Graph Model"; `repo.setConfig` persists all three per-provider.

Note: the signed-in `RowboatModelSettings` panel still has its
now-defunct kg selector; that's a UI cleanup for a later pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 16:44:02 +05:30
Ramnique Singh
bdf270b7a1 convert Today.md track blocks to event-driven and batch Gmail sync events
Removes polling schedules from the up-next and calendar track blocks on
Today.md so they refresh only on calendar.synced events, and rewrites
the emails track instruction to consume a multi-thread digest payload.
Batches Gmail sync so one email.synced event covers a whole sync run
(capped at 10 threads per digest) instead of one event per thread,
which collapses Pass 1 routing calls for multi-thread syncs.
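The cap-at-10 batching amounts to a simple chunking step; a sketch (`batchThreads` and the digest shape are illustrative names, not the actual code):

```typescript
// Chunk synced threads into digests of up to `cap` threads each, so one
// email.synced event (and one Pass 1 routing call) covers a whole chunk
// instead of firing once per thread.
function batchThreads<T>(threads: T[], cap = 10): T[][] {
  const digests: T[][] = [];
  for (let i = 0; i < threads.length; i += cap) {
    digests.push(threads.slice(i, i + cap));
  }
  return digests;
}
```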

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 11:15:56 +05:30
Arjun
0bb256879c preserve formatting in chat input text 2026-04-23 21:29:51 +05:30
Arjun
75842fa06b assistant chat ui shows the model name properly 2026-04-23 00:49:06 +05:30
Arjun
f4dbb58a77 add rowboat meeting notes to graph 2026-04-23 00:35:08 +05:30
Ramnique Singh
5c4aa77255 freeze model + provider per run at creation time
The model dropdown was broken in two ways: it wrote to ~/.rowboat/config/models.json
(the BYOK creds file, stamped with a fake `flavor: 'openrouter'` to satisfy zod
when signed in), and the runtime ignored that write entirely for signed-in users
because `streamAgent` hard-coded `gpt-5.4`. Model selection was also globally
scoped, so every chat shared one brain.

This change moves model + provider out of the global config and onto the run
itself, resolved once at runs:create and frozen for the run's lifetime.

## Resolution

`runsCore.createRun` resolves per-field, falling through:

  run.model    = opts.model    ?? agent.model    ?? defaults.model
  run.provider = opts.provider ?? agent.provider ?? defaults.provider
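In TypeScript terms the per-field fallthrough is just nullish coalescing; a minimal sketch with assumed names:

```typescript
interface ModelPair { model?: string; provider?: string }

// Resolve once at runs:create; the result is frozen onto the run.
function resolveRunPair(
  opts: ModelPair,
  agent: ModelPair,
  defaults: { model: string; provider: string },
): { model: string; provider: string } {
  return {
    model: opts.model ?? agent.model ?? defaults.model,
    provider: opts.provider ?? agent.provider ?? defaults.provider,
  };
}
```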

A new `core/models/defaults.ts` is the only place in the codebase that branches
on signed-in state. `getDefaultModelAndProvider()` returns name strings;
`resolveProviderConfig(name)` does the name → full LlmProvider lookup at
runtime. `createProvider` learns about `flavor: 'rowboat'` so the gateway is
just another flavor.

`provider` is stored as a name (e.g. `"rowboat"`, `"openai"`), not a full
LlmProvider object. API keys never get written into the JSONL log; rotating a
key in models.json applies to existing runs without re-creation. Cost: deleting
a provider from settings breaks runs that referenced it (clear error surfaced
via `resolveProviderConfig`).

## Runtime

`streamAgent` no longer resolves anything — it reads `state.runModel` /
`state.runProvider`, looks up the provider config, instantiates. Subflows
inherit the parent run's pair, so KG / inline-task subagents run on whatever
the main run resolved to at creation. The `knowledgeGraphAgents` array,
`isKgAgent`, and the per-agent default constants are gone.

KG / inline-task / pre-built agents declare their preferred model in YAML
frontmatter (claude-haiku-4.5 / claude-sonnet-4.6) — used at resolution time
when those agents are themselves the top-level agent of a run (background
triggers, scheduled tasks, etc.).

## Standalone callers

Non-run LLM call sites (summarize_meeting, track/routing, builtin-tools
parseFile) and `agent-schedule/runner` were branching on signed-in
independently. They all route through `getDefaultModelAndProvider` +
`resolveProviderConfig` + `createProvider` now; `agent-schedule/runner`
switched from raw `runsRepo.create` to `runsCore.createRun` so resolution
applies to scheduled-agent runs too.

## UI

`chat-input-with-mentions` stops calling `models:saveConfig`. The dropdown
notifies the parent via `onSelectedModelChange` ({provider, model} as names);
App.tsx stashes selection per-tab and passes it to the next `runs:create`.
When a run already exists, the input fetches it and renders a static label —
model can't change mid-run.

## Legacy runs

A lenient zod schema in `repo.ts` (`StartEvent.extend(...optional)` plus
`RunEvent.or(LegacyStartEvent)`) parses pre-existing runs. `repo.fetch` fills
missing model/provider from current defaults and returns the strict canonical
`Run` type. No file-rewriting migration; no impact on the canonical schema in
`@x/shared`.
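The fill-on-fetch idea can be sketched without the zod details (assumed names; the real code uses the lenient schema union described above):

```typescript
// A legacy start event may lack model/provider; repo.fetch widens it with
// current defaults and returns the strict canonical shape. No file rewrite.
type LegacyStart = { type: 'start'; model?: string; provider?: string };
type CanonicalStart = { type: 'start'; model: string; provider: string };

function canonicalizeStart(
  ev: LegacyStart,
  defaults: { model: string; provider: string },
): CanonicalStart {
  return {
    type: 'start',
    model: ev.model ?? defaults.model,          // fill from current defaults
    provider: ev.provider ?? defaults.provider,
  };
}
```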

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 12:26:01 +05:30
Ramnique Singh
51f2ad6e8a
Merge pull request #517 from rowboatlabs/dev
Dev
2026-04-21 14:39:46 +05:30
Ramnique Singh
15567cd1dd let tool failures be observed by the model instead of killing the run
streamAgent executed tools with no try/catch around the call. A throw
from execTool or from a subflow agent streamed up through streamAgent,
out of trigger's inner catch (which rethrows non-abort errors), and
into the new top-level catch that the previous commit added. That
surfaces the failure — but it ends the run. One misbehaving tool took
down the whole conversation.

Wrap the tool-execution block in a try/catch. On abort, rethrow so the
existing AbortError path still fires. On any other error, convert the
exception into a tool-result payload ({ success: false, error, toolName })
and keep going. The model then sees a tool-result message saying the
tool failed with a specific message and can apologize, retry with
different arguments, pick a different tool, or explain to the user —
the normal recovery moves it already knows how to make.
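A sketch of that wrap (names are illustrative; only the shape of the catch matters):

```typescript
type ToolOutcome =
  | { success: true; toolName: string; output: unknown }
  | { success: false; toolName: string; error: string };

async function execToolObserved(
  toolName: string,
  exec: () => Promise<unknown>,
  signal?: AbortSignal,
): Promise<ToolOutcome> {
  try {
    return { success: true, toolName, output: await exec() };
  } catch (err) {
    // Abort must still propagate so the existing AbortError path fires.
    if (signal?.aborted || (err instanceof Error && err.name === 'AbortError')) throw err;
    // Any other throw becomes an observable tool-result; the run keeps going.
    return { success: false, toolName, error: err instanceof Error ? err.message : String(err) };
  }
}
```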

No change to happy-path tool execution, no change to abort handling,
no change to subflow agent semantics (subflows that themselves error
are treated identically to regular tool errors at the call site).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:38:19 +05:30
Ramnique Singh
c81d3cb27b surface silent runtime failures as error events
AgentRuntime.trigger() wrapped its body in try/finally with no outer
catch. An inner catch around the streamAgent for-await only handled
AbortError and rethrew everything else. Call sites fire-and-forget
trigger (runs.ts:26,60,72), so any thrown error became an unhandled
promise rejection. The finally still ran and published
run-processing-end, but nothing told the renderer why — the chat
showed the spinner, then an empty assistant bubble.

Provider misconfig, invalid API keys, unknown model ids, streamText
setup throws, runsRepo.fetch or loadAgent failing, and provider
auth/rate-limit rejections on the first chunk all hit this path on a
first message. All invisible.

Add a top-level catch that formats the error to a string and emits a
{type: "error"} RunEvent via the existing runsRepo/bus path. The
renderer already renders those as a chat bubble plus toast
(App.tsx:2069) — no UI work needed.
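The new logic is essentially one formatting step plus a catch; a sketch with the emit path stubbed:

```typescript
type ErrorRunEvent = { type: 'error'; error: string };

// Format any thrown value into the string payload of an error RunEvent.
function toErrorEvent(err: unknown): ErrorRunEvent {
  const error = err instanceof Error ? err.message : String(err);
  return { type: 'error', error };
}

// Top-level catch for a fire-and-forget trigger: publish instead of letting
// the rejection vanish (publish stands in for the runsRepo/bus path).
async function triggerSafely(body: () => Promise<void>, publish: (e: ErrorRunEvent) => void) {
  try {
    await body();
  } catch (err) {
    publish(toErrorEvent(err)); // renderer shows chat bubble + toast
  }
}
```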

No changes to the abort path: user-initiated stops still flow through
the existing inner catch and the signal.aborted branch that emits
run-stopped.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:36:00 +05:30
Ramnique Singh
32b6b2f1c0 Merge branch 'main' into dev 2026-04-21 13:36:58 +05:30
tusharmagar
0f051ea467 fix: duplicate navigation button 2026-04-21 13:02:44 +05:30
56 changed files with 1449 additions and 360 deletions


@@ -109,6 +109,7 @@ Long-form docs for specific features. Read the relevant file before making changes
 | Feature | Doc |
 |---------|-----|
 | Track Blocks — auto-updating note content (scheduled / event-driven / manual), Copilot skill, prompts catalog | `apps/x/TRACKS.md` |
+| Analytics — PostHog event catalog, person properties, use-case taxonomy, how to add a new event | `apps/x/ANALYTICS.md` |

 ## Common Tasks

apps/x/ANALYTICS.md Normal file

@ -0,0 +1,146 @@
# Analytics
> PostHog instrumentation for `apps/x`. We capture LLM token usage (broken down by feature) and identity/auth events. Renderer (`posthog-js`) and main (`posthog-node`) share one stable distinct_id and one identified user, so events from either process resolve to the same person.
## Identity model
- **Anonymous distinct_id** = `installationId` from `~/.rowboat/config/installation.json` (auto-generated on first run; see `packages/core/src/analytics/installation.ts`).
- Renderer fetches it from main on startup via the `analytics:bootstrap` IPC channel and passes it as PostHog's `bootstrap.distinctID`. Main uses it directly in `posthog-node`.
- **On rowboat sign-in**: `posthog.identify(rowboatUserId)` runs in **both** processes.
- Main does it from `apps/main/src/oauth-handler.ts:285` (after `getBillingInfo()` resolves) — this is the load-bearing call, since main always runs.
- Renderer mirrors via `apps/renderer/src/hooks/useAnalyticsIdentity.ts` listening on the `oauth:didConnect` IPC event.
- Main also calls `alias()` so events emitted under the anonymous installation_id are linked to the identified user retroactively.
- **On every app startup**: main re-identifies if rowboat tokens exist (`packages/core/src/analytics/identify.ts`, called from `apps/main/src/main.ts` whenReady). Idempotent — PostHog merges person properties on duplicate identifies. This catches users who installed before analytics existed, and refreshes person properties (plan/status) on every launch.
- **On rowboat sign-out**: `posthog.reset()` in both processes; future events resolve to the installation_id again.
- **`email`** is set on `identify` from main only (sourced from `/v1/me`). Person properties are server-side, so the renderer's events resolve to the same record without redundantly setting it.
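The startup re-identify described above might be shaped like this (dependencies injected as stubs for illustration; the real `identifyIfSignedIn` reads tokens and calls the PostHog client directly):

```typescript
async function identifyIfSignedIn(deps: {
  hasRowboatTokens: () => Promise<boolean>;
  fetchMe: () => Promise<{ userId: string; plan: string; status: string }>;
  identify: (distinctId: string, props: Record<string, string>) => void;
}): Promise<boolean> {
  if (!(await deps.hasRowboatTokens())) return false; // anonymous install: stay on installation_id
  const me = await deps.fetchMe();
  // Idempotent: PostHog merges person properties on duplicate identifies.
  deps.identify(me.userId, { plan: me.plan, status: me.status });
  return true;
}
```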
## Event catalog
### `llm_usage`
Emitted whenever ai-sdk returns token usage (one event per LLM call, not per run).
| Property | Type | Notes |
|---|---|---|
| `use_case` | enum | `copilot_chat` / `track_block` / `meeting_note` / `knowledge_sync` |
| `sub_use_case` | string? | Refines `use_case` — see taxonomy table below |
| `agent_name` | string? | Present when the call goes through an agent run (`createRun`); omitted for direct `generateText`/`generateObject` |
| `model` | string | e.g. `claude-sonnet-4-6` |
| `provider` | string | `rowboat` = cloud LLM gateway; otherwise the BYOK provider (`openai`, `anthropic`, `ollama`, etc.) |
| `input_tokens` | number | |
| `output_tokens` | number | |
| `total_tokens` | number | |
| `cached_input_tokens` | number? | When the provider reports it |
| `reasoning_tokens` | number? | When the provider reports it |
#### Use-case taxonomy
Every `llm_usage` emit point in the codebase:
| `use_case` | `sub_use_case` | `agent_name`? | Where | File:line |
|---|---|---|---|---|
| `copilot_chat` | (none) | yes | User chat in renderer (default for any `createRun` without `useCase`) | `packages/core/src/agents/runtime.ts:1313` (finish-step in `streamLlm`) |
| `copilot_chat` | `scheduled` | yes | Background scheduled agent runner | `packages/core/src/agent-schedule/runner.ts:167` |
| `copilot_chat` | `file_parse` | inherits | `parseFile` builtin tool inside any chat | `packages/core/src/application/lib/builtin-tools.ts:770` |
| `track_block` | `routing` | no | Pass 1 routing classifier (`generateObject`) | `packages/core/src/knowledge/track/routing.ts:104` |
| `track_block` | `run` | yes | Pass 2 track block execution | `packages/core/src/knowledge/track/runner.ts:109` (createRun) |
| `meeting_note` | (none) | no | Meeting transcript summarizer (`generateText`) | `packages/core/src/knowledge/summarize_meeting.ts:161` |
| `knowledge_sync` | `agent_notes` | yes | Agent notes learning service | `packages/core/src/knowledge/agent_notes.ts:309` (createRun) |
| `knowledge_sync` | `tag_notes` | yes | Note tagging | `packages/core/src/knowledge/tag_notes.ts:86` (createRun) |
| `knowledge_sync` | `build_graph` | yes | Knowledge graph note creation | `packages/core/src/knowledge/build_graph.ts:253` (createRun) |
| `knowledge_sync` | `label_emails` | yes | Email labeling | `packages/core/src/knowledge/label_emails.ts:73` (createRun) |
| `knowledge_sync` | `inline_task_run` | yes | Inline `@rowboat` task execution (two call sites) | `packages/core/src/knowledge/inline_tasks.ts:471, 552` (createRun) |
| `knowledge_sync` | `inline_task_classify` | no | Inline task scheduling classifier (`generateText`) | `packages/core/src/knowledge/inline_tasks.ts:673` |
| `knowledge_sync` | `pre_built` | yes | Pre-built scheduled agents | `packages/core/src/pre_built/runner.ts:43` (createRun) |
`testModelConnection` in `packages/core/src/models/models.ts` is **not** instrumented (diagnostic only — would skew per-model counts).
### `user_signed_in`
Emitted when rowboat OAuth completes. Properties: `plan`, `status` (subscription state from `/v1/me`).
Emitted from **both** processes:
- Main (`apps/main/src/oauth-handler.ts:290`) — always fires; load-bearing.
- Renderer (`apps/renderer/src/hooks/useAnalyticsIdentity.ts:75`) — fires only when the renderer is open. Same distinct_id, so dedup is automatic in PostHog dashboards.
### `user_signed_out`
Emitted on rowboat disconnect. No properties. Followed immediately by `posthog.reset()`.
Emit points: `apps/main/src/oauth-handler.ts:369` and `apps/renderer/src/hooks/useAnalyticsIdentity.ts:82`.
### Other events (pre-existing, not added by the LLM-usage work)
All in `apps/renderer/src/lib/analytics.ts`:
- `chat_session_created` — `{ run_id }`
- `chat_message_sent` — `{ voice_input, voice_output, search_enabled }`
- `oauth_connected` / `oauth_disconnected` — `{ provider }`
- `voice_input_started` — no properties
- `search_executed` — `{ types: string[] }`
- `note_exported` — `{ format }`
## Person properties
Persistent across sessions for the same user. Set via `posthog.people.set` or as the `properties` arg to `identify`.
| Property | Set by | Notes |
|---|---|---|
| `email` | main on identify | From `/v1/me`; powers PostHog cohort match + integrations |
| `plan`, `status` | main on identify | Subscription state |
| `api_url` | both processes (init + identify) | Distinguishes prod / staging / custom — assign meaning in PostHog dashboard. `https://api.x.rowboatlabs.com` = production |
| `signed_in` | renderer | `true` while rowboat OAuth is connected |
| `{provider}_connected` | renderer | One of `gmail`, `calendar`, `slack`, `rowboat` |
| `total_notes` | renderer (init) | Workspace size signal |
| `has_used_search`, `has_used_voice` | renderer | One-shot first-use flags |
## How to add a new event
1. **Naming**: `snake_case`, `[object]_[verb]` shape (e.g. `note_exported`, not `exportedNote`). Matches PostHog convention.
2. **Pick the right helper**:
- LLM token usage → `captureLlmUsage()` from `@x/core/dist/analytics/usage.js`. Always include `useCase`; add `subUseCase` if it refines an existing top-level case.
- Anything else from main → `capture()` from `@x/core/dist/analytics/posthog.js`.
- Anything else from renderer → add a typed wrapper to `apps/renderer/src/lib/analytics.ts` and call it from the UI code (don't call `posthog.capture()` directly from components).
3. **If it's a new LLM call site**:
- Goes through `createRun`? Pass `useCase` (and optionally `subUseCase`) to the create call. The runtime auto-emits at every `finish-step` — no further code needed.
- Direct `generateText` / `generateObject`? Call `captureLlmUsage` after the call with `model`, `provider`, `usage` from the result.
- Inside a builtin tool? Call `getCurrentUseCase()` from `analytics/use_case.ts` first — the parent run's tag is propagated via `AsyncLocalStorage`. Use `ctx?.useCase ?? 'copilot_chat'` as fallback.
4. **Update this file in the same PR.** That's the contract — without it, dashboards and downstream consumers drift.
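The `AsyncLocalStorage` propagation named in step 3 can be sketched as follows (`getCurrentUseCase` matches the doc; `runWithUseCase` is an assumed wrapper name):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

const useCaseStore = new AsyncLocalStorage<{ useCase: string }>();

// Builtin tools calling the LLM inherit the parent run's tag.
function getCurrentUseCase(): string {
  return useCaseStore.getStore()?.useCase ?? 'copilot_chat'; // fallback from step 3
}

function runWithUseCase<T>(useCase: string, fn: () => T): T {
  return useCaseStore.run({ useCase }, fn);
}
```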
## How to add a new use-case sub-case
- **New `sub_use_case` under an existing top-level case**: just pick a string and add a row to the taxonomy table above. No code changes beyond the call site.
- **New top-level `use_case`**: edit the `UseCase` enum in `packages/shared/src/runs.ts` and the matching `UseCase` type in `packages/core/src/analytics/use_case.ts`. Then update this doc.
## Configuration
PostHog credentials live in two env vars (also baked into the binary at packaging time — never set at runtime in distributed builds):
- `VITE_PUBLIC_POSTHOG_KEY` — project API key (e.g. `phc_xxx`). Public-facing — safe to commit if you'd rather hardcode.
- `VITE_PUBLIC_POSTHOG_HOST` — e.g. `https://us.i.posthog.com`. Defaults to US cloud if unset.
Where they're consumed:
- **Renderer** (Vite): `import.meta.env.VITE_PUBLIC_POSTHOG_*` — inlined at build time.
- **Main** (esbuild via `apps/main/bundle.mjs`): inlined into `main.cjs` at packaging time using esbuild `define`. In dev (`npm run dev`), main reads them from `process.env` at runtime.
For GitHub Actions / packaged builds: set both as workflow env vars (from secrets) on the step that runs `npm run package` or `npm run make`. They'll be baked in.
If unset, analytics no-op silently — you'll see `[Analytics] POSTHOG_KEY not set; analytics disabled` in main-process logs.
`installationId`: stored in `~/.rowboat/config/installation.json`, generated on first run.
## File map
| File | Purpose |
|---|---|
| `packages/core/src/analytics/installation.ts` | Stable per-install distinct_id |
| `packages/core/src/analytics/posthog.ts` | Main-process client (`capture`, `identify`, `reset`, `shutdown`) |
| `packages/core/src/analytics/usage.ts` | `captureLlmUsage()` helper |
| `packages/core/src/analytics/use_case.ts` | `AsyncLocalStorage` for tool-internal LLM call inheritance |
| `apps/renderer/src/lib/analytics.ts` | Renderer event wrappers |
| `apps/renderer/src/hooks/useAnalyticsIdentity.ts` | Renderer identify/reset on OAuth events |
| `apps/main/src/oauth-handler.ts` | Main-side identify/reset/sign-in/sign-out events |
| `apps/main/src/main.ts` | `before-quit` hook flushes queued events |
| `packages/shared/src/ipc.ts` | `analytics:bootstrap` IPC channel definition |
| `apps/main/src/ipc.ts` | `analytics:bootstrap` handler + forwards `userId` on `oauth:didConnect` |
| `apps/main/bundle.mjs` | Bakes `POSTHOG_KEY`/`POSTHOG_HOST` into packaged `main.cjs` |


@@ -31,6 +31,11 @@ await esbuild.build({
   // Replace import.meta.url directly with our polyfill variable
   define: {
     'import.meta.url': '__import_meta_url',
+    // Inject PostHog credentials at build time. Reuse the renderer's
+    // VITE_PUBLIC_* envs so packaging only needs one set of values.
+    // Empty strings disable analytics gracefully.
+    'process.env.POSTHOG_KEY': JSON.stringify(process.env.VITE_PUBLIC_POSTHOG_KEY ?? ''),
+    'process.env.POSTHOG_HOST': JSON.stringify(process.env.VITE_PUBLIC_POSTHOG_HOST ?? 'https://us.i.posthog.com'),
   },
 });


@@ -46,6 +46,8 @@ import { getAccessToken } from '@x/core/dist/auth/tokens.js';
 import { getRowboatConfig } from '@x/core/dist/config/rowboat.js';
 import { triggerTrackUpdate } from '@x/core/dist/knowledge/track/runner.js';
 import { trackBus } from '@x/core/dist/knowledge/track/bus.js';
+import { getInstallationId } from '@x/core/dist/analytics/installation.js';
+import { API_URL } from '@x/core/dist/config/env.js';
 import {
   fetchYaml,
   updateTrackBlock,
@@ -342,7 +344,7 @@ function emitServiceEvent(event: z.infer<typeof ServiceEvent>): void {
   }
 }

-export function emitOAuthEvent(event: { provider: string; success: boolean; error?: string }): void {
+export function emitOAuthEvent(event: { provider: string; success: boolean; error?: string; userId?: string }): void {
   const windows = BrowserWindow.getAllWindows();
   for (const win of windows) {
     if (!win.isDestroyed() && win.webContents) {
@@ -415,6 +417,12 @@ export function setupIpcHandlers() {
       // args is null for this channel (no request payload)
       return getVersions();
     },
+    'analytics:bootstrap': async () => {
+      return {
+        installationId: getInstallationId(),
+        apiUrl: API_URL,
+      };
+    },
     'workspace:getRoot': async () => {
       return workspace.getRoot();
     },


@@ -26,6 +26,8 @@ import { init as initAgentNotes } from "@x/core/dist/knowledge/agent_notes.js";
 import { init as initTrackScheduler } from "@x/core/dist/knowledge/track/scheduler.js";
 import { init as initTrackEventProcessor } from "@x/core/dist/knowledge/track/events.js";
 import { init as initLocalSites, shutdown as shutdownLocalSites } from "@x/core/dist/local-sites/server.js";
+import { shutdown as shutdownAnalytics } from "@x/core/dist/analytics/posthog.js";
+import { identifyIfSignedIn } from "@x/core/dist/analytics/identify.js";
 import { initConfigs } from "@x/core/dist/config/initConfigs.js";
 import started from "electron-squirrel-startup";
@@ -230,6 +232,13 @@ app.whenReady().then(async () => {
   // Initialize all config files before UI can access them
   await initConfigs();

+  // PostHog identify() is idempotent — call it on every startup so existing
+  // signed-in installs (and every cold start of v0.3.4+) get re-identified.
+  // Otherwise main-process events stay anonymous until the user re-signs-in.
+  identifyIfSignedIn().catch((error) => {
+    console.error('[Analytics] Failed to identify on startup:', error);
+  });
+
   registerBrowserControlService(new ElectronBrowserControlService());
   setupIpcHandlers();
@@ -318,4 +327,7 @@ app.on("before-quit", () => {
   shutdownLocalSites().catch((error) => {
     console.error('[LocalSites] Failed to shut down cleanly:', error);
   });
+  shutdownAnalytics().catch((error) => {
+    console.error('[Analytics] Failed to flush on quit:', error);
+  });
 });


@@ -12,6 +12,7 @@ import { triggerSync as triggerCalendarSync } from '@x/core/dist/knowledge/sync_
 import { triggerSync as triggerFirefliesSync } from '@x/core/dist/knowledge/sync_fireflies.js';
 import { emitOAuthEvent } from './ipc.js';
 import { getBillingInfo } from '@x/core/dist/billing/billing.js';
+import { capture as analyticsCapture, identify as analyticsIdentify, reset as analyticsReset } from '@x/core/dist/analytics/posthog.js';

 const REDIRECT_URI = 'http://localhost:8080/oauth/callback';
@@ -275,16 +276,33 @@ export async function connectProvider(provider: string, credentials?: { clientId
     // For Rowboat sign-in, ensure user + Stripe customer exist before
     // notifying the renderer. Without this, parallel API calls from
     // multiple renderer hooks race to create the user, causing duplicates.
+    let signedInUserId: string | undefined;
     if (provider === 'rowboat') {
       try {
-        await getBillingInfo();
+        const billing = await getBillingInfo();
+        if (billing.userId) {
+          signedInUserId = billing.userId;
+          analyticsIdentify(billing.userId, {
+            ...(billing.userEmail ? { email: billing.userEmail } : {}),
+            plan: billing.subscriptionPlan,
+            status: billing.subscriptionStatus,
+          });
+          analyticsCapture('user_signed_in', {
+            plan: billing.subscriptionPlan,
+            status: billing.subscriptionStatus,
+          });
+        }
       } catch (meError) {
         console.error('[OAuth] Failed to initialize user via /v1/me:', meError);
       }
     }

     // Emit success event to renderer
-    emitOAuthEvent({ provider, success: true });
+    emitOAuthEvent({
+      provider,
+      success: true,
+      ...(signedInUserId ? { userId: signedInUserId } : {}),
+    });
   } catch (error) {
     console.error('OAuth token exchange failed:', error);
     // Log cause chain for debugging (e.g. OAUTH_INVALID_RESPONSE -> OperationProcessingError)
@@ -347,6 +365,10 @@ export async function disconnectProvider(provider: string): Promise<{ success: b
   try {
     const oauthRepo = getOAuthRepo();
     await oauthRepo.delete(provider);
+    if (provider === 'rowboat') {
+      analyticsCapture('user_signed_out');
+      analyticsReset();
+    }
     // Notify renderer so sidebar, voice, and billing re-check state
     emitOAuthEvent({ provider, success: false });
     return { success: true };


@@ -49,6 +49,7 @@
     "react": "^19.2.0",
     "react-dom": "^19.2.0",
     "recharts": "^3.8.0",
+    "remark-breaks": "^4.0.0",
     "sonner": "^2.0.7",
     "streamdown": "^1.6.10",
     "tailwind-merge": "^3.4.0",


@@ -62,6 +62,8 @@ import { BrowserPane } from '@/components/browser-pane/BrowserPane'
 import { VersionHistoryPanel } from '@/components/version-history-panel'
 import { FileCardProvider } from '@/contexts/file-card-context'
 import { MarkdownPreOverride } from '@/components/ai-elements/markdown-code-override'
+import { defaultRemarkPlugins } from 'streamdown'
+import remarkBreaks from 'remark-breaks'
 import { TabBar, type ChatTab, type FileTab } from '@/components/tab-bar'
 import {
   type ChatMessage,
@@ -104,6 +106,11 @@ interface TreeNode extends DirEntry {
 const streamdownComponents = { pre: MarkdownPreOverride }

+// Render user messages with markdown so bullets, bold, links, etc. survive the
+// round-trip from the input textarea. `remarkBreaks` turns single newlines
+// into <br> so typed line breaks are preserved without requiring blank lines.
+const userMessageRemarkPlugins = [...Object.values(defaultRemarkPlugins), remarkBreaks]
+
 function SmoothStreamingMessage({ text, components }: { text: string; components: typeof streamdownComponents }) {
   const smoothText = useSmoothedText(text)
   return <MessageResponse components={components}>{smoothText}</MessageResponse>
@@ -127,8 +134,8 @@ const TITLEBAR_BUTTON_PX = 32
 const TITLEBAR_BUTTON_GAP_PX = 4
 const TITLEBAR_HEADER_GAP_PX = 8
 const TITLEBAR_TOGGLE_MARGIN_LEFT_PX = 12
-const TITLEBAR_BUTTONS_COLLAPSED = 4
-const TITLEBAR_BUTTON_GAPS_COLLAPSED = 3
+const TITLEBAR_BUTTONS_COLLAPSED = 1
+const TITLEBAR_BUTTON_GAPS_COLLAPSED = 0
 const GRAPH_TAB_PATH = '__rowboat_graph_view__'
 const SUGGESTED_TOPICS_TAB_PATH = '__rowboat_suggested_topics__'
 const BASES_DEFAULT_TAB_PATH = '__rowboat_bases_default__'
@@ -506,22 +513,13 @@ function viewStatesEqual(a: ViewState, b: ViewState): boolean {
   return true // both graph
 }

-/** Sidebar toggle + utility buttons (fixed position, top-left) */
+/** Sidebar toggle (fixed position, top-left) */
 function FixedSidebarToggle({
-  onNavigateBack,
-  onNavigateForward,
-  canNavigateBack,
-  canNavigateForward,
   leftInsetPx,
 }: {
-  onNavigateBack: () => void
-  onNavigateForward: () => void
-  canNavigateBack: boolean
-  canNavigateForward: boolean
   leftInsetPx: number
 }) {
-  const { toggleSidebar, state } = useSidebar()
-  const isCollapsed = state === "collapsed"
+  const { toggleSidebar } = useSidebar()

   return (
     <div className="fixed left-0 top-0 z-50 flex h-10 items-center" style={{ WebkitAppRegion: 'no-drag' } as React.CSSProperties}>
       <div aria-hidden="true" className="h-10 shrink-0" style={{ width: leftInsetPx }} />
@@ -535,30 +533,6 @@ function FixedSidebarToggle({
       >
         <PanelLeftIcon className="size-5" />
       </button>
-      {/* Back / Forward navigation */}
-      {isCollapsed && (
-        <>
-          <button
-            type="button"
-            onClick={onNavigateBack}
-            disabled={!canNavigateBack}
-            className="flex h-8 w-8 items-center justify-center rounded-md text-muted-foreground hover:bg-accent hover:text-foreground transition-colors disabled:opacity-30 disabled:pointer-events-none"
-            style={{ marginLeft: TITLEBAR_BUTTON_GAP_PX }}
aria-label="Go back"
>
<ChevronLeftIcon className="size-5" />
</button>
<button
type="button"
onClick={onNavigateForward}
disabled={!canNavigateForward}
className="flex h-8 w-8 items-center justify-center rounded-md text-muted-foreground hover:bg-accent hover:text-foreground transition-colors disabled:opacity-30 disabled:pointer-events-none"
aria-label="Go forward"
>
<ChevronRightIcon className="size-5" />
</button>
</>
)}
</div> </div>
) )
} }
@ -850,6 +824,7 @@ function App() {
const chatTabIdCounterRef = useRef(0) const chatTabIdCounterRef = useRef(0)
const newChatTabId = () => `chat-tab-${++chatTabIdCounterRef.current}` const newChatTabId = () => `chat-tab-${++chatTabIdCounterRef.current}`
const chatDraftsRef = useRef(new Map<string, string>()) const chatDraftsRef = useRef(new Map<string, string>())
const selectedModelByTabRef = useRef(new Map<string, { provider: string; model: string }>())
const chatScrollTopByTabRef = useRef(new Map<string, number>()) const chatScrollTopByTabRef = useRef(new Map<string, number>())
const [toolOpenByTab, setToolOpenByTab] = useState<Record<string, Record<string, boolean>>>({}) const [toolOpenByTab, setToolOpenByTab] = useState<Record<string, Record<string, boolean>>>({})
const [chatViewportAnchorByTab, setChatViewportAnchorByTab] = useState<Record<string, ChatViewportAnchorState>>({}) const [chatViewportAnchorByTab, setChatViewportAnchorByTab] = useState<Record<string, ChatViewportAnchorState>>({})
@ -2198,8 +2173,10 @@ function App() {
let isNewRun = false let isNewRun = false
let newRunCreatedAt: string | null = null let newRunCreatedAt: string | null = null
if (!currentRunId) { if (!currentRunId) {
const selected = selectedModelByTabRef.current.get(submitTabId)
const run = await window.ipc.invoke('runs:create', { const run = await window.ipc.invoke('runs:create', {
agentId, agentId,
...(selected ? { model: selected.model, provider: selected.provider } : {}),
}) })
currentRunId = run.id currentRunId = run.id
newRunCreatedAt = run.createdAt newRunCreatedAt = run.createdAt
@ -2504,6 +2481,7 @@ function App() {
return next return next
}) })
chatDraftsRef.current.delete(tabId) chatDraftsRef.current.delete(tabId)
selectedModelByTabRef.current.delete(tabId)
chatScrollTopByTabRef.current.delete(tabId) chatScrollTopByTabRef.current.delete(tabId)
setToolOpenByTab((prev) => { setToolOpenByTab((prev) => {
if (!(tabId in prev)) return prev if (!(tabId in prev)) return prev
@ -4003,7 +3981,14 @@ function App() {
<ChatMessageAttachments attachments={item.attachments} /> <ChatMessageAttachments attachments={item.attachments} />
</MessageContent> </MessageContent>
{item.content && ( {item.content && (
<MessageContent>{item.content}</MessageContent> <MessageContent>
<MessageResponse
components={streamdownComponents}
remarkPlugins={userMessageRemarkPlugins}
>
{item.content}
</MessageResponse>
</MessageContent>
)} )}
</Message> </Message>
) )
@ -4024,7 +4009,12 @@ function App() {
))} ))}
</div> </div>
)} )}
{message} <MessageResponse
components={streamdownComponents}
remarkPlugins={userMessageRemarkPlugins}
>
{message}
</MessageResponse>
</MessageContent> </MessageContent>
</Message> </Message>
) )
@ -4677,6 +4667,13 @@ function App() {
runId={tabState.runId} runId={tabState.runId}
initialDraft={chatDraftsRef.current.get(tab.id)} initialDraft={chatDraftsRef.current.get(tab.id)}
onDraftChange={(text) => setChatDraftForTab(tab.id, text)} onDraftChange={(text) => setChatDraftForTab(tab.id, text)}
onSelectedModelChange={(m) => {
if (m) {
selectedModelByTabRef.current.set(tab.id, m)
} else {
selectedModelByTabRef.current.delete(tab.id)
}
}}
isRecording={isActive && isRecording} isRecording={isActive && isRecording}
recordingText={isActive ? voice.interimText : undefined} recordingText={isActive ? voice.interimText : undefined}
recordingState={isActive ? (voice.state === 'connecting' ? 'connecting' : 'listening') : undefined} recordingState={isActive ? (voice.state === 'connecting' ? 'connecting' : 'listening') : undefined}
@ -4730,6 +4727,13 @@ function App() {
onPresetMessageConsumed={() => setPresetMessage(undefined)} onPresetMessageConsumed={() => setPresetMessage(undefined)}
getInitialDraft={(tabId) => chatDraftsRef.current.get(tabId)} getInitialDraft={(tabId) => chatDraftsRef.current.get(tabId)}
onDraftChangeForTab={setChatDraftForTab} onDraftChangeForTab={setChatDraftForTab}
onSelectedModelChangeForTab={(tabId, m) => {
if (m) {
selectedModelByTabRef.current.set(tabId, m)
} else {
selectedModelByTabRef.current.delete(tabId)
}
}}
pendingAskHumanRequests={pendingAskHumanRequests} pendingAskHumanRequests={pendingAskHumanRequests}
allPermissionRequests={allPermissionRequests} allPermissionRequests={allPermissionRequests}
permissionResponses={permissionResponses} permissionResponses={permissionResponses}
@ -4756,10 +4760,6 @@ function App() {
)} )}
{/* Rendered last so its no-drag region paints over the sidebar drag region */} {/* Rendered last so its no-drag region paints over the sidebar drag region */}
<FixedSidebarToggle <FixedSidebarToggle
onNavigateBack={() => { void navigateBack() }}
onNavigateForward={() => { void navigateForward() }}
canNavigateBack={canNavigateBack}
canNavigateForward={canNavigateForward}
leftInsetPx={isMac ? MACOS_TRAFFIC_LIGHTS_RESERVED_PX : 0} leftInsetPx={isMac ? MACOS_TRAFFIC_LIGHTS_RESERVED_PX : 0}
/> />
</SidebarProvider> </SidebarProvider>
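The per-tab bookkeeping in the hunks above can be sketched in isolation: each chat tab remembers at most one { provider, model } choice, the first `runs:create` for that tab consumes it, and closing the tab discards it. A minimal sketch with simplified types; `TabModelSelections` is a name invented for this sketch, not from the diff:

```typescript
interface SelectedModel { provider: string; model: string }

class TabModelSelections {
  private byTab = new Map<string, SelectedModel>()

  // Mirrors onSelectedModelChange: null clears the tab's override.
  onChange(tabId: string, m: SelectedModel | null): void {
    if (m) this.byTab.set(tabId, m)
    else this.byTab.delete(tabId)
  }

  // Mirrors the runs:create call site: spread the override only when present,
  // so the backend default model applies if the user never touched the dropdown.
  runCreatePayload(tabId: string, agentId: string): Record<string, string> {
    const selected = this.byTab.get(tabId)
    return { agentId, ...(selected ? { model: selected.model, provider: selected.provider } : {}) }
  }

  closeTab(tabId: string): void {
    this.byTab.delete(tabId)
  }
}
```

The conditional spread is why tabs with no selection send a payload containing only `agentId`, leaving model resolution entirely to the main process.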

View file

@@ -69,13 +69,20 @@ const providerDisplayNames: Record<string, string> = {
   rowboat: 'Rowboat',
 }
 
+type ProviderName = "openai" | "anthropic" | "google" | "openrouter" | "aigateway" | "ollama" | "openai-compatible" | "rowboat"
+
 interface ConfiguredModel {
-  flavor: "openai" | "anthropic" | "google" | "openrouter" | "aigateway" | "ollama" | "openai-compatible" | "rowboat"
+  provider: ProviderName
   model: string
-  apiKey?: string
-  baseURL?: string
-  headers?: Record<string, string>
-  knowledgeGraphModel?: string
+}
+
+export interface SelectedModel {
+  provider: string
+  model: string
+}
+
+function getSelectedModelDisplayName(model: string) {
+  return model.split('/').pop() || model
 }
 
 function getAttachmentIcon(kind: AttachmentIconKind) {
@@ -120,6 +127,8 @@ interface ChatInputInnerProps {
   ttsMode?: 'summary' | 'full'
   onToggleTts?: () => void
   onTtsModeChange?: (mode: 'summary' | 'full') => void
+  /** Fired when the user picks a different model in the dropdown (only when no run exists yet). */
+  onSelectedModelChange?: (model: SelectedModel | null) => void
 }
 
 function ChatInputInner({
@@ -145,6 +154,7 @@ function ChatInputInner({
   ttsMode,
   onToggleTts,
   onTtsModeChange,
+  onSelectedModelChange,
 }: ChatInputInnerProps) {
   const controller = usePromptInputController()
   const message = controller.textInput.value
@@ -155,10 +165,27 @@ function ChatInputInner({
   const [configuredModels, setConfiguredModels] = useState<ConfiguredModel[]>([])
   const [activeModelKey, setActiveModelKey] = useState('')
+  const [lockedModel, setLockedModel] = useState<SelectedModel | null>(null)
   const [searchEnabled, setSearchEnabled] = useState(false)
   const [searchAvailable, setSearchAvailable] = useState(false)
   const [isRowboatConnected, setIsRowboatConnected] = useState(false)
 
+  // When a run exists, freeze the dropdown to the run's resolved model+provider.
+  useEffect(() => {
+    if (!runId) {
+      setLockedModel(null)
+      return
+    }
+    let cancelled = false
+    window.ipc.invoke('runs:fetch', { runId }).then((run) => {
+      if (cancelled) return
+      if (run.provider && run.model) {
+        setLockedModel({ provider: run.provider, model: run.model })
+      }
+    }).catch(() => { /* legacy run or fetch failure — leave unlocked */ })
+    return () => { cancelled = true }
+  }, [runId])
+
   // Check Rowboat sign-in state
   useEffect(() => {
     window.ipc.invoke('oauth:getState', null).then((result) => {
@@ -176,42 +203,20 @@ function ChatInputInner({
     return cleanup
   }, [])
 
-  // Load model config (gateway when signed in, local config when BYOK)
+  // Load the list of models the user can choose from.
+  // Signed-in: gateway model list. Signed-out: providers configured in models.json.
   const loadModelConfig = useCallback(async () => {
     try {
       if (isRowboatConnected) {
-        // Fetch gateway models
         const listResult = await window.ipc.invoke('models:list', null)
         const rowboatProvider = listResult.providers?.find(
           (p: { id: string }) => p.id === 'rowboat'
         )
         const models: ConfiguredModel[] = (rowboatProvider?.models || []).map(
-          (m: { id: string }) => ({ flavor: 'rowboat', model: m.id })
+          (m: { id: string }) => ({ provider: 'rowboat', model: m.id })
         )
-        // Read current default from config
-        let defaultModel = ''
-        try {
-          const result = await window.ipc.invoke('workspace:readFile', { path: 'config/models.json' })
-          const parsed = JSON.parse(result.data)
-          defaultModel = parsed?.model || ''
-        } catch { /* no config yet */ }
-        if (defaultModel) {
-          models.sort((a, b) => {
-            if (a.model === defaultModel) return -1
-            if (b.model === defaultModel) return 1
-            return 0
-          })
-        }
         setConfiguredModels(models)
-        const activeKey = defaultModel
-          ? `rowboat/${defaultModel}`
-          : models[0] ? `rowboat/${models[0].model}` : ''
-        if (activeKey) setActiveModelKey(activeKey)
       } else {
-        // BYOK: read from local models.json
         const result = await window.ipc.invoke('workspace:readFile', { path: 'config/models.json' })
         const parsed = JSON.parse(result.data)
         const models: ConfiguredModel[] = []
@@ -223,32 +228,12 @@ function ChatInputInner({
         const allModels = modelList.length > 0 ? modelList : singleModel ? [singleModel] : []
         for (const model of allModels) {
           if (model) {
-            models.push({
-              flavor: flavor as ConfiguredModel['flavor'],
-              model,
-              apiKey: (e.apiKey as string) || undefined,
-              baseURL: (e.baseURL as string) || undefined,
-              headers: (e.headers as Record<string, string>) || undefined,
-              knowledgeGraphModel: (e.knowledgeGraphModel as string) || undefined,
-            })
+            models.push({ provider: flavor as ProviderName, model })
           }
         }
       }
     }
-        const defaultKey = parsed?.provider?.flavor && parsed?.model
-          ? `${parsed.provider.flavor}/${parsed.model}`
-          : ''
-        models.sort((a, b) => {
-          const aKey = `${a.flavor}/${a.model}`
-          const bKey = `${b.flavor}/${b.model}`
-          if (aKey === defaultKey) return -1
-          if (bKey === defaultKey) return 1
-          return 0
-        })
         setConfiguredModels(models)
-        if (defaultKey) {
-          setActiveModelKey(defaultKey)
-        }
       }
     } catch {
       // No config yet
@@ -284,40 +269,15 @@ function ChatInputInner({
     checkSearch()
   }, [isActive, isRowboatConnected])
 
-  const handleModelChange = useCallback(async (key: string) => {
-    const entry = configuredModels.find((m) => `${m.flavor}/${m.model}` === key)
+  // Selecting a model affects only the *next* run created from this tab.
+  // Once a run exists, model is frozen on the run and the dropdown is read-only.
+  const handleModelChange = useCallback((key: string) => {
+    if (lockedModel) return
+    const entry = configuredModels.find((m) => `${m.provider}/${m.model}` === key)
     if (!entry) return
     setActiveModelKey(key)
-    try {
-      if (entry.flavor === 'rowboat') {
-        // Gateway model — save with valid Zod flavor, no credentials
-        await window.ipc.invoke('models:saveConfig', {
-          provider: { flavor: 'openrouter' as const },
-          model: entry.model,
-          knowledgeGraphModel: entry.knowledgeGraphModel,
-        })
-      } else {
-        // BYOK — preserve full provider config
-        const providerModels = configuredModels
-          .filter((m) => m.flavor === entry.flavor)
-          .map((m) => m.model)
-        await window.ipc.invoke('models:saveConfig', {
-          provider: {
-            flavor: entry.flavor,
-            apiKey: entry.apiKey,
-            baseURL: entry.baseURL,
-            headers: entry.headers,
-          },
-          model: entry.model,
-          models: providerModels,
-          knowledgeGraphModel: entry.knowledgeGraphModel,
-        })
-      }
-    } catch {
-      toast.error('Failed to switch model')
-    }
-  }, [configuredModels])
+    onSelectedModelChange?.({ provider: entry.provider, model: entry.model })
+  }, [configuredModels, lockedModel, onSelectedModelChange])
 
   // Restore the tab draft when this input mounts.
   useEffect(() => {
@@ -555,7 +515,14 @@ function ChatInputInner({
     )
   )}
   <div className="flex-1" />
-  {configuredModels.length > 0 && (
+  {lockedModel ? (
+    <span
+      className="flex h-7 shrink-0 items-center gap-1 rounded-full px-2 text-xs text-muted-foreground"
+      title={`${providerDisplayNames[lockedModel.provider] || lockedModel.provider} — fixed for this chat`}
+    >
+      <span className="max-w-[150px] truncate">{getSelectedModelDisplayName(lockedModel.model)}</span>
+    </span>
+  ) : configuredModels.length > 0 ? (
     <DropdownMenu>
       <DropdownMenuTrigger asChild>
         <button
@@ -563,7 +530,7 @@ function ChatInputInner({
           className="flex h-7 shrink-0 items-center gap-1 rounded-full px-2 text-xs text-muted-foreground transition-colors hover:bg-muted hover:text-foreground"
         >
           <span className="max-w-[150px] truncate">
-            {configuredModels.find((m) => `${m.flavor}/${m.model}` === activeModelKey)?.model || configuredModels[0]?.model || 'Model'}
+            {getSelectedModelDisplayName(configuredModels.find((m) => `${m.provider}/${m.model}` === activeModelKey)?.model || configuredModels[0]?.model || 'Model')}
           </span>
           <ChevronDown className="h-3 w-3" />
         </button>
@@ -571,18 +538,18 @@ function ChatInputInner({
       <DropdownMenuContent align="end">
         <DropdownMenuRadioGroup value={activeModelKey} onValueChange={handleModelChange}>
           {configuredModels.map((m) => {
-            const key = `${m.flavor}/${m.model}`
+            const key = `${m.provider}/${m.model}`
             return (
               <DropdownMenuRadioItem key={key} value={key}>
                 <span className="truncate">{m.model}</span>
-                <span className="ml-2 text-xs text-muted-foreground">{providerDisplayNames[m.flavor] || m.flavor}</span>
+                <span className="ml-2 text-xs text-muted-foreground">{providerDisplayNames[m.provider] || m.provider}</span>
               </DropdownMenuRadioItem>
             )
           })}
         </DropdownMenuRadioGroup>
       </DropdownMenuContent>
     </DropdownMenu>
-  )}
+  ) : null}
   {onToggleTts && ttsAvailable && (
     <div className="flex shrink-0 items-center">
       <Tooltip>
@@ -729,6 +696,7 @@ export interface ChatInputWithMentionsProps {
   ttsMode?: 'summary' | 'full'
   onToggleTts?: () => void
   onTtsModeChange?: (mode: 'summary' | 'full') => void
+  onSelectedModelChange?: (model: SelectedModel | null) => void
 }
 
 export function ChatInputWithMentions({
@@ -757,6 +725,7 @@ export function ChatInputWithMentions({
   ttsMode,
   onToggleTts,
   onTtsModeChange,
+  onSelectedModelChange,
 }: ChatInputWithMentionsProps) {
   return (
     <PromptInputProvider knowledgeFiles={knowledgeFiles} recentFiles={recentFiles} visibleFiles={visibleFiles}>
@@ -783,6 +752,7 @@ export function ChatInputWithMentions({
       ttsMode={ttsMode}
       onToggleTts={onToggleTts}
       onTtsModeChange={onTtsModeChange}
+      onSelectedModelChange={onSelectedModelChange}
     />
   </PromptInputProvider>
 )

View file

@@ -25,8 +25,10 @@ import { Suggestions } from '@/components/ai-elements/suggestions'
 import { type PromptInputMessage, type FileMention } from '@/components/ai-elements/prompt-input'
 import { FileCardProvider } from '@/contexts/file-card-context'
 import { MarkdownPreOverride } from '@/components/ai-elements/markdown-code-override'
+import { defaultRemarkPlugins } from 'streamdown'
+import remarkBreaks from 'remark-breaks'
 import { TabBar, type ChatTab } from '@/components/tab-bar'
-import { ChatInputWithMentions, type StagedAttachment } from '@/components/chat-input-with-mentions'
+import { ChatInputWithMentions, type StagedAttachment, type SelectedModel } from '@/components/chat-input-with-mentions'
 import { ChatMessageAttachments } from '@/components/chat-message-attachments'
 import { wikiLabel } from '@/lib/wiki-links'
 import {
@@ -49,6 +51,11 @@ import {
 const streamdownComponents = { pre: MarkdownPreOverride }
 
+// Render user messages with markdown so bullets, bold, links, etc. survive the
+// round-trip from the input textarea. `remarkBreaks` turns single newlines
+// into <br> so typed line breaks are preserved without requiring blank lines.
+const userMessageRemarkPlugins = [...Object.values(defaultRemarkPlugins), remarkBreaks]
+
 /* ─── Billing error helpers ─── */
 
 const BILLING_ERROR_PATTERNS = [
@@ -158,6 +165,7 @@ interface ChatSidebarProps {
   onPresetMessageConsumed?: () => void
   getInitialDraft?: (tabId: string) => string | undefined
   onDraftChangeForTab?: (tabId: string, text: string) => void
+  onSelectedModelChangeForTab?: (tabId: string, model: SelectedModel | null) => void
   pendingAskHumanRequests?: ChatTabViewState['pendingAskHumanRequests']
   allPermissionRequests?: ChatTabViewState['allPermissionRequests']
   permissionResponses?: ChatTabViewState['permissionResponses']
@@ -211,6 +219,7 @@ export function ChatSidebar({
   onPresetMessageConsumed,
   getInitialDraft,
   onDraftChangeForTab,
+  onSelectedModelChangeForTab,
   pendingAskHumanRequests = new Map(),
   allPermissionRequests = new Map(),
   permissionResponses = new Map(),
@@ -351,7 +360,14 @@ export function ChatSidebar({
     <ChatMessageAttachments attachments={item.attachments} />
   </MessageContent>
   {item.content && (
-    <MessageContent>{item.content}</MessageContent>
+    <MessageContent>
+      <MessageResponse
+        components={streamdownComponents}
+        remarkPlugins={userMessageRemarkPlugins}
+      >
+        {item.content}
+      </MessageResponse>
+    </MessageContent>
   )}
 </Message>
 )
@@ -372,7 +388,12 @@ export function ChatSidebar({
     ))}
   </div>
 )}
-{message}
+<MessageResponse
+  components={streamdownComponents}
+  remarkPlugins={userMessageRemarkPlugins}
+>
+  {message}
+</MessageResponse>
 </MessageContent>
 </Message>
 )
@@ -662,6 +683,7 @@ export function ChatSidebar({
   runId={tabState.runId}
   initialDraft={getInitialDraft?.(tab.id)}
   onDraftChange={onDraftChangeForTab ? (text) => onDraftChangeForTab(tab.id, text) : undefined}
+  onSelectedModelChange={onSelectedModelChangeForTab ? (m) => onSelectedModelChangeForTab(tab.id, m) : undefined}
   isRecording={isActive && isRecording}
   recordingText={isActive ? recordingText : undefined}
   recordingState={isActive ? recordingState : undefined}

View file

@@ -59,14 +59,14 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
   const [modelsCatalog, setModelsCatalog] = useState<Record<string, LlmModelOption[]>>({})
   const [modelsLoading, setModelsLoading] = useState(false)
   const [modelsError, setModelsError] = useState<string | null>(null)
-  const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string }>>({
-    openai: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "" },
-    anthropic: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "" },
-    google: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "" },
-    openrouter: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "" },
-    aigateway: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "" },
-    ollama: { apiKey: "", baseURL: "http://localhost:11434", model: "", knowledgeGraphModel: "" },
-    "openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", model: "", knowledgeGraphModel: "" },
+  const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>>({
+    openai: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
+    anthropic: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
+    google: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
+    openrouter: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
+    aigateway: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
+    ollama: { apiKey: "", baseURL: "http://localhost:11434", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
+    "openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
   })
   const [testState, setTestState] = useState<{ status: "idle" | "testing" | "success" | "error"; error?: string }>({
     status: "idle",
@@ -109,7 +109,7 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
   const [googleCalendarConnecting, setGoogleCalendarConnecting] = useState(false)
 
   const updateProviderConfig = useCallback(
-    (provider: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string }>) => {
+    (provider: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>) => {
       setProviderConfigs(prev => ({
         ...prev,
         [provider]: { ...prev[provider], ...updates },
@@ -458,6 +458,8 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
       const baseURL = activeConfig.baseURL.trim() || undefined
       const model = activeConfig.model.trim()
       const knowledgeGraphModel = activeConfig.knowledgeGraphModel.trim() || undefined
+      const meetingNotesModel = activeConfig.meetingNotesModel.trim() || undefined
+      const trackBlockModel = activeConfig.trackBlockModel.trim() || undefined
       const providerConfig = {
         provider: {
           flavor: llmProvider,
@@ -466,6 +468,8 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
         },
         model,
         knowledgeGraphModel,
+        meetingNotesModel,
+        trackBlockModel,
       }
       const result = await window.ipc.invoke("models:test", providerConfig)
       if (result.success) {
@@ -1157,6 +1161,72 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
       </Select>
     )}
   </div>
+  <div className="space-y-2">
+    <span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Meeting notes model</span>
+    {modelsLoading ? (
+      <div className="flex items-center gap-2 text-sm text-muted-foreground">
+        <Loader2 className="size-4 animate-spin" />
+        Loading...
+      </div>
+    ) : showModelInput ? (
+      <Input
+        value={activeConfig.meetingNotesModel}
+        onChange={(e) => updateProviderConfig(llmProvider, { meetingNotesModel: e.target.value })}
+        placeholder={activeConfig.model || "Enter model"}
+      />
+    ) : (
+      <Select
+        value={activeConfig.meetingNotesModel || "__same__"}
+        onValueChange={(value) => updateProviderConfig(llmProvider, { meetingNotesModel: value === "__same__" ? "" : value })}
+      >
+        <SelectTrigger>
+          <SelectValue placeholder="Select a model" />
+        </SelectTrigger>
+        <SelectContent>
+          <SelectItem value="__same__">Same as assistant</SelectItem>
+          {modelsForProvider.map((model) => (
+            <SelectItem key={model.id} value={model.id}>
+              {model.name || model.id}
+            </SelectItem>
+          ))}
+        </SelectContent>
+      </Select>
+    )}
+  </div>
+  <div className="space-y-2">
+    <span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Track block model</span>
+    {modelsLoading ? (
+      <div className="flex items-center gap-2 text-sm text-muted-foreground">
+        <Loader2 className="size-4 animate-spin" />
+        Loading...
+      </div>
+    ) : showModelInput ? (
+      <Input
+        value={activeConfig.trackBlockModel}
+        onChange={(e) => updateProviderConfig(llmProvider, { trackBlockModel: e.target.value })}
+        placeholder={activeConfig.model || "Enter model"}
+      />
+    ) : (
+      <Select
+        value={activeConfig.trackBlockModel || "__same__"}
+        onValueChange={(value) => updateProviderConfig(llmProvider, { trackBlockModel: value === "__same__" ? "" : value })}
+      >
+        <SelectTrigger>
+          <SelectValue placeholder="Select a model" />
+        </SelectTrigger>
+        <SelectContent>
+          <SelectItem value="__same__">Same as assistant</SelectItem>
+          {modelsForProvider.map((model) => (
+            <SelectItem key={model.id} value={model.id}>
+              {model.name || model.id}
+            </SelectItem>
+          ))}
+        </SelectContent>
+      </Select>
+    )}
+  </div>
 </div>
 {showApiKey && (
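The hunks above normalize empty use-case fields with `.trim() || undefined` before the config is saved, which implies a fallback to the main assistant model at call time. A minimal sketch of that resolution under stated assumptions; `resolveUseCaseModel` is a name invented for this sketch, not from the diff:

```typescript
// Simplified shape of the saved config: use-case models are optional and
// absent means "Same as assistant".
interface ModelConfig {
  model: string
  knowledgeGraphModel?: string
  meetingNotesModel?: string
  trackBlockModel?: string
}

// Mirrors `activeConfig.meetingNotesModel.trim() || undefined`: a blank or
// whitespace-only field is stored as undefined rather than "".
function normalize(field: string): string | undefined {
  return field.trim() || undefined
}

// Assumed resolution rule: an unset use-case model falls back to the main model.
function resolveUseCaseModel(
  cfg: ModelConfig,
  useCase: 'knowledgeGraph' | 'meetingNotes' | 'trackBlock'
): string {
  const overrides = {
    knowledgeGraph: cfg.knowledgeGraphModel,
    meetingNotes: cfg.meetingNotesModel,
    trackBlock: cfg.trackBlockModel,
  }
  return overrides[useCase] ?? cfg.model
}
```

Normalizing before saving keeps the fallback a simple `??` check: no downstream consumer has to treat `""` and `undefined` as the same state.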

View file

@ -221,6 +221,76 @@ export function LlmSetupStep({ state }: LlmSetupStepProps) {
</Select> </Select>
)} )}
</div> </div>
<div className="space-y-2 min-w-0">
<label className="text-xs font-medium text-muted-foreground">
Meeting Notes Model
</label>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.meetingNotesModel}
onChange={(e) => updateProviderConfig(llmProvider, { meetingNotesModel: e.target.value })}
placeholder={activeConfig.model || "Enter model"}
/>
) : (
<Select
value={activeConfig.meetingNotesModel || "__same__"}
onValueChange={(value) => updateProviderConfig(llmProvider, { meetingNotesModel: value === "__same__" ? "" : value })}
>
<SelectTrigger className="w-full truncate">
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((model) => (
<SelectItem key={model.id} value={model.id}>
{model.name || model.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
<div className="space-y-2 min-w-0">
<label className="text-xs font-medium text-muted-foreground">
Track Block Model
</label>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.trackBlockModel}
onChange={(e) => updateProviderConfig(llmProvider, { trackBlockModel: e.target.value })}
placeholder={activeConfig.model || "Enter model"}
/>
) : (
<Select
value={activeConfig.trackBlockModel || "__same__"}
onValueChange={(value) => updateProviderConfig(llmProvider, { trackBlockModel: value === "__same__" ? "" : value })}
>
<SelectTrigger className="w-full truncate">
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((model) => (
<SelectItem key={model.id} value={model.id}>
{model.name || model.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
</div>
{showApiKey && (

View file

@ -29,14 +29,14 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
const [modelsCatalog, setModelsCatalog] = useState<Record<string, LlmModelOption[]>>({})
const [modelsLoading, setModelsLoading] = useState(false)
const [modelsError, setModelsError] = useState<string | null>(null)
const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>>({
openai: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
anthropic: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
google: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
openrouter: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
aigateway: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
ollama: { apiKey: "", baseURL: "http://localhost:11434", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
"openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
})
const [testState, setTestState] = useState<{ status: "idle" | "testing" | "success" | "error"; error?: string }>({
status: "idle",
@ -81,7 +81,7 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
const [googleCalendarConnecting, setGoogleCalendarConnecting] = useState(false)
const updateProviderConfig = useCallback(
(provider: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>) => {
setProviderConfigs(prev => ({
...prev,
[provider]: { ...prev[provider], ...updates },
@ -435,6 +435,8 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
const baseURL = activeConfig.baseURL.trim() || undefined
const model = activeConfig.model.trim()
const knowledgeGraphModel = activeConfig.knowledgeGraphModel.trim() || undefined
const meetingNotesModel = activeConfig.meetingNotesModel.trim() || undefined
const trackBlockModel = activeConfig.trackBlockModel.trim() || undefined
const providerConfig = {
provider: {
flavor: llmProvider,
@ -443,6 +445,8 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
},
model,
knowledgeGraphModel,
meetingNotesModel,
trackBlockModel,
}
const result = await window.ipc.invoke("models:test", providerConfig)
if (result.success) {
@ -459,7 +463,7 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
setTestState({ status: "error", error: "Connection test failed" })
toast.error("Connection test failed")
}
}, [activeConfig.apiKey, activeConfig.baseURL, activeConfig.model, activeConfig.knowledgeGraphModel, activeConfig.meetingNotesModel, activeConfig.trackBlockModel, canTest, llmProvider, handleNext])

// Check connection status for all providers
const refreshAllStatuses = useCallback(async () => {

View file

@ -196,14 +196,14 @@ const defaultBaseURLs: Partial<Record<LlmProviderFlavor, string>> = {
function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
const [provider, setProvider] = useState<LlmProviderFlavor>("openai")
const [defaultProvider, setDefaultProvider] = useState<LlmProviderFlavor | null>(null)
const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; models: string[]; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>>({
openai: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
anthropic: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
google: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
openrouter: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
aigateway: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
ollama: { apiKey: "", baseURL: "http://localhost:11434", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
"openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
})
const [modelsCatalog, setModelsCatalog] = useState<Record<string, LlmModelOption[]>>({})
const [modelsLoading, setModelsLoading] = useState(false)
@ -229,7 +229,7 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
(!requiresBaseURL || activeConfig.baseURL.trim().length > 0)

const updateConfig = useCallback(
(prov: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; models: string[]; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>) => {
setProviderConfigs(prev => ({
...prev,
[prov]: { ...prev[prov], ...updates },
@ -302,6 +302,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
baseURL: e.baseURL || (defaultBaseURLs[key as LlmProviderFlavor] || ""),
models: savedModels,
knowledgeGraphModel: e.knowledgeGraphModel || "",
meetingNotesModel: e.meetingNotesModel || "",
trackBlockModel: e.trackBlockModel || "",
};
}
}
@ -318,6 +320,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
baseURL: parsed.provider.baseURL || (defaultBaseURLs[flavor] || ""),
models: activeModels.length > 0 ? activeModels : [""],
knowledgeGraphModel: parsed.knowledgeGraphModel || "",
meetingNotesModel: parsed.meetingNotesModel || "",
trackBlockModel: parsed.trackBlockModel || "",
};
}
return next;
@ -391,6 +395,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
model: allModels[0] || "",
models: allModels,
knowledgeGraphModel: activeConfig.knowledgeGraphModel.trim() || undefined,
meetingNotesModel: activeConfig.meetingNotesModel.trim() || undefined,
trackBlockModel: activeConfig.trackBlockModel.trim() || undefined,
}
const result = await window.ipc.invoke("models:test", providerConfig)
if (result.success) {
@ -423,6 +429,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
model: allModels[0],
models: allModels,
knowledgeGraphModel: config.knowledgeGraphModel.trim() || undefined,
meetingNotesModel: config.meetingNotesModel.trim() || undefined,
trackBlockModel: config.trackBlockModel.trim() || undefined,
})
setDefaultProvider(prov)
window.dispatchEvent(new Event('models-config-changed'))
@ -452,6 +460,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
parsed.model = defModels[0] || ""
parsed.models = defModels
parsed.knowledgeGraphModel = defConfig.knowledgeGraphModel.trim() || undefined
parsed.meetingNotesModel = defConfig.meetingNotesModel.trim() || undefined
parsed.trackBlockModel = defConfig.trackBlockModel.trim() || undefined
}
await window.ipc.invoke("workspace:writeFile", {
path: "config/models.json",
@ -459,7 +469,7 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
})
setProviderConfigs(prev => ({
...prev,
[prov]: { apiKey: "", baseURL: defaultBaseURLs[prov] || "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
}))
setTestState({ status: "idle" })
window.dispatchEvent(new Event('models-config-changed'))
@ -649,6 +659,74 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
</Select>
)}
</div>
{/* Meeting notes model */}
<div className="space-y-2">
<span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Meeting notes model</span>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.meetingNotesModel}
onChange={(e) => updateConfig(provider, { meetingNotesModel: e.target.value })}
placeholder={primaryModel || "Enter model"}
/>
) : (
<Select
value={activeConfig.meetingNotesModel || "__same__"}
onValueChange={(value) => updateConfig(provider, { meetingNotesModel: value === "__same__" ? "" : value })}
>
<SelectTrigger>
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((m) => (
<SelectItem key={m.id} value={m.id}>
{m.name || m.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
{/* Track block model */}
<div className="space-y-2">
<span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Track block model</span>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.trackBlockModel}
onChange={(e) => updateConfig(provider, { trackBlockModel: e.target.value })}
placeholder={primaryModel || "Enter model"}
/>
) : (
<Select
value={activeConfig.trackBlockModel || "__same__"}
onValueChange={(value) => updateConfig(provider, { trackBlockModel: value === "__same__" ? "" : value })}
>
<SelectTrigger>
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((m) => (
<SelectItem key={m.id} value={m.id}>
{m.name || m.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
</div>
{/* API Key */}

View file

@ -156,6 +156,8 @@ export function TrackModal() {
const lastRunAt = track?.lastRunAt ?? ''
const lastRunId = track?.lastRunId ?? ''
const lastRunSummary = track?.lastRunSummary ?? ''
const model = track?.model ?? ''
const provider = track?.provider ?? ''
const scheduleSummary = useMemo(() => summarizeSchedule(schedule), [schedule])
const triggerType: 'scheduled' | 'event' | 'manual' =
schedule ? 'scheduled' : eventMatchCriteria ? 'event' : 'manual'
@ -393,6 +395,12 @@ export function TrackModal() {
<dt>Track ID</dt><dd><code>{trackId}</code></dd>
<dt>File</dt><dd><code>{detail.filePath}</code></dd>
<dt>Status</dt><dd>{active ? 'Active' : 'Paused'}</dd>
{model && (<>
<dt>Model</dt><dd><code>{model}</code></dd>
</>)}
{provider && (<>
<dt>Provider</dt><dd><code>{provider}</code></dd>
</>)}
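For context, the head of this compare describes the track block YAML gaining optional `model` and `provider` fields that this modal now displays. A hypothetical track file using them might look like the following (all field names other than `model`/`provider` are assumptions for illustration):

```yaml
# Hypothetical track block; only `model` and `provider` are confirmed
# by this changeset, the rest is illustrative.
id: weekly-digest
schedule: "0 9 * * MON"
# Optional per-track overrides: when set, this track's runs use the named
# model/provider instead of the global track-block default.
model: claude-sonnet-4.6
provider: anthropic
```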
{lastRunAt && (<>
<dt>Last run</dt><dd>{formatDateTime(lastRunAt)}</dd>
</>)}

View file

@ -58,15 +58,29 @@ export function useAnalyticsIdentity() {
// Listen for OAuth connect/disconnect events to update identity
useEffect(() => {
const cleanup = window.ipc.on('oauth:didConnect', (event) => {
if (event.provider !== 'rowboat') {
// Other providers: just toggle the connection flag
if (event.success) {
posthog.people.set({ [`${event.provider}_connected`]: true })
}
return
}
// Rowboat sign-in
if (event.success) {
if (event.userId) {
posthog.identify(event.userId)
}
posthog.people.set({ signed_in: true, rowboat_connected: true })
posthog.capture('user_signed_in')
return
}
// Rowboat sign-out — flip flags, capture, and reset distinct_id so
// future events on this device don't get attributed to the prior user.
posthog.people.set({ signed_in: false, rowboat_connected: false })
posthog.capture('user_signed_out')
posthog.reset()
})
return cleanup

View file

@ -2,20 +2,45 @@ import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import './index.css'
import App from './App.tsx'
import posthog from 'posthog-js'
import { PostHogProvider } from 'posthog-js/react'
import { ThemeProvider } from '@/contexts/theme-context'

// Fetch the stable installation ID from main so renderer + main share one
// PostHog distinct_id. Falls back to PostHog's auto-generated anonymous ID
// if the IPC call fails (rare — main is always up before renderer).
async function bootstrap() {
let installationId: string | undefined
let apiUrl: string | undefined
try {
const result = await window.ipc.invoke('analytics:bootstrap', null)
installationId = result.installationId
apiUrl = result.apiUrl
} catch (err) {
console.error('[Analytics] Failed to bootstrap from main:', err)
}

const options = {
api_host: import.meta.env.VITE_PUBLIC_POSTHOG_HOST,
defaults: '2025-11-30',
...(installationId ? { bootstrap: { distinctID: installationId } } : {}),
} as const

createRoot(document.getElementById('root')!).render(
<StrictMode>
<PostHogProvider apiKey={import.meta.env.VITE_PUBLIC_POSTHOG_KEY} options={options}>
<ThemeProvider defaultTheme="system">
<App />
</ThemeProvider>
</PostHogProvider>
</StrictMode>,
)

// Tag the active person record with api_url so anonymous users are also
// segmentable by environment.
if (apiUrl) {
posthog.people.set({ api_url: apiUrl })
}
}

bootstrap()
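The conditional spread in the `options` object is what keeps the bootstrapped distinct_id optional: when the IPC call fails, no `bootstrap` key is present and posthog-js generates its own anonymous id. A pure sketch of that shape (the helper and its types are hypothetical, not part of the diff):

```typescript
// Hypothetical helper mirroring how the renderer builds its PostHog options.
type BootstrapResult = { installationId?: string; apiUrl?: string };

function buildPosthogOptions(apiHost: string, boot: BootstrapResult) {
  return {
    api_host: apiHost,
    defaults: "2025-11-30",
    // Only pin distinct_id when main supplied an installation id; otherwise
    // omit the `bootstrap` key entirely so posthog-js falls back to its own.
    ...(boot.installationId ? { bootstrap: { distinctID: boot.installationId } } : {}),
  };
}
```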

View file

@ -37,6 +37,7 @@
"openid-client": "^6.8.1",
"papaparse": "^5.5.3",
"pdf-parse": "^2.4.5",
"posthog-node": "^4.18.0",
"react": "^19.2.3",
"xlsx": "^0.18.5",
"yaml": "^2.8.2",

View file

@ -8,6 +8,7 @@ import { IMonotonicallyIncreasingIdGenerator } from "../application/lib/id-gen.j
import { AgentScheduleConfig, AgentScheduleEntry } from "@x/shared/dist/agent-schedule.js";
import { AgentScheduleState, AgentScheduleStateEntry } from "@x/shared/dist/agent-schedule-state.js";
import { MessageEvent } from "@x/shared/dist/runs.js";
import { createRun } from "../runs/runs.js";
import z from "zod";

const DEFAULT_STARTING_MESSAGE = "go";
@ -162,8 +163,12 @@ async function runAgent(
});

try {
// Create a new run via core (resolves agent + default model+provider).
const run = await createRun({
agentId: agentName,
useCase: 'copilot_chat',
subUseCase: 'scheduled',
});
console.log(`[AgentRunner] Created run ${run.id} for agent ${agentName}`);

// Add the starting message as a user message
View file

@ -16,8 +16,7 @@ import { isBlocked, extractCommandNames } from "../application/lib/command-execu
import container from "../di/container.js";
import { IModelConfigRepo } from "../models/repo.js";
import { createProvider } from "../models/models.js";
import { resolveProviderConfig } from "../models/defaults.js";
import { IAgentsRepo } from "./repo.js";
import { IMonotonicallyIncreasingIdGenerator } from "../application/lib/id-gen.js";
import { IBus } from "../application/lib/bus.js";
@ -27,6 +26,8 @@ import { IRunsLock } from "../runs/lock.js";
import { IAbortRegistry } from "../runs/abort-registry.js";
import { PrefixLogger } from "@x/shared";
import { parse } from "yaml";
import { captureLlmUsage } from "../analytics/usage.js";
import { enterUseCase, type UseCase } from "../analytics/use_case.js";
import { getRaw as getNoteCreationRaw } from "../knowledge/note_creation.js";
import { getRaw as getLabelingAgentRaw } from "../knowledge/labeling_agent.js";
import { getRaw as getNoteTaggingAgentRaw } from "../knowledge/note_tagging_agent.js";
@ -194,6 +195,19 @@ export class AgentRuntime implements IAgentRuntime {
await this.runsRepo.appendEvents(runId, [stoppedEvent]);
await this.bus.publish(stoppedEvent);
}
} catch (error) {
console.error(`Run ${runId} failed:`, error);
const message = error instanceof Error
? (error.stack || error.message || error.name)
: typeof error === "string" ? error : JSON.stringify(error);
const errorEvent: z.infer<typeof RunEvent> = {
runId,
type: "error",
error: message,
subflow: [],
};
await this.runsRepo.appendEvents(runId, [errorEvent]);
await this.bus.publish(errorEvent);
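The message extraction in the new catch block can be read as a small standalone function: prefer `stack`, then `message`, then `name` for real `Error`s, and stringify anything else so the run's error event always carries a string. A sketch (the function name is hypothetical; the logic mirrors the lines above):

```typescript
// Hypothetical standalone version of the catch block's message extraction.
function describeRunError(error: unknown): string {
  if (error instanceof Error) {
    // stack includes message + frames when available; fall back progressively.
    return error.stack || error.message || error.name;
  }
  // Non-Error throws: keep strings as-is, JSON-encode everything else.
  return typeof error === "string" ? error : JSON.stringify(error);
}
```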
} finally {
this.abortRegistry.cleanup(runId);
await this.runsLock.release(runId);
@ -636,6 +650,10 @@ export class AgentState {
runId: string | null = null;
agent: z.infer<typeof Agent> | null = null;
agentName: string | null = null;
runModel: string | null = null;
runProvider: string | null = null;
runUseCase: UseCase | null = null;
runSubUseCase: string | null = null;
messages: z.infer<typeof MessageList> = [];
lastAssistantMsg: z.infer<typeof AssistantMessage> | null = null;
subflowStates: Record<string, AgentState> = {};
@ -749,13 +767,22 @@ export class AgentState {
case "start":
this.runId = event.runId;
this.agentName = event.agentName;
this.runModel = event.model;
this.runProvider = event.provider;
this.runUseCase = event.useCase ?? null;
this.runSubUseCase = event.subUseCase ?? null;
break;
case "spawn-subflow":
// Seed the subflow state with its agent so downstream loadAgent works.
// Subflows inherit the parent run's model+provider — there's one pair per run.
if (!this.subflowStates[event.toolCallId]) {
this.subflowStates[event.toolCallId] = new AgentState();
}
this.subflowStates[event.toolCallId].agentName = event.agentName;
this.subflowStates[event.toolCallId].runModel = this.runModel;
this.subflowStates[event.toolCallId].runProvider = this.runProvider;
this.subflowStates[event.toolCallId].runUseCase = this.runUseCase;
this.subflowStates[event.toolCallId].runSubUseCase = this.runSubUseCase;
break;
case "message":
this.messages.push(event.message);
@ -844,35 +871,31 @@ export async function* streamAgent({
yield event;
}
// set up agent
const agent = await loadAgent(state.agentName!);

// set up tools
const tools = await buildTools(agent);

// model+provider were resolved and frozen on the run at runs:create time.
// Look up the named provider's current credentials from models.json and
// instantiate the LLM client. No selection happens here.
if (!state.runModel || !state.runProvider) {
throw new Error(`Run ${runId} is missing model/provider on its start event`);
}
const modelId = state.runModel;
const providerConfig = await resolveProviderConfig(state.runProvider);
const provider = createProvider(providerConfig);
const model = provider.languageModel(modelId);
logger.log(`using model: ${modelId} (provider: ${state.runProvider})`);

// Install use-case context for tool-internal LLM calls (e.g. parseFile)
// so they can tag their `llm_usage` events with the parent run's category.
enterUseCase({
useCase: state.runUseCase ?? "copilot_chat",
...(state.runSubUseCase ? { subUseCase: state.runSubUseCase } : {}),
...(state.agentName ? { agentName: state.agentName } : {}),
});
let loopCounter = 0;
let voiceInput = false;
@ -942,27 +965,40 @@ export async function* streamAgent({
subflow: [],
});
let result: unknown = null;
try {
if (agent.tools![toolCall.toolName].type === "agent") {
const subflowState = state.subflowStates[toolCallId];
for await (const event of streamAgent({
state: subflowState,
idGenerator,
runId,
messageQueue,
modelConfigRepo,
signal,
abortRegistry,
})) {
yield* processEvent({
...event,
subflow: [toolCallId, ...event.subflow],
});
}
if (!subflowState.getPendingAskHumans().length && !subflowState.getPendingPermissions().length) {
result = subflowState.finalResponse();
}
} else {
result = await execTool(agent.tools![toolCall.toolName], toolCall.arguments, { runId, signal, abortRegistry });
}
} catch (error) {
if ((error instanceof Error && error.name === "AbortError") || signal.aborted) {
throw error;
}
const message = error instanceof Error ? (error.message || error.name) : String(error);
_logger.log('tool failed', message);
result = {
success: false,
error: message,
toolName: toolCall.toolName,
};
}
const resultPayload = result === undefined ? null : result;
const resultMsg: z.infer<typeof ToolMessage> = {
@ -1094,6 +1130,13 @@ export async function* streamAgent({
instructionsWithDateTime,
tools,
signal,
{
useCase: state.runUseCase ?? "copilot_chat",
...(state.runSubUseCase ? { subUseCase: state.runSubUseCase } : {}),
agentName: state.agentName ?? undefined,
modelId,
providerName: state.runProvider!,
},
)) {
messageBuilder.ingest(event);
yield* processEvent({
@ -1181,12 +1224,21 @@ export async function* streamAgent({
}
}
interface StreamLlmAnalytics {
useCase: UseCase;
subUseCase?: string;
agentName?: string;
modelId: string;
providerName: string;
}
async function* streamLlm(
model: LanguageModel,
messages: z.infer<typeof MessageList>,
instructions: string,
tools: ToolSet,
signal?: AbortSignal,
analytics?: StreamLlmAnalytics,
): AsyncGenerator<z.infer<typeof LlmStepStreamEvent>, void, unknown> {
const converted = convertFromMessages(messages);
console.log(`! SENDING payload to model: `, JSON.stringify(converted))
@@ -1257,6 +1309,16 @@ async function* streamLlm(
};
break;
case "finish-step":
if (analytics) {
captureLlmUsage({
useCase: analytics.useCase,
...(analytics.subUseCase ? { subUseCase: analytics.subUseCase } : {}),
...(analytics.agentName ? { agentName: analytics.agentName } : {}),
model: analytics.modelId,
provider: analytics.providerName,
usage: event.usage,
});
}
yield {
type: "finish-step",
usage: event.usage,

View file

@@ -0,0 +1,23 @@
import { isSignedIn } from '../account/account.js';
import { getBillingInfo } from '../billing/billing.js';
import { identify } from './posthog.js';
/**
* If the user has rowboat OAuth tokens, fetch their billing info and
* call posthog.identify(). Idempotent; safe to call on every app start.
* Catches all errors so analytics never blocks app launch.
*/
export async function identifyIfSignedIn(): Promise<void> {
try {
if (!(await isSignedIn())) return;
const billing = await getBillingInfo();
if (!billing.userId) return;
identify(billing.userId, {
...(billing.userEmail ? { email: billing.userEmail } : {}),
plan: billing.subscriptionPlan,
status: billing.subscriptionStatus,
});
} catch (err) {
console.error('[Analytics] startup identify failed:', err);
}
}

View file

@@ -0,0 +1,37 @@
import fs from 'node:fs';
import path from 'node:path';
import { randomUUID } from 'node:crypto';
import { WorkDir } from '../config/config.js';
const INSTALLATION_PATH = path.join(WorkDir, 'config', 'installation.json');
let cached: string | null = null;
export function getInstallationId(): string {
if (cached) return cached;
try {
if (fs.existsSync(INSTALLATION_PATH)) {
const raw = fs.readFileSync(INSTALLATION_PATH, 'utf-8');
const parsed = JSON.parse(raw) as { installationId?: string };
if (parsed.installationId && typeof parsed.installationId === 'string') {
cached = parsed.installationId;
return cached;
}
}
} catch (err) {
console.error('[Analytics] Failed to read installation.json:', err);
}
const id = randomUUID();
try {
const dir = path.dirname(INSTALLATION_PATH);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
fs.writeFileSync(INSTALLATION_PATH, JSON.stringify({ installationId: id }, null, 2));
} catch (err) {
console.error('[Analytics] Failed to write installation.json:', err);
}
cached = id;
return id;
}

View file

@@ -0,0 +1,90 @@
import { PostHog } from 'posthog-node';
import { getInstallationId } from './installation.js';
import { API_URL } from '../config/env.js';
// Build-time injected via esbuild `define` (apps/main/bundle.mjs).
// In dev/tsc, fall back to process.env so local runs work too.
const POSTHOG_KEY = process.env.POSTHOG_KEY ?? process.env.VITE_PUBLIC_POSTHOG_KEY ?? '';
const POSTHOG_HOST = process.env.POSTHOG_HOST ?? process.env.VITE_PUBLIC_POSTHOG_HOST ?? 'https://us.i.posthog.com';
let client: PostHog | null = null;
let initAttempted = false;
let identifiedUserId: string | null = null;
function getClient(): PostHog | null {
if (initAttempted) return client;
initAttempted = true;
if (!POSTHOG_KEY) {
console.log('[Analytics] POSTHOG_KEY not set; analytics disabled');
return null;
}
try {
client = new PostHog(POSTHOG_KEY, {
host: POSTHOG_HOST,
flushAt: 20,
flushInterval: 10_000,
});
// Tag the install with api_url as a person property up-front,
// so anonymous users are also segmentable by environment (api_url
// distinguishes prod / staging / custom — meaning is assigned in PostHog).
client.identify({
distinctId: getInstallationId(),
properties: { api_url: API_URL },
});
} catch (err) {
console.error('[Analytics] Failed to init PostHog:', err);
client = null;
}
return client;
}
function activeDistinctId(): string {
return identifiedUserId ?? getInstallationId();
}
export function capture(event: string, properties?: Record<string, unknown>): void {
const ph = getClient();
if (!ph) return;
try {
ph.capture({
distinctId: activeDistinctId(),
event,
properties,
});
} catch (err) {
console.error('[Analytics] capture failed:', err);
}
}
export function identify(userId: string, properties?: Record<string, unknown>): void {
const ph = getClient();
if (!ph) return;
try {
// Alias the anonymous installation ID to the rowboat user ID so historical
// anonymous events are linked to the identified user.
ph.alias({ distinctId: userId, alias: getInstallationId() });
ph.identify({
distinctId: userId,
properties: {
...properties,
api_url: API_URL,
},
});
identifiedUserId = userId;
} catch (err) {
console.error('[Analytics] identify failed:', err);
}
}
export function reset(): void {
identifiedUserId = null;
}
export async function shutdown(): Promise<void> {
if (!client) return;
try {
await client.shutdown();
} catch (err) {
console.error('[Analytics] shutdown failed:', err);
}
}
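To make the identity routing above concrete, here is a minimal self-contained sketch (it does not use posthog-node; `makeAnalytics` and its field names are invented for illustration) of how `activeDistinctId()` attributes events before `identify()`, after it, and after `reset()`:

```typescript
// Illustrative model of the distinct-id routing: events fall back to the
// anonymous installation ID whenever no user is identified.
type Captured = { distinctId: string; event: string };

function makeAnalytics(installationId: string) {
  let identifiedUserId: string | null = null;
  const sent: Captured[] = [];
  return {
    capture(event: string) {
      // Mirrors activeDistinctId(): user ID if identified, else install ID.
      sent.push({ distinctId: identifiedUserId ?? installationId, event });
    },
    identify(userId: string) {
      identifiedUserId = userId;
    },
    reset() {
      identifiedUserId = null;
    },
    sent,
  };
}

const a = makeAnalytics("install-123");
a.capture("app_start");  // anonymous
a.identify("user-42");
a.capture("llm_usage");  // attributed to the signed-in user
a.reset();
a.capture("app_quit");   // anonymous again

console.log(a.sent.map(e => e.distinctId).join(", ")); // install-123, user-42, install-123
```

In the real module, `alias()` additionally links the anonymous installation ID to the user ID so pre-sign-in events resolve to the same person in PostHog.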

View file

@@ -0,0 +1,38 @@
import { capture } from './posthog.js';
import type { UseCase } from './use_case.js';
// Shape compatible with ai-sdk v5 `LanguageModelUsage`.
// All fields are optional because providers report subsets.
export interface LlmUsageInput {
inputTokens?: number;
outputTokens?: number;
totalTokens?: number;
reasoningTokens?: number;
cachedInputTokens?: number;
}
export interface CaptureLlmUsageArgs {
useCase: UseCase;
subUseCase?: string;
agentName?: string;
model: string;
provider: string;
usage: LlmUsageInput | undefined;
}
export function captureLlmUsage(args: CaptureLlmUsageArgs): void {
const usage = args.usage ?? {};
const properties: Record<string, unknown> = {
use_case: args.useCase,
model: args.model,
provider: args.provider,
input_tokens: usage.inputTokens ?? 0,
output_tokens: usage.outputTokens ?? 0,
total_tokens: usage.totalTokens ?? (usage.inputTokens ?? 0) + (usage.outputTokens ?? 0),
};
if (args.subUseCase) properties.sub_use_case = args.subUseCase;
if (args.agentName) properties.agent_name = args.agentName;
if (usage.cachedInputTokens != null) properties.cached_input_tokens = usage.cachedInputTokens;
if (usage.reasoningTokens != null) properties.reasoning_tokens = usage.reasoningTokens;
capture('llm_usage', properties);
}
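A small sketch of the `total_tokens` fallback in `captureLlmUsage` above: when a provider omits `totalTokens`, it is derived from input plus output tokens (the standalone `totalTokens` helper here is invented for illustration):

```typescript
// Mirrors the fallback expression used for the total_tokens property.
interface Usage { inputTokens?: number; outputTokens?: number; totalTokens?: number }

function totalTokens(usage: Usage): number {
  // Prefer the provider-reported total; otherwise sum the parts, treating
  // missing fields as zero.
  return usage.totalTokens ?? (usage.inputTokens ?? 0) + (usage.outputTokens ?? 0);
}

console.log(totalTokens({ inputTokens: 120, outputTokens: 30 })); // 150 (derived)
console.log(totalTokens({ totalTokens: 200 }));                   // 200 (reported)
console.log(totalTokens({}));                                     // 0
```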

View file

@@ -0,0 +1,28 @@
import { AsyncLocalStorage } from 'node:async_hooks';
export type UseCase = 'copilot_chat' | 'track_block' | 'meeting_note' | 'knowledge_sync';
export interface UseCaseContext {
useCase: UseCase;
subUseCase?: string;
agentName?: string;
}
const storage = new AsyncLocalStorage<UseCaseContext>();
export function withUseCase<T>(ctx: UseCaseContext, fn: () => T): T {
return storage.run(ctx, fn);
}
/**
* Permanently install a use-case context for the current async chain.
* Use inside generator functions where wrapping with `withUseCase()` doesn't
* compose. Child async work (e.g. tool execution) will inherit it.
*/
export function enterUseCase(ctx: UseCaseContext): void {
storage.enterWith(ctx);
}
export function getCurrentUseCase(): UseCaseContext | undefined {
return storage.getStore();
}
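A minimal runnable sketch of the `AsyncLocalStorage` pattern this module relies on (the `toolCall` helper is hypothetical): a context installed with `storage.run()`, as `withUseCase()` does, is visible to async work started inside it without threading parameters through call sites:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

type Ctx = { useCase: string };
const storage = new AsyncLocalStorage<Ctx>();

async function toolCall(): Promise<string> {
  // Deep inside tool execution, the context is still readable.
  return storage.getStore()?.useCase ?? 'unknown';
}

async function main() {
  // Inside run(): the context propagates across the await boundary.
  const inside = await storage.run({ useCase: 'track_block' }, () => toolCall());
  // Outside any run(): getStore() returns undefined.
  const outside = await toolCall();
  console.log(inside, outside); // track_block unknown
}

main();
```

`enterUseCase()` exists for generator bodies, where wrapping the whole generator in `run()` does not compose; `enterWith()` installs the context for the remainder of the current async chain instead.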

View file

@@ -87,6 +87,23 @@ ${schemaYaml}
**Runtime-managed fields (never write these yourself):** ` + "`" + `lastRunAt` + "`" + `, ` + "`" + `lastRunId` + "`" + `, ` + "`" + `lastRunSummary` + "`" + `.
## Do Not Set ` + "`" + `model` + "`" + ` or ` + "`" + `provider` + "`" + ` (almost always)
The schema includes optional ` + "`" + `model` + "`" + ` and ` + "`" + `provider` + "`" + ` fields. **Omit them.** A user-configurable global default already picks the right model and provider for tracks; setting per-track values bypasses that and is almost always wrong.
The only time these belong on a track:
- The user **explicitly** named a model or provider for *this specific track* in their request ("use Claude Opus for this one", "force this track onto OpenAI"). Quote the user's wording back when confirming.
Things that are **not** reasons to set these:
- "Tracks should be fast" / "I want a small model": that's a global preference, not a per-track one. Leave it; the global default exists.
- "This track is complex": write a clearer instruction; don't reach for a different model.
- "Just to be safe" / "in case it matters": this is the antipattern. Leave them out.
- The user changed their main chat model: that has nothing to do with tracks. Leave them out.
When in doubt: omit both fields. Never volunteer them. Never include them in a starter template you suggest. If you find yourself adding them as a sensible default, stop: you're wrong.
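As an illustrative sketch only (the `trackId` values, model string, and provider string are hypothetical; `model` and `provider` are the schema fields discussed above), the two cases look like:

```yaml
# Normal case: model/provider omitted, so the global track default applies.
trackId: hn-top5
---
# Only when the user explicitly named one for this track
# ("force this track onto OpenAI"):
trackId: btc-usd
model: gpt-5.2
provider: openai
```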
## Choosing a trackId
- Kebab-case, short, descriptive: ` + "`" + `chicago-time` + "`" + `, ` + "`" + `sfo-weather` + "`" + `, ` + "`" + `hn-top5` + "`" + `, ` + "`" + `btc-usd` + "`" + `.

View file

@@ -21,9 +21,10 @@ import { BrowserControlInputSchema, type BrowserControlInput } from "@x/shared/d
import type { ToolContext } from "./exec-tool.js";
import { generateText } from "ai";
import { createProvider } from "../../models/models.js";
import { getDefaultModelAndProvider, resolveProviderConfig } from "../../models/defaults.js";
import { captureLlmUsage } from "../../analytics/usage.js";
import { getCurrentUseCase } from "../../analytics/use_case.js";
import { isSignedIn } from "../../account/account.js";
import { getAccessToken } from "../../auth/tokens.js";
import { API_URL } from "../../config/env.js";
import { updateContent, updateTrackBlock } from "../../knowledge/track/fileops.js";
@@ -746,13 +747,9 @@ export const BuiltinTools: z.infer<typeof BuiltinToolsSchema> = {
const base64 = buffer.toString('base64');
const { model: modelId, provider: providerName } = await getDefaultModelAndProvider();
const providerConfig = await resolveProviderConfig(providerName);
const model = createProvider(providerConfig).languageModel(modelId);
const userPrompt = prompt || 'Convert this file to well-structured markdown.';
@@ -769,6 +766,16 @@ export const BuiltinTools: z.infer<typeof BuiltinToolsSchema> = {
],
});
const ctx = getCurrentUseCase();
captureLlmUsage({
useCase: ctx?.useCase ?? 'copilot_chat',
subUseCase: 'file_parse',
...(ctx?.agentName ? { agentName: ctx.agentName } : {}),
model: modelId,
provider: providerName,
usage: response.usage,
});
return {
success: true,
fileName,

View file

@@ -1,2 +1,2 @@
export const API_URL =
process.env.API_URL || 'https://api.x.rowboatlabs.com';

View file

@@ -3,6 +3,7 @@ import path from 'path';
import { google } from 'googleapis';
import { WorkDir } from '../config/config.js';
import { createRun, createMessage } from '../runs/runs.js';
import { getKgModel } from '../models/defaults.js';
import { waitForRunCompletion } from '../agents/utils.js';
import { serviceLogger } from '../services/service_logger.js';
import { loadUserConfig, updateUserEmail } from '../config/user_config.js';
@@ -305,7 +306,12 @@ async function processAgentNotes(): Promise<void> {
const timestamp = new Date().toISOString();
const message = `Current timestamp: ${timestamp}\n\nProcess the following source material and update the Agent Notes folder accordingly.\n\n${messageParts.join('\n\n')}`;
const agentRun = await createRun({
agentId: AGENT_ID,
model: await getKgModel(),
useCase: 'knowledge_sync',
subUseCase: 'agent_notes',
});
await createMessage(agentRun.id, message);
await waitForRunCompletion(agentRun.id);

View file

@@ -38,6 +38,7 @@ const SOURCE_FOLDERS = [
'gmail_sync',
path.join('knowledge', 'Meetings', 'fireflies'),
path.join('knowledge', 'Meetings', 'granola'),
path.join('knowledge', 'Meetings', 'rowboat'),
];
// Voice memos are now created directly in knowledge/Voice Memos/<date>/
@@ -251,6 +252,8 @@ async function createNotesFromBatch(
// Create a run for the note creation agent
const run = await createRun({
agentId: NOTE_CREATION_AGENT,
useCase: 'knowledge_sync',
subUseCase: 'build_graph',
});
const suggestedTopicsContent = readSuggestedTopicsFile();

View file

@@ -21,14 +21,14 @@ const SECTIONS: Section[] = [
instruction:
`Write 1-3 sentences of plain markdown giving the user a shoulder-tap about what's next on their calendar today.
This section refreshes on calendar changes, not on a clock tick; do NOT promise live minute countdowns. Frame urgency in buckets based on the event's start time relative to now:
- Start time is in the past or within roughly half an hour (imminent): name the meeting and say it's starting soon (e.g. "Standup is starting — join link in the Calendar section below.").
- Start time is later this morning or this afternoon (upcoming): name the meeting and roughly when (e.g. "Design review later this morning." / "1:1 with Sam this afternoon.").
- Start time is several hours out, or nothing before then (focus block): frame the gap (e.g. "Next up is the all-hands at 3pm — good long focus block until then.").
Use the event's start time of day ("at 3pm", "this afternoon") rather than a countdown ("in 40 minutes"). Countdowns go stale between syncs.
Data: read today's events from calendar_sync/ (workspace-readdir, then workspace-readFile each .json file). Filter to events whose start datetime is today and hasn't ended yet; for finding the next event, pick the earliest upcoming one; if all have passed, treat as clear.
If you find quick context in knowledge/ that's genuinely useful, add one short clause ("Ramnique pushed the OAuth PR yesterday — might come up"). Use workspace-grep / workspace-readFile conservatively; don't stall on deep research.
@@ -38,10 +38,6 @@ Plain markdown prose only — no calendar block, no email block, no headings.`,
eventMatchCriteria:
`Calendar event changes affecting today — new meetings, reschedules, cancellations, meetings starting soon. Skip changes to events on other days.`,
active: true,
},
},
{
@@ -53,16 +49,14 @@ Plain markdown prose only — no calendar block, no email block, no headings.`,
Data: read calendar_sync/ via workspace-readdir, then workspace-readFile each .json event file. Filter to events occurring today. After 10am local time, drop meetings that have already ended; only include meetings that haven't ended yet.
This section refreshes on calendar changes, not on a clock tick; the "drop ended meetings" rule applies on each refresh, so an ended meeting disappears the next time any calendar event changes (not exactly on the clock hour). That's fine.
Always emit the calendar block, even when there are no remaining events (in that case use events: [] and showJoinButton: false). Set showJoinButton: true whenever any event has a conferenceLink.
After the block, you MAY add one short markdown line per event giving useful prep context pulled from knowledge/ ("Design review: last week we agreed to revisit the type-picker UX."). Keep it tight: one line each, only when meaningful. Skip routine/recurring meetings.`,
eventMatchCriteria:
`Calendar event changes affecting today — additions, updates, cancellations, reschedules.`,
active: true,
},
},
{
@@ -72,7 +66,7 @@ After the block, you MAY add one short markdown line per event giving useful pre
instruction:
`Maintain a digest of email threads worth the user's attention today, rendered as zero or more email blocks (one per thread).
Event-driven path (primary): the agent message will include a "Gmail sync update" digest payload describing one or more freshly-synced threads from a single sync run. The digest lists each thread with its subject, sender, date, threadId, and body. Iterate over every thread in the payload and decide per thread whether it warrants surfacing. Skip marketing, auto-notifications, closed-out threads, and other low-signal mail. For threads that are attention-worthy, integrate them into the existing digest: add a new email block for a new threadId, or update the existing block if the threadId is already shown. If NONE of the threads in the payload are attention-worthy, skip the update; do NOT call update-track-content. Emit at most one update-track-content call that covers the full set of changes from this event.
Manual path (fallback): with no event payload, scan gmail_sync/ via workspace-readdir (skip sync_state.json and attachments/). Read threads with workspace-readFile. Prioritize threads whose frontmatter action field is "reply" or "respond", plus other high-signal recent threads.

View file

@@ -13,7 +13,6 @@ export function getRaw(): string {
const defaultEndISO = defaultEnd.toISOString();
return `---
tools:
${toolEntries}
---

View file

@@ -4,11 +4,13 @@ import { CronExpressionParser } from 'cron-parser';
import { generateText } from 'ai';
import { WorkDir } from '../config/config.js';
import { createRun, createMessage, fetchRun } from '../runs/runs.js';
import { getKgModel } from '../models/defaults.js';
import container from '../di/container.js';
import type { IModelConfigRepo } from '../models/repo.js';
import { createProvider } from '../models/models.js';
import { inlineTask } from '@x/shared';
import { extractAgentResponse, waitForRunCompletion } from '../agents/utils.js';
import { captureLlmUsage } from '../analytics/usage.js';
const SYNC_INTERVAL_MS = 15 * 1000; // 15 seconds
const INLINE_TASK_AGENT = 'inline_task_agent';
@@ -467,7 +469,12 @@ async function processInlineTasks(): Promise<void> {
console.log(`[InlineTasks] Running task: "${task.instruction.slice(0, 80)}..."`);
try {
const run = await createRun({
agentId: INLINE_TASK_AGENT,
model: await getKgModel(),
useCase: 'knowledge_sync',
subUseCase: 'inline_task_run',
});
const message = [
`Execute the following instruction from the note "${relativePath}":`,
@@ -547,7 +554,12 @@ export async function processRowboatInstruction(
scheduleLabel: string | null;
response: string | null;
}> {
const run = await createRun({
agentId: INLINE_TASK_AGENT,
model: await getKgModel(),
useCase: 'knowledge_sync',
subUseCase: 'inline_task_run',
});
const message = [
`Process the following @rowboat instruction from the note "${notePath}":`,
@@ -658,6 +670,14 @@ Respond with ONLY valid JSON: either a schedule object or null. No other text.`;
prompt: instruction,
});
captureLlmUsage({
useCase: 'knowledge_sync',
subUseCase: 'inline_task_classify',
model: config.model,
provider: config.provider.flavor,
usage: result.usage,
});
let text = result.text.trim();
console.log('[classifySchedule] LLM response:', text);
// Strip markdown code fences if the LLM wraps the JSON

View file

@@ -2,6 +2,7 @@ import fs from 'fs';
import path from 'path';
import { WorkDir } from '../config/config.js';
import { createRun, createMessage } from '../runs/runs.js';
import { getKgModel } from '../models/defaults.js';
import { bus } from '../runs/bus.js';
import { waitForRunCompletion } from '../agents/utils.js';
import { serviceLogger } from '../services/service_logger.js';
@@ -71,6 +72,9 @@ async function labelEmailBatch(
): Promise<{ runId: string; filesEdited: Set<string> }> {
const run = await createRun({
agentId: LABELING_AGENT,
model: await getKgModel(),
useCase: 'knowledge_sync',
subUseCase: 'label_emails',
});
let message = `Label the following ${files.length} email files by prepending YAML frontmatter.\n\n`;

View file

@@ -2,7 +2,6 @@ import { renderTagSystemForEmails } from './tag_system.js';
export function getRaw(): string {
return `---
tools:
workspace-readFile:
type: builtin

View file

@@ -3,7 +3,6 @@ import { renderNoteEffectRules } from './tag_system.js';
export function getRaw(): string {
return `---
tools:
workspace-writeFile:
type: builtin

View file

@@ -2,7 +2,6 @@ import { renderTagSystemForNotes } from './tag_system.js';
export function getRaw(): string {
return `---
tools:
workspace-readFile:
type: builtin

View file

@@ -1,12 +1,10 @@
import fs from 'fs';
import path from 'path';
import { generateText } from 'ai';
import { createProvider } from '../models/models.js';
import { getDefaultModelAndProvider, getMeetingNotesModel, resolveProviderConfig } from '../models/defaults.js';
import { WorkDir } from '../config/config.js';
import { captureLlmUsage } from '../analytics/usage.js';
const CALENDAR_SYNC_DIR = path.join(WorkDir, 'calendar_sync');
@@ -138,15 +136,10 @@ function loadCalendarEventContext(calendarEventJson: string): string {
}
export async function summarizeMeeting(transcript: string, meetingStartTime?: string, calendarEventJson?: string): Promise<string> {
const modelId = await getMeetingNotesModel();
const { provider: providerName } = await getDefaultModelAndProvider();
const providerConfig = await resolveProviderConfig(providerName);
const model = createProvider(providerConfig).languageModel(modelId);
// If a specific calendar event was linked, use it directly.
// Otherwise fall back to scanning events within ±3 hours.
@@ -165,5 +158,12 @@ export async function summarizeMeeting(transcript: string, meetingStartTime?: st
prompt,
});
captureLlmUsage({
useCase: 'meeting_note',
model: modelId,
provider: providerName,
usage: result.usage,
});
return result.text.trim();
}

View file

@@ -15,8 +15,52 @@ import { createEvent } from './track/events.js';
const SYNC_DIR = path.join(WorkDir, 'gmail_sync');
const SYNC_INTERVAL_MS = 5 * 60 * 1000; // Check every 5 minutes
const REQUIRED_SCOPE = 'https://www.googleapis.com/auth/gmail.readonly';
const MAX_THREADS_IN_DIGEST = 10;
const nhm = new NodeHtmlMarkdown();
interface SyncedThread {
threadId: string;
markdown: string;
}
function summarizeGmailSync(threads: SyncedThread[]): string {
const lines: string[] = [
`# Gmail sync update`,
``,
`${threads.length} new/updated thread${threads.length === 1 ? '' : 's'}.`,
``,
];
const shown = threads.slice(0, MAX_THREADS_IN_DIGEST);
const hidden = threads.length - shown.length;
if (shown.length > 0) {
lines.push(`## Threads`, ``);
for (const { markdown } of shown) {
lines.push(markdown.trimEnd(), ``, `---`, ``);
}
if (hidden > 0) {
lines.push(`_…and ${hidden} more thread(s) omitted from digest._`, ``);
}
}
return lines.join('\n');
}
async function publishGmailSyncEvent(threads: SyncedThread[]): Promise<void> {
if (threads.length === 0) return;
try {
await createEvent({
source: 'gmail',
type: 'email.synced',
createdAt: new Date().toISOString(),
payload: summarizeGmailSync(threads),
});
} catch (err) {
console.error('[Gmail] Failed to publish sync event:', err);
}
}
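A quick sketch of the truncation arithmetic in `summarizeGmailSync` (the `digestCounts` helper is invented for illustration; the real function also builds the markdown body):

```typescript
// At most MAX_THREADS_IN_DIGEST threads are inlined in the digest;
// the remainder is reported only as an "…and N more" count.
const MAX_THREADS_IN_DIGEST = 10;

function digestCounts(total: number): { shown: number; hidden: number } {
  const shown = Math.min(total, MAX_THREADS_IN_DIGEST);
  return { shown, hidden: total - shown };
}

console.log(digestCounts(3));  // { shown: 3, hidden: 0 }
console.log(digestCounts(12)); // { shown: 10, hidden: 2 }
```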
// --- Wake Signal for Immediate Sync Trigger ---
let wakeResolve: (() => void) | null = null;
@@ -113,14 +157,14 @@ async function saveAttachment(gmail: gmail.Gmail, userId: string, msgId: string,
// --- Sync Logic ---
async function processThread(auth: OAuth2Client, threadId: string, syncDir: string, attachmentsDir: string): Promise<SyncedThread | null> {
const gmail = google.gmail({ version: 'v1', auth });
try {
const res = await gmail.users.threads.get({ userId: 'me', id: threadId });
const thread = res.data;
const messages = thread.messages;
if (!messages || messages.length === 0) return null;
// Subject from first message // Subject from first message
const firstHeader = messages[0].payload?.headers; const firstHeader = messages[0].payload?.headers;
@@ -173,15 +217,11 @@ async function processThread(auth: OAuth2Client, threadId: string, syncDir: stri
     fs.writeFileSync(path.join(syncDir, `${threadId}.md`), mdContent);
     console.log(`Synced Thread: ${subject} (${threadId})`);
-    await createEvent({
-      source: 'gmail',
-      type: 'email.synced',
-      createdAt: new Date().toISOString(),
-      payload: mdContent,
-    });
+    return { threadId, markdown: mdContent };
   } catch (error) {
     console.error(`Error processing thread ${threadId}:`, error);
+    return null;
   }
 }
@@ -262,10 +302,14 @@ async function fullSync(auth: OAuth2Client, syncDir: string, attachmentsDir: str
     truncated: limitedThreads.truncated,
   });
+  const synced: SyncedThread[] = [];
   for (const threadId of threadIds) {
-    await processThread(auth, threadId, syncDir, attachmentsDir);
+    const result = await processThread(auth, threadId, syncDir, attachmentsDir);
+    if (result) synced.push(result);
   }
+  await publishGmailSyncEvent(synced);
   saveState(currentHistoryId, stateFile);
   await serviceLogger.log({
     type: 'run_complete',
@@ -365,10 +409,14 @@ async function partialSync(auth: OAuth2Client, startHistoryId: string, syncDir:
     truncated: limitedThreads.truncated,
   });
+  const synced: SyncedThread[] = [];
   for (const tid of threadIdList) {
-    await processThread(auth, tid, syncDir, attachmentsDir);
+    const result = await processThread(auth, tid, syncDir, attachmentsDir);
+    if (result) synced.push(result);
   }
+  await publishGmailSyncEvent(synced);
   const profile = await gmail.users.getProfile({ userId: 'me' });
   saveState(profile.data.historyId!, stateFile);
   await serviceLogger.log({
@@ -565,7 +613,12 @@ function extractBodyFromPayload(payload: Record<string, unknown>): string {
   return '';
 }
-async function processThreadComposio(connectedAccountId: string, threadId: string, syncDir: string): Promise<string | null> {
+interface ComposioThreadResult {
+  synced: SyncedThread | null;
+  newestIsoPlusOne: string | null;
+}
+async function processThreadComposio(connectedAccountId: string, threadId: string, syncDir: string): Promise<ComposioThreadResult> {
   let threadResult;
   try {
     threadResult = await executeAction(
@@ -579,40 +632,34 @@ async function processThreadComposio(connectedAccountId: string, threadId: strin
     );
   } catch (error) {
     console.warn(`[Gmail] Skipping thread ${threadId} (fetch failed):`, error instanceof Error ? error.message : error);
-    return null;
+    return { synced: null, newestIsoPlusOne: null };
   }
   if (!threadResult.successful || !threadResult.data) {
     console.error(`[Gmail] Failed to fetch thread ${threadId}:`, threadResult.error);
-    return null;
+    return { synced: null, newestIsoPlusOne: null };
   }
   const data = threadResult.data as Record<string, unknown>;
   const messages = data.messages as Array<Record<string, unknown>> | undefined;
   let newestDate: Date | null = null;
+  let mdContent: string;
+  let subjectForLog: string;
   if (!messages || messages.length === 0) {
     const parsed = parseMessageData(data);
-    const mdContent = `# ${parsed.subject}\n\n` +
+    mdContent = `# ${parsed.subject}\n\n` +
       `**Thread ID:** ${threadId}\n` +
       `**Message Count:** 1\n\n---\n\n` +
       `### From: ${parsed.from}\n` +
       `**Date:** ${parsed.date}\n\n` +
       `${parsed.body}\n\n---\n\n`;
+    subjectForLog = parsed.subject;
-    fs.writeFileSync(path.join(syncDir, `${cleanFilename(threadId)}.md`), mdContent);
-    console.log(`[Gmail] Synced Thread: ${parsed.subject} (${threadId})`);
-    await createEvent({
-      source: 'gmail',
-      type: 'email.synced',
-      createdAt: new Date().toISOString(),
-      payload: mdContent,
-    });
     newestDate = tryParseDate(parsed.date);
   } else {
     const firstParsed = parseMessageData(messages[0]);
-    let mdContent = `# ${firstParsed.subject}\n\n`;
+    mdContent = `# ${firstParsed.subject}\n\n`;
     mdContent += `**Thread ID:** ${threadId}\n`;
     mdContent += `**Message Count:** ${messages.length}\n\n---\n\n`;
@@ -628,19 +675,14 @@ async function processThreadComposio(connectedAccountId: string, threadId: strin
         newestDate = msgDate;
       }
     }
-    fs.writeFileSync(path.join(syncDir, `${cleanFilename(threadId)}.md`), mdContent);
-    console.log(`[Gmail] Synced Thread: ${firstParsed.subject} (${threadId})`);
-    await createEvent({
-      source: 'gmail',
-      type: 'email.synced',
-      createdAt: new Date().toISOString(),
-      payload: mdContent,
-    });
+    subjectForLog = firstParsed.subject;
   }
-  if (!newestDate) return null;
-  return new Date(newestDate.getTime() + 1000).toISOString();
+  fs.writeFileSync(path.join(syncDir, `${cleanFilename(threadId)}.md`), mdContent);
+  console.log(`[Gmail] Synced Thread: ${subjectForLog} (${threadId})`);
+  const newestIsoPlusOne = newestDate ? new Date(newestDate.getTime() + 1000).toISOString() : null;
+  return { synced: { threadId, markdown: mdContent }, newestIsoPlusOne };
 }
 async function performSyncComposio() {
@@ -751,19 +793,22 @@ async function performSyncComposio() {
   let highWaterMark: string | null = state?.last_sync ?? null;
   let processedCount = 0;
+  const synced: SyncedThread[] = [];
   for (const threadId of allThreadIds) {
     // Re-check connection in case user disconnected mid-sync
     if (!composioAccountsRepo.isConnected('gmail')) {
       console.log('[Gmail] Account disconnected during sync. Stopping.');
-      return;
+      break;
     }
     try {
-      const newestInThread = await processThreadComposio(connectedAccountId, threadId, SYNC_DIR);
+      const result = await processThreadComposio(connectedAccountId, threadId, SYNC_DIR);
       processedCount++;
-      if (newestInThread) {
-        if (!highWaterMark || new Date(newestInThread) > new Date(highWaterMark)) {
-          highWaterMark = newestInThread;
+      if (result.synced) synced.push(result.synced);
+      if (result.newestIsoPlusOne) {
+        if (!highWaterMark || new Date(result.newestIsoPlusOne) > new Date(highWaterMark)) {
+          highWaterMark = result.newestIsoPlusOne;
         }
         saveComposioState(STATE_FILE, highWaterMark);
       }
@@ -772,6 +817,8 @@ async function performSyncComposio() {
     }
   }
+  await publishGmailSyncEvent(synced);
   await serviceLogger.log({
     type: 'run_complete',
     service: run!.service,
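The Composio loop above persists a high-water mark after each processed thread: it keeps the later of the stored mark and the thread's newest-message timestamp (plus one second, as computed by `processThreadComposio`). The comparison is worth isolating because ISO-8601 strings are compared as `Date`s, not lexically. A minimal sketch (the helper name is ours, not from the codebase):

```typescript
// Advance an ISO-8601 high-water mark: return the later of the stored
// mark and a candidate timestamp; a null candidate leaves it untouched.
function advanceHighWaterMark(current: string | null, candidate: string | null): string | null {
  if (!candidate) return current;
  if (!current || new Date(candidate) > new Date(current)) return candidate;
  return current;
}
```

Because the mark is saved after every thread, a sync interrupted mid-loop (including the `break` on disconnect) resumes from the last processed thread rather than re-fetching everything.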
View file
@@ -2,6 +2,7 @@ import fs from 'fs';
 import path from 'path';
 import { WorkDir } from '../config/config.js';
 import { createRun, createMessage } from '../runs/runs.js';
+import { getKgModel } from '../models/defaults.js';
 import { bus } from '../runs/bus.js';
 import { waitForRunCompletion } from '../agents/utils.js';
 import { serviceLogger } from '../services/service_logger.js';
@@ -84,6 +85,9 @@ async function tagNoteBatch(
 ): Promise<{ runId: string; filesEdited: Set<string> }> {
   const run = await createRun({
     agentId: NOTE_TAGGING_AGENT,
+    model: await getKgModel(),
+    useCase: 'knowledge_sync',
+    subUseCase: 'tag_notes',
   });
   let message = `Tag the following ${files.length} knowledge notes by prepending YAML frontmatter with appropriate tags.\n\n`;
View file
@@ -1,11 +1,9 @@
 import { generateObject } from 'ai';
 import { trackBlock, PrefixLogger } from '@x/shared';
 import type { KnowledgeEvent } from '@x/shared/dist/track-block.js';
-import container from '../../di/container.js';
-import type { IModelConfigRepo } from '../../models/repo.js';
 import { createProvider } from '../../models/models.js';
-import { isSignedIn } from '../../account/account.js';
-import { getGatewayProvider } from '../../models/gateway.js';
+import { getDefaultModelAndProvider, getTrackBlockModel, resolveProviderConfig } from '../../models/defaults.js';
+import { captureLlmUsage } from '../../analytics/usage.js';
 const log = new PrefixLogger('TrackRouting');
@@ -37,15 +35,14 @@ Rules:
 - For each candidate, return BOTH trackId and filePath exactly as given. trackIds are not globally unique.`;
 async function resolveModel() {
-  const repo = container.resolve<IModelConfigRepo>('modelConfigRepo');
-  const config = await repo.getConfig();
-  const signedIn = await isSignedIn();
-  const provider = signedIn
-    ? await getGatewayProvider()
-    : createProvider(config.provider);
-  const modelId = config.knowledgeGraphModel
-    || (signedIn ? 'gpt-5.4' : config.model);
-  return provider.languageModel(modelId);
+  const modelId = await getTrackBlockModel();
+  const { provider } = await getDefaultModelAndProvider();
+  const config = await resolveProviderConfig(provider);
+  return {
+    model: createProvider(config).languageModel(modelId),
+    modelId,
+    providerName: provider,
+  };
 }
 function buildRoutingPrompt(event: KnowledgeEvent, batch: ParsedTrack[]): string {
@@ -92,19 +89,26 @@ export async function findCandidates(
   log.log(`Routing event ${event.id} against ${filtered.length} track(s)`);
-  const model = await resolveModel();
+  const { model, modelId, providerName } = await resolveModel();
   const candidateKeys = new Set<string>();
   for (let i = 0; i < filtered.length; i += BATCH_SIZE) {
     const batch = filtered.slice(i, i + BATCH_SIZE);
     try {
-      const { object } = await generateObject({
+      const result = await generateObject({
         model,
         system: ROUTING_SYSTEM_PROMPT,
         prompt: buildRoutingPrompt(event, batch),
         schema: trackBlock.Pass1OutputSchema,
       });
-      for (const c of object.candidates) {
+      captureLlmUsage({
+        useCase: 'track_block',
+        subUseCase: 'routing',
+        model: modelId,
+        provider: providerName,
+        usage: result.usage,
+      });
+      for (const c of result.object.candidates) {
         candidateKeys.add(trackKey(c.trackId, c.filePath));
       }
     } catch (err) {
View file
@@ -1,6 +1,7 @@
 import z from 'zod';
 import { fetchAll, updateTrackBlock } from './fileops.js';
 import { createRun, createMessage } from '../../runs/runs.js';
+import { getTrackBlockModel } from '../../models/defaults.js';
 import { extractAgentResponse, waitForRunCompletion } from '../../agents/utils.js';
 import { trackBus } from './bus.js';
 import type { TrackStateSchema } from './types.js';
@@ -101,8 +102,17 @@ export async function triggerTrackUpdate(
   const contentBefore = track.content;
-  // Emit start event — runId is set after agent run is created
-  const agentRun = await createRun({ agentId: 'track-run' });
+  // Per-track model/provider overrides win when set; otherwise fall back
+  // to the configured trackBlockModel default and the run-creation
+  // provider default (signed-in: rowboat; BYOK: active provider).
+  const model = track.track.model ?? await getTrackBlockModel();
+  const agentRun = await createRun({
+    agentId: 'track-run',
+    model,
+    ...(track.track.provider ? { provider: track.track.provider } : {}),
+    useCase: 'track_block',
+    subUseCase: 'run',
+  });
   // Set lastRunAt and lastRunId immediately (before agent executes) so
   // the scheduler's next poll won't re-trigger this track.
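The `...(track.track.provider ? { provider: ... } : {})` spread in the `createRun` call is deliberate: when there is no override, the key is omitted entirely rather than passed as `provider: undefined`, so downstream `??` fallback chains behave the same for "unset" and "absent". A tiny sketch of the pattern (the helper is hypothetical, not from the codebase):

```typescript
// Conditional spread: include `provider` only when an override exists,
// so the key is absent (not `undefined`) when nothing was specified.
function buildRunOpts(agentId: string, model: string, provider?: string) {
  return {
    agentId,
    model,
    ...(provider ? { provider } : {}),
  };
}
```

This matters for code that distinguishes a missing key from an explicit `undefined`, e.g. JSON serialization or `'provider' in opts` checks.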
View file
@@ -0,0 +1,88 @@
+import z from "zod";
+import { LlmProvider } from "@x/shared/dist/models.js";
+import { IModelConfigRepo } from "./repo.js";
+import { isSignedIn } from "../account/account.js";
+import container from "../di/container.js";
+
+const SIGNED_IN_DEFAULT_MODEL = "gpt-5.4";
+const SIGNED_IN_DEFAULT_PROVIDER = "rowboat";
+const SIGNED_IN_KG_MODEL = "anthropic/claude-haiku-4.5";
+const SIGNED_IN_TRACK_BLOCK_MODEL = "anthropic/claude-haiku-4.5";
+
+/**
+ * The single source of truth for "what model+provider should we use when
+ * the caller didn't specify and the agent didn't declare". Returns names only.
+ * This is the only place that branches on signed-in state.
+ */
+export async function getDefaultModelAndProvider(): Promise<{ model: string; provider: string }> {
+  if (await isSignedIn()) {
+    return { model: SIGNED_IN_DEFAULT_MODEL, provider: SIGNED_IN_DEFAULT_PROVIDER };
+  }
+  const repo = container.resolve<IModelConfigRepo>("modelConfigRepo");
+  const cfg = await repo.getConfig();
+  return { model: cfg.model, provider: cfg.provider.flavor };
+}
+
+/**
+ * Resolve a provider name (as stored on a run, an agent, or returned by
+ * getDefaultModelAndProvider) into the full LlmProvider config that
+ * createProvider expects (apiKey/baseURL/headers).
+ *
+ * - "rowboat": the gateway provider (auth via OAuth bearer; no creds field).
+ * - other names look up models.json's `providers[name]` map.
+ * - fallback: if the name matches the active default's flavor (legacy
+ *   single-provider configs that didn't write to the providers map yet).
+ */
+export async function resolveProviderConfig(name: string): Promise<z.infer<typeof LlmProvider>> {
+  if (name === "rowboat") {
+    return { flavor: "rowboat" };
+  }
+  const repo = container.resolve<IModelConfigRepo>("modelConfigRepo");
+  const cfg = await repo.getConfig();
+  const entry = cfg.providers?.[name];
+  if (entry) {
+    return LlmProvider.parse({
+      flavor: name,
+      apiKey: entry.apiKey,
+      baseURL: entry.baseURL,
+      headers: entry.headers,
+    });
+  }
+  if (cfg.provider.flavor === name) {
+    return cfg.provider;
+  }
+  throw new Error(`Provider '${name}' is referenced but not configured`);
+}
+
+/**
+ * Model used by knowledge-graph agents (note_creation, labeling_agent, etc.)
+ * when they're the top-level of a run. Signed-in: curated default.
+ * BYOK: user override (`knowledgeGraphModel`) or assistant model.
+ */
+export async function getKgModel(): Promise<string> {
+  if (await isSignedIn()) return SIGNED_IN_KG_MODEL;
+  const cfg = await container.resolve<IModelConfigRepo>("modelConfigRepo").getConfig();
+  return cfg.knowledgeGraphModel ?? cfg.model;
+}
+
+/**
+ * Model used by track-block runner + routing classifier.
+ * Signed-in: curated default. BYOK: user override (`trackBlockModel`) or
+ * assistant model.
+ */
+export async function getTrackBlockModel(): Promise<string> {
+  if (await isSignedIn()) return SIGNED_IN_TRACK_BLOCK_MODEL;
+  const cfg = await container.resolve<IModelConfigRepo>("modelConfigRepo").getConfig();
+  return cfg.trackBlockModel ?? cfg.model;
+}
+/**
+ * Model used by the meeting-notes summarizer. No special signed-in default:
+ * historically, meetings used the assistant model. BYOK: user override
+ * (`meetingNotesModel`) or assistant model.
+ */
+export async function getMeetingNotesModel(): Promise<string> {
+  if (await isSignedIn()) return SIGNED_IN_DEFAULT_MODEL;
+  const cfg = await container.resolve<IModelConfigRepo>("modelConfigRepo").getConfig();
+  return cfg.meetingNotesModel ?? cfg.model;
+}
View file
@@ -10,7 +10,7 @@ const authedFetch: typeof fetch = async (input, init) => {
   return fetch(input, { ...init, headers });
 };
-export async function getGatewayProvider(): Promise<ProviderV2> {
+export function getGatewayProvider(): ProviderV2 {
   return createOpenRouter({
     baseURL: `${API_URL}/v1/llm`,
     apiKey: 'managed-by-rowboat',
View file
@@ -8,7 +8,6 @@ import { createOpenRouter } from '@openrouter/ai-sdk-provider';
 import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { LlmModelConfig, LlmProvider } from "@x/shared/dist/models.js";
 import z from "zod";
-import { isSignedIn } from "../account/account.js";
 import { getGatewayProvider } from "./gateway.js";
 export const Provider = LlmProvider;
@@ -65,6 +64,8 @@ export function createProvider(config: z.infer<typeof Provider>): ProviderV2 {
         baseURL,
         headers,
       }) as unknown as ProviderV2;
+    case "rowboat":
+      return getGatewayProvider();
     default:
       throw new Error(`Unsupported provider flavor: ${config.flavor}`);
   }
@@ -80,9 +81,7 @@ export async function testModelConnection(
   const controller = new AbortController();
   const timeout = setTimeout(() => controller.abort(), effectiveTimeout);
   try {
-    const provider = await isSignedIn()
-      ? await getGatewayProvider()
-      : createProvider(providerConfig);
+    const provider = createProvider(providerConfig);
     const languageModel = provider.languageModel(model);
     await generateText({
       model: languageModel,
View file
@@ -52,6 +52,7 @@ export class FSModelConfigRepo implements IModelConfigRepo {
       models: config.models,
       knowledgeGraphModel: config.knowledgeGraphModel,
       meetingNotesModel: config.meetingNotesModel,
+      trackBlockModel: config.trackBlockModel,
     };
     const toWrite = { ...config, providers: existingProviders };
View file
@@ -1,5 +1,4 @@
 ---
-model: gpt-4.1
 tools:
   workspace-readFile:
     type: builtin
View file
@@ -1,5 +1,4 @@
 ---
-model: gpt-4.1
 tools:
   workspace-readFile:
     type: builtin
View file
@@ -2,6 +2,7 @@ import fs from 'fs';
 import path from 'path';
 import { WorkDir } from '../config/config.js';
 import { createRun, createMessage } from '../runs/runs.js';
+import { getKgModel } from '../models/defaults.js';
 import { waitForRunCompletion } from '../agents/utils.js';
 import {
   loadConfig,
@@ -41,6 +42,9 @@ async function runAgent(agentName: string): Promise<void> {
   // The agent file is expected to be in the agents directory with the same name
   const run = await createRun({
     agentId: agentName,
+    model: await getKgModel(),
+    useCase: 'knowledge_sync',
+    subUseCase: 'pre_built',
   });
   // Build trigger message with user context
View file
@@ -5,10 +5,35 @@ import path from "path";
 import fsp from "fs/promises";
 import fs from "fs";
 import readline from "readline";
-import { Run, RunEvent, StartEvent, CreateRunOptions, ListRunsResponse, MessageEvent } from "@x/shared/dist/runs.js";
+import { Run, RunEvent, StartEvent, ListRunsResponse, MessageEvent, UseCase } from "@x/shared/dist/runs.js";
+import { getDefaultModelAndProvider } from "../models/defaults.js";
+
+/**
+ * Reading-only schemas: extend the canonical `StartEvent` / `RunEvent` to
+ * accept legacy run files written before `model`/`provider` were required.
+ *
+ * `RunEvent.or(LegacyStartEvent)` works because zod unions try left-to-right:
+ * for any non-start event RunEvent matches first; for a strict start event
+ * RunEvent still matches; only a legacy start event falls through and parses
+ * as LegacyStartEvent. New event types stay maintained in one place
+ * (`@x/shared/dist/runs.js`); the lenient form just adds one fallback variant.
+ */
+const LegacyStartEvent = StartEvent.extend({
+  model: z.string().optional(),
+  provider: z.string().optional(),
+});
+const ReadRunEvent = RunEvent.or(LegacyStartEvent);
+
+export type CreateRunRepoOptions = {
+  agentId: string;
+  model: string;
+  provider: string;
+  useCase: z.infer<typeof UseCase>;
+  subUseCase?: string;
+};
+
 export interface IRunsRepo {
-  create(options: z.infer<typeof CreateRunOptions>): Promise<z.infer<typeof Run>>;
+  create(options: CreateRunRepoOptions): Promise<z.infer<typeof Run>>;
   fetch(id: string): Promise<z.infer<typeof Run>>;
   list(cursor?: string): Promise<z.infer<typeof ListRunsResponse>>;
   appendEvents(runId: string, events: z.infer<typeof RunEvent>[]): Promise<void>;
@@ -69,16 +94,19 @@ export class FSRunsRepo implements IRunsRepo {
   /**
    * Read file line-by-line using streams, stopping early once we have
    * the start event and title (or determine there's no title).
+   *
+   * Parses the start event with `LegacyStartEvent` so runs written before
+   * `model`/`provider` were required still surface in the list view.
    */
   private async readRunMetadata(filePath: string): Promise<{
-    start: z.infer<typeof StartEvent>;
+    start: z.infer<typeof LegacyStartEvent>;
     title: string | undefined;
   } | null> {
     return new Promise((resolve) => {
       const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
       const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });
-      let start: z.infer<typeof StartEvent> | null = null;
+      let start: z.infer<typeof LegacyStartEvent> | null = null;
       let title: string | undefined;
       let lineIndex = 0;
@@ -88,11 +116,10 @@ export class FSRunsRepo implements IRunsRepo {
         try {
           if (lineIndex === 0) {
-            // First line should be the start event
-            start = StartEvent.parse(JSON.parse(trimmed));
+            start = LegacyStartEvent.parse(JSON.parse(trimmed));
           } else {
             // Subsequent lines - look for first user message or assistant response
-            const event = RunEvent.parse(JSON.parse(trimmed));
+            const event = ReadRunEvent.parse(JSON.parse(trimmed));
             if (event.type === 'message') {
               const msg = event.message;
               if (msg.role === 'user') {
@@ -157,13 +184,17 @@ export class FSRunsRepo implements IRunsRepo {
     );
   }
-  async create(options: z.infer<typeof CreateRunOptions>): Promise<z.infer<typeof Run>> {
+  async create(options: CreateRunRepoOptions): Promise<z.infer<typeof Run>> {
     const runId = await this.idGenerator.next();
     const ts = new Date().toISOString();
     const start: z.infer<typeof StartEvent> = {
       type: "start",
       runId,
       agentName: options.agentId,
+      model: options.model,
+      provider: options.provider,
+      useCase: options.useCase,
+      ...(options.subUseCase ? { subUseCase: options.subUseCase } : {}),
       subflow: [],
       ts,
     };
@@ -172,24 +203,45 @@ export class FSRunsRepo implements IRunsRepo {
       id: runId,
       createdAt: ts,
       agentId: options.agentId,
+      model: options.model,
+      provider: options.provider,
+      useCase: options.useCase,
+      ...(options.subUseCase ? { subUseCase: options.subUseCase } : {}),
       log: [start],
     };
   }
   async fetch(id: string): Promise<z.infer<typeof Run>> {
     const contents = await fsp.readFile(path.join(WorkDir, 'runs', `${id}.jsonl`), 'utf8');
-    const events = contents.split('\n')
+    // Parse with the lenient schema so legacy start events (no model/provider) load.
+    const rawEvents = contents.split('\n')
       .filter(line => line.trim() !== '')
-      .map(line => RunEvent.parse(JSON.parse(line)));
-    if (events.length === 0 || events[0].type !== 'start') {
+      .map(line => ReadRunEvent.parse(JSON.parse(line)));
+    if (rawEvents.length === 0 || rawEvents[0].type !== 'start') {
       throw new Error('Corrupt run data');
     }
+    // Backfill model/provider on the start event from current defaults if missing,
+    // then promote to the canonical strict types for callers.
+    const rawStart = rawEvents[0];
+    const defaults = (!rawStart.model || !rawStart.provider)
+      ? await getDefaultModelAndProvider()
+      : null;
+    const start: z.infer<typeof StartEvent> = {
+      ...rawStart,
+      model: rawStart.model ?? defaults!.model,
+      provider: rawStart.provider ?? defaults!.provider,
+    };
+    const events: z.infer<typeof RunEvent>[] = [start, ...rawEvents.slice(1) as z.infer<typeof RunEvent>[]];
     const title = this.extractTitle(events);
     return {
       id,
       title,
-      createdAt: events[0].ts!,
-      agentId: events[0].agentName,
+      createdAt: start.ts!,
+      agentId: start.agentName,
+      model: start.model,
+      provider: start.provider,
+      ...(start.useCase ? { useCase: start.useCase } : {}),
+      ...(start.subUseCase ? { subUseCase: start.subUseCase } : {}),
       log: events,
     };
   }
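The backfill step in `fetch` can be isolated: a start event parsed with the lenient schema may lack `model`/`provider`, and current defaults fill only the missing fields while explicit values survive untouched. A simplified sketch (types and function name are ours; the real code works with zod-inferred event types):

```typescript
// Promote a legacy start event to the strict shape: fill model/provider
// from current defaults only where the stored event omitted them.
interface LegacyStart { runId: string; model?: string; provider?: string }
interface Defaults { model: string; provider: string }

function promoteStartEvent(raw: LegacyStart, defaults: Defaults) {
  return {
    ...raw,
    model: raw.model ?? defaults.model,
    provider: raw.provider ?? defaults.provider,
  };
}
```

Note the backfill happens at read time only; the run file on disk is not rewritten, so a later change to the defaults changes what a legacy run reports.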
View file
@@ -10,11 +10,28 @@ import { IRunsLock } from "./lock.js";
 import { forceCloseAllMcpClients } from "../mcp/mcp.js";
 import { extractCommandNames } from "../application/lib/command-executor.js";
 import { addToSecurityConfig } from "../config/security.js";
+import { loadAgent } from "../agents/runtime.js";
+import { getDefaultModelAndProvider } from "../models/defaults.js";
 export async function createRun(opts: z.infer<typeof CreateRunOptions>): Promise<z.infer<typeof Run>> {
   const repo = container.resolve<IRunsRepo>('runsRepo');
   const bus = container.resolve<IBus>('bus');
-  const run = await repo.create(opts);
+  // Resolve model+provider once at creation: opts > agent declaration > defaults.
+  // Both fields are plain strings (provider is a name, looked up at runtime).
+  const agent = await loadAgent(opts.agentId);
+  const defaults = await getDefaultModelAndProvider();
+  const model = opts.model ?? agent.model ?? defaults.model;
+  const provider = opts.provider ?? agent.provider ?? defaults.provider;
+  const useCase = opts.useCase ?? "copilot_chat";
+  const run = await repo.create({
+    agentId: opts.agentId,
+    model,
+    provider,
+    useCase,
+    ...(opts.subUseCase ? { subUseCase: opts.subUseCase } : {}),
+  });
   await bus.publish(run.log[0]);
   return run;
 }
@@ -110,4 +127,4 @@ export async function fetchRun(runId: string): Promise<z.infer<typeof Run>> {
 export async function listRuns(cursor?: string): Promise<z.infer<typeof ListRunsResponse>> {
   const repo = container.resolve<IRunsRepo>('runsRepo');
   return repo.list(cursor);
 }
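`createRun` resolves in a fixed order: caller opts, then the agent file's declaration, then the global signed-in/BYOK default. That precedence is just two nullish-coalescing chains; sketched with simplified shapes (names are ours, not the codebase's):

```typescript
// Resolve model+provider at run creation: explicit caller option wins,
// then the agent's declaration, then the global default.
function resolveRunModel(
  opts: { model?: string; provider?: string },
  agent: { model?: string; provider?: string },
  defaults: { model: string; provider: string },
) {
  return {
    model: opts.model ?? agent.model ?? defaults.model,
    provider: opts.provider ?? agent.provider ?? defaults.provider,
  };
}
```

Because `??` only falls through on `null`/`undefined`, an agent can pin a model while leaving the provider to the default, and vice versa; the two fields resolve independently.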
View file
@@ -25,6 +25,13 @@ const ipcSchemas = {
       electron: z.string(),
     }),
   },
+  'analytics:bootstrap': {
+    req: z.null(),
+    res: z.object({
+      installationId: z.string(),
+      apiUrl: z.string(),
+    }),
+  },
   'workspace:getRoot': {
     req: z.null(),
     res: z.object({
View file
@ -1,7 +1,7 @@
import { z } from "zod"; import { z } from "zod";
export const LlmProvider = z.object({ export const LlmProvider = z.object({
flavor: z.enum(["openai", "anthropic", "google", "openrouter", "aigateway", "ollama", "openai-compatible"]), flavor: z.enum(["openai", "anthropic", "google", "openrouter", "aigateway", "ollama", "openai-compatible", "rowboat"]),
apiKey: z.string().optional(), apiKey: z.string().optional(),
baseURL: z.string().optional(), baseURL: z.string().optional(),
headers: z.record(z.string(), z.string()).optional(), headers: z.record(z.string(), z.string()).optional(),
@@ -11,6 +11,16 @@ export const LlmModelConfig = z.object({
provider: LlmProvider,
model: z.string(),
models: z.array(z.string()).optional(),
providers: z.record(z.string(), z.object({
apiKey: z.string().optional(),
baseURL: z.string().optional(),
headers: z.record(z.string(), z.string()).optional(),
model: z.string().optional(),
models: z.array(z.string()).optional(),
})).optional(),
// Per-category model overrides (BYOK only — signed-in users always get
// the curated gateway defaults). Read by helpers in core/models/defaults.ts.
knowledgeGraphModel: z.string().optional(),
meetingNotesModel: z.string().optional(),
trackBlockModel: z.string().optional(),
});
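One way to read the new `providers` map: a named entry can shadow the top-level `model`/`models` when that provider is selected. A hedged sketch of that lookup — `modelFor` is an invented helper for this sketch; the real resolution lives elsewhere (core/models/defaults.ts handles the per-category fields):

```typescript
// Hedged sketch: a per-provider entry shadowing the top-level model.
type ProviderOverride = { model?: string; models?: string[] };
type ModelConfig = {
  model: string;
  models?: string[];
  providers?: Record<string, ProviderOverride>;
};

// modelFor() is illustrative, not the app's actual resolution code.
function modelFor(cfg: ModelConfig, providerName: string): string {
  return cfg.providers?.[providerName]?.model ?? cfg.model;
}

const cfg: ModelConfig = {
  model: "gpt-default",
  providers: { ollama: { model: "llama3.1:8b" } },
};
console.log(modelFor(cfg, "ollama"));  // provider-specific entry applies
console.log(modelFor(cfg, "openai")); // falls back to the top-level model
```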


@@ -19,6 +19,17 @@ export const RunProcessingEndEvent = BaseRunEvent.extend({
export const StartEvent = BaseRunEvent.extend({
type: z.literal("start"),
agentName: z.string(),
model: z.string(),
provider: z.string(),
// useCase/subUseCase tag the run for analytics. Optional on read so legacy
// run files written before these fields existed still parse cleanly.
useCase: z.enum([
"copilot_chat",
"track_block",
"meeting_note",
"knowledge_sync",
]).optional(),
subUseCase: z.string().optional(),
});
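The optional `useCase`/`subUseCase` matter for backward compatibility: a start event written before analytics shipped must still be accepted on read. A plain-TypeScript sketch of both shapes (the interface name is illustrative):

```typescript
// Sketch: a legacy start event (no useCase) and a tagged one both satisfy
// the shape, because useCase/subUseCase are optional.
type UseCase = "copilot_chat" | "track_block" | "meeting_note" | "knowledge_sync";

interface StartEventShape {
  type: "start";
  agentName: string;
  model: string;
  provider: string;
  useCase?: UseCase;
  subUseCase?: string;
}

const legacy: StartEventShape = {
  type: "start", agentName: "copilot", model: "m", provider: "openai",
};
const tagged: StartEventShape = { ...legacy, useCase: "track_block" };
console.log(legacy.useCase ?? "untagged", tagged.useCase);
```

Had `useCase` been required, parsing old run files would reject their entire logs rather than just leaving the analytics tag blank.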
export const SpawnSubFlowEvent = BaseRunEvent.extend({
@@ -116,11 +127,22 @@ export const AskHumanResponsePayload = AskHumanResponseEvent.pick({
response: true,
});
export const UseCase = z.enum([
"copilot_chat",
"track_block",
"meeting_note",
"knowledge_sync",
]);
export const Run = z.object({
id: z.string(),
title: z.string().optional(),
createdAt: z.iso.datetime(),
agentId: z.string(),
model: z.string(),
provider: z.string(),
useCase: UseCase.optional(),
subUseCase: z.string().optional(),
log: z.array(RunEvent),
});
@@ -134,6 +156,10 @@ export const ListRunsResponse = z.object({
nextCursor: z.string().optional(),
});
export const CreateRunOptions = z.object({
agentId: z.string(),
model: z.string().optional(),
provider: z.string().optional(),
useCase: UseCase.optional(),
subUseCase: z.string().optional(),
});


@@ -25,6 +25,8 @@ export const TrackBlockSchema = z.object({
eventMatchCriteria: z.string().optional().describe('When set, this track participates in event-based triggering. Describe what kinds of events should consider this track for an update (e.g. "Emails about Q3 planning"). Omit to disable event triggers — the track will only run on schedule or manually.'),
active: z.boolean().default(true).describe('Set false to pause without deleting'),
schedule: TrackScheduleSchema.optional(),
model: z.string().optional().describe('ADVANCED — leave unset. Per-track LLM model override (e.g. "anthropic/claude-sonnet-4.6"). Only set when the user explicitly asked for a specific model for THIS track. The global default already picks a tuned model for tracks; overriding usually makes things worse, not better.'),
provider: z.string().optional().describe('ADVANCED — leave unset. Per-track provider name override (e.g. "openai", "anthropic"). Only set when the user explicitly asked for a specific provider for THIS track. Almost always omitted; the global default flows through correctly.'),
lastRunAt: z.string().optional().describe('Runtime-managed — never write this yourself'),
lastRunId: z.string().optional().describe('Runtime-managed — never write this yourself'),
lastRunSummary: z.string().optional().describe('Runtime-managed — never write this yourself'),
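Per the commit description, the track runner forwards the per-track fields to `createRun` only when they are set, so omitted fields let the global default flow through. A sketch of that pass-through (the `runOptionsFor` helper is hypothetical):

```typescript
// Hypothetical sketch of forwarding per-track overrides to createRun.
// track.model/track.provider are the new optional fields; when omitted,
// the options carry no model/provider and the global default wins downstream.
interface TrackBlock { name: string; model?: string; provider?: string }

function runOptionsFor(track: TrackBlock) {
  return {
    agentId: track.name,
    useCase: "track_block" as const,
    // Conditional spreads keep omitted overrides out of the object entirely.
    ...(track.model ? { model: track.model } : {}),
    ...(track.provider ? { provider: track.provider } : {}),
  };
}

console.log(runOptionsFor({ name: "q3-planning" }));
console.log(runOptionsFor({ name: "q3-planning", model: "anthropic/claude-sonnet-4.6" }));
```

The conditional spread mirrors the "Do Not Set" guidance: leaving the YAML fields out means no override key is ever sent.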

apps/x/pnpm-lock.yaml (generated)

@@ -247,6 +247,9 @@ importers:
recharts:
specifier: ^3.8.0
version: 3.8.1(@types/react@19.2.7)(react-dom@19.2.3(react@19.2.3))(react-is@16.13.1)(react@19.2.3)(redux@5.0.1)
remark-breaks:
specifier: ^4.0.0
version: 4.0.0
sonner:
specifier: ^2.0.7
version: 2.0.7(react-dom@19.2.3(react@19.2.3))(react@19.2.3)
@@ -401,6 +404,9 @@ importers:
pdf-parse:
specifier: ^2.4.5
version: 2.4.5
posthog-node:
specifier: ^4.18.0
version: 4.18.0
react:
specifier: ^19.2.3
version: 19.2.3
@@ -5808,6 +5814,9 @@ packages:
mdast-util-mdxjs-esm@2.0.1:
resolution: {integrity: sha512-EcmOpxsZ96CvlP03NghtH1EsLtr0n9Tm4lPUJUBccV9RwUOneqSycg19n5HGzCf+10LozMRSObtVr3ee1WoHtg==}
mdast-util-newline-to-break@2.0.0:
resolution: {integrity: sha512-MbgeFca0hLYIEx/2zGsszCSEJJ1JSCdiY5xQxRcLDDGa8EPvlLPupJ4DSajbMPAnC0je8jfb9TiUATnxxrHUog==}
mdast-util-phrasing@4.1.0:
resolution: {integrity: sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w==}
@@ -6465,6 +6474,10 @@ packages:
posthog-js@1.332.0:
resolution: {integrity: sha512-w3+sL+IFK4mpfFmgTW7On8cR+z34pre+SOewx+eHZQSYF9RYqXsLIhrxagWbQKkowPd4tCwUHrkS1+VHsjnPqA==}
posthog-node@4.18.0:
resolution: {integrity: sha512-XROs1h+DNatgKh/AlIlCtDxWzwrKdYDb2mOs58n4yN8BkGN9ewqeQwG5ApS4/IzwCb7HPttUkOVulkYatd2PIw==}
engines: {node: '>=15.0.0'}
postject@1.0.0-alpha.6:
resolution: {integrity: sha512-b9Eb8h2eVqNE8edvKdwqkrY6O7kAwmI8kcnBv1NScolYJbo59XUF0noFq+lxbC1yN20bmC0WBEbDC5H/7ASb0A==}
engines: {node: '>=14.0.0'}
@@ -6768,6 +6781,9 @@ packages:
rehype-raw@7.0.0:
resolution: {integrity: sha512-/aE8hCfKlQeA8LmyeyQvQF3eBiLRGNlfBJEvWH7ivp9sBqs7TNqBL5X3v157rM4IFETqDnIOO+z5M/biZbo9Ww==}
remark-breaks@4.0.0:
resolution: {integrity: sha512-IjEjJOkH4FuJvHZVIW0QCDWxcG96kCq7An/KVH2NfJe6rKZU2AsHeB3OEjPNRxi4QC34Xdx7I2KGYn6IpT7gxQ==}
remark-cjk-friendly-gfm-strikethrough@1.2.3:
resolution: {integrity: sha512-bXfMZtsaomK6ysNN/UGRIcasQAYkC10NtPmP0oOHOV8YOhA2TXmwRXCku4qOzjIFxAPfish5+XS0eIug2PzNZA==}
engines: {node: '>=16'}
@@ -14414,6 +14430,11 @@ snapshots:
transitivePeerDependencies:
- supports-color
mdast-util-newline-to-break@2.0.0:
dependencies:
'@types/mdast': 4.0.4
mdast-util-find-and-replace: 3.0.2
mdast-util-phrasing@4.1.0:
dependencies:
'@types/mdast': 4.0.4
@@ -15189,6 +15210,12 @@ snapshots:
query-selector-shadow-dom: 1.0.1
web-vitals: 4.2.4
posthog-node@4.18.0:
dependencies:
axios: 1.13.2
transitivePeerDependencies:
- debug
postject@1.0.0-alpha.6:
dependencies:
commander: 9.5.0
@@ -15608,6 +15635,12 @@ snapshots:
hast-util-raw: 9.1.0
vfile: 6.0.3
remark-breaks@4.0.0:
dependencies:
'@types/mdast': 4.0.4
mdast-util-newline-to-break: 2.0.0
unified: 11.0.5
remark-cjk-friendly-gfm-strikethrough@1.2.3(@types/mdast@4.0.4)(micromark-util-types@2.0.2)(micromark@4.0.2)(unified@11.0.5):
dependencies:
micromark-extension-cjk-friendly-gfm-strikethrough: 1.2.3(micromark-util-types@2.0.2)(micromark@4.0.2)