Compare commits


36 commits

Author SHA1 Message Date
Ramnique Singh
17afc935bf
Merge pull request #524 from rowboatlabs/dev
identify signed-in users on every app startup
2026-04-28 20:22:26 +05:30
Ramnique Singh
de176ec458 identify signed-in users on every app startup
Previously identify() only fired during the OAuth completion flow, so
existing installs (signed in before analytics shipped) and every cold
start of v0.3.4+ would emit main-process events under the anonymous
installation_id until the user happened to re-sign-in.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 20:21:37 +05:30
Ramnique Singh
0dff57e8f7
Merge pull request #523 from rowboatlabs/dev
add posthog analytics for llm usage and auth events
2026-04-28 20:10:13 +05:30
Ramnique Singh
43c1ba719f add posthog analytics for llm usage and auth events
Captures per-LLM-call token usage tagged by feature (copilot chat,
track block, meeting note, knowledge sync), plus sign-in / sign-out
and identity. Renderer and main share one PostHog identity so events
from either process resolve to the same user.

See apps/x/ANALYTICS.md for the event catalog, person properties,
use-case taxonomy, and how to add new events.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 19:53:40 +05:30
arkml
f14f3b0347
Merge pull request #520 from rowboatlabs/dev
Dev
2026-04-24 18:44:24 +05:30
Ramnique Singh
d42fb26bcc allow per-track model + provider overrides
Track block YAML gains optional `model` and `provider` fields. When set,
the track runner passes them through to `createRun` so this specific
track runs on the chosen model/provider; when unset the global default
flows through (`getTrackBlockModel()` + the resolved provider).
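The fall-through above can be sketched as a tiny resolver. This is a hedged illustration: `TrackBlock`, `getTrackBlockModel`, and `resolveDefaultProvider` are stand-in names mirroring the commit message, not the real implementations.

```typescript
// Sketch of the per-track override fall-through described above.
interface TrackBlock {
  model?: string;    // optional YAML field
  provider?: string; // optional YAML field
}

// Assumed global defaults for the sketch only.
const getTrackBlockModel = async (): Promise<string> => "claude-sonnet-4-6";
const resolveDefaultProvider = (): string => "rowboat";

async function resolveTrackRunConfig(block: TrackBlock) {
  return {
    // Per-track override wins; otherwise the global default flows through.
    model: block.model ?? (await getTrackBlockModel()),
    provider: block.provider ?? resolveDefaultProvider(),
  };
}
```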

The track skill picks up the new fields automatically via the embedded
`z.toJSONSchema(TrackBlockSchema)` and adds an explicit "Do Not Set"
section: copilot leaves them omitted unless the user named a specific
model or provider for the track. Common bad reasons ("might be faster",
"in case it matters", complex instruction) are called out so the
defaults stay the path of least resistance.

Track modal Details tab shows the values when set, in the same
conditional `<dt>/<dd>` style as the lastRun fields.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 16:58:18 +05:30
Ramnique Singh
caf00fae0c configurable kg / meeting / track-block model overrides
Bring back per-category model selection that 5c4aa772 dropped, plus add a
new track-block category. Each is a BYOK-only override on `LlmModelConfig`
(`knowledgeGraphModel`, `meetingNotesModel`, `trackBlockModel`); signed-in
users always get the curated gateway default and never hit the on-disk
config.

Three helpers in core/models/defaults.ts — `getKgModel`,
`getTrackBlockModel`, `getMeetingNotesModel` — each check `isSignedIn`
first (fast path) and fall through to `cfg.<field> ?? cfg.model` for BYOK.
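The helper shape reads roughly like this (a sketch with an assumed config shape and an assumed gateway default, not the real `defaults.ts`):

```typescript
// Sketch of one of the three category helpers described above.
interface LlmModelConfig {
  model: string;           // global BYOK model
  trackBlockModel?: string; // optional per-category override
}

const GATEWAY_DEFAULT = "claude-sonnet-4-6"; // assumed curated default

function getTrackBlockModel(cfg: LlmModelConfig, isSignedIn: boolean): string {
  // Signed-in users always get the curated gateway default (fast path)
  // and never hit the on-disk config.
  if (isSignedIn) return GATEWAY_DEFAULT;
  // BYOK: per-category override, else the global BYOK model.
  return cfg.trackBlockModel ?? cfg.model;
}
```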

The model is now picked at the invocation site rather than via runtime
agent-name branching: each top-level `createRun` for a polling KG agent
or a track-block update passes `model: await getXxxModel()`. The `model:`
declarations on the affected agent YAMLs are dropped — they were dead
code under the per-call override. Standalone (non-run) callers
`track/routing` and `summarize_meeting` use the helpers inline.

Settings dialog and the two onboarding flows surface the two new fields
("Meeting Notes Model", "Track Block Model") next to the existing
"Knowledge Graph Model"; `repo.setConfig` persists all three per-provider.

Note: the signed-in `RowboatModelSettings` panel still has its
now-defunct kg selector; that's a UI cleanup for a later pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 16:44:02 +05:30
Ramnique Singh
bdf270b7a1 convert Today.md track blocks to event-driven and batch Gmail sync events
Removes polling schedules from the up-next and calendar track blocks on
Today.md so they refresh only on calendar.synced events, and rewrites
the emails track instruction to consume a multi-thread digest payload.
Batches Gmail sync so one email.synced event covers a whole sync run
(capped at 10 threads per digest) instead of one event per thread,
which collapses Pass 1 routing calls for multi-thread syncs.
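The digest batching amounts to chunking a sync run's threads into groups of at most 10, one event per group. A minimal sketch (the function name and cap constant are illustrative):

```typescript
// One email.synced event per digest, capped at 10 threads per digest.
const MAX_THREADS_PER_DIGEST = 10;

function batchIntoDigests<T>(threads: T[], cap = MAX_THREADS_PER_DIGEST): T[][] {
  const digests: T[][] = [];
  for (let i = 0; i < threads.length; i += cap) {
    digests.push(threads.slice(i, i + cap));
  }
  return digests;
}
```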

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 11:15:56 +05:30
Arjun
0bb256879c preserve formatting in chat input text 2026-04-23 21:29:51 +05:30
Arjun
75842fa06b assistant chat ui shows the model name properly 2026-04-23 00:49:06 +05:30
Arjun
f4dbb58a77 add rowboat meeting notes to graph 2026-04-23 00:35:08 +05:30
Ramnique Singh
5c4aa77255 freeze model + provider per run at creation time
The model dropdown was broken in two ways: it wrote to ~/.rowboat/config/models.json
(the BYOK creds file, stamped with a fake `flavor: 'openrouter'` to satisfy zod
when signed in), and the runtime ignored that write entirely for signed-in users
because `streamAgent` hard-coded `gpt-5.4`. Model selection was also globally
scoped, so every chat shared one brain.

This change moves model + provider out of the global config and onto the run
itself, resolved once at runs:create and frozen for the run's lifetime.

## Resolution

`runsCore.createRun` resolves per-field, falling through:

  run.model    = opts.model    ?? agent.model    ?? defaults.model
  run.provider = opts.provider ?? agent.provider ?? defaults.provider
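As a self-contained sketch of that per-field fall-through (types are assumptions, not the real `createRun` signature):

```typescript
// Per-field resolution at runs:create, frozen for the run's lifetime.
interface ModelPair { model?: string; provider?: string; }

function resolveRunPair(
  opts: ModelPair,                 // caller override (e.g. UI dropdown)
  agent: ModelPair,                // agent YAML frontmatter
  defaults: Required<ModelPair>,   // getDefaultModelAndProvider()
): Required<ModelPair> {
  return {
    model: opts.model ?? agent.model ?? defaults.model,
    provider: opts.provider ?? agent.provider ?? defaults.provider,
  };
}
```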

A new `core/models/defaults.ts` is the only place in the codebase that branches
on signed-in state. `getDefaultModelAndProvider()` returns name strings;
`resolveProviderConfig(name)` does the name → full LlmProvider lookup at
runtime. `createProvider` learns about `flavor: 'rowboat'` so the gateway is
just another flavor.

`provider` is stored as a name (e.g. `"rowboat"`, `"openai"`), not a full
LlmProvider object. API keys never get written into the JSONL log; rotating a
key in models.json applies to existing runs without re-creation. Cost: deleting
a provider from settings breaks runs that referenced it (clear error surfaced
via `resolveProviderConfig`).

## Runtime

`streamAgent` no longer resolves anything — it reads `state.runModel` /
`state.runProvider`, looks up the provider config, instantiates. Subflows
inherit the parent run's pair, so KG / inline-task subagents run on whatever
the main run resolved to at creation. The `knowledgeGraphAgents` array,
`isKgAgent`, and the per-agent default constants are gone.

KG / inline-task / pre-built agents declare their preferred model in YAML
frontmatter (claude-haiku-4.5 / claude-sonnet-4.6) — used at resolution time
when those agents are themselves the top-level agent of a run (background
triggers, scheduled tasks, etc.).

## Standalone callers

Non-run LLM call sites (summarize_meeting, track/routing, builtin-tools
parseFile) and `agent-schedule/runner` were branching on signed-in
independently. They all route through `getDefaultModelAndProvider` +
`resolveProviderConfig` + `createProvider` now; `agent-schedule/runner`
switched from raw `runsRepo.create` to `runsCore.createRun` so resolution
applies to scheduled-agent runs too.

## UI

`chat-input-with-mentions` stops calling `models:saveConfig`. The dropdown
notifies the parent via `onSelectedModelChange` ({provider, model} as names);
App.tsx stashes selection per-tab and passes it to the next `runs:create`.
When a run already exists, the input fetches it and renders a static label —
model can't change mid-run.

## Legacy runs

A lenient zod schema in `repo.ts` (`StartEvent.extend(...optional)` plus
`RunEvent.or(LegacyStartEvent)`) parses pre-existing runs. `repo.fetch` fills
missing model/provider from current defaults and returns the strict canonical
`Run` type. No file-rewriting migration; no impact on the canonical schema in
`@x/shared`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 12:26:01 +05:30
Ramnique Singh
51f2ad6e8a
Merge pull request #517 from rowboatlabs/dev
Dev
2026-04-21 14:39:46 +05:30
Ramnique Singh
15567cd1dd let tool failures be observed by the model instead of killing the run
streamAgent executed tools with no try/catch around the call. A throw
from execTool or from a subflow agent streamed up through streamAgent,
out of trigger's inner catch (which rethrows non-abort errors), and
into the new top-level catch that the previous commit added. That
surfaces the failure — but it ends the run. One misbehaving tool took
down the whole conversation.

Wrap the tool-execution block in a try/catch. On abort, rethrow so the
existing AbortError path still fires. On any other error, convert the
exception into a tool-result payload ({ success: false, error, toolName })
and keep going. The model then sees a tool-result message saying the
tool failed with a specific message and can apologize, retry with
different arguments, pick a different tool, or explain to the user —
the normal recovery moves it already knows how to make.
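The wrap described above can be sketched like this (shapes and names are illustrative, not the real `streamAgent` internals):

```typescript
// Convert a tool throw into an observable tool-result; rethrow aborts.
interface ToolResult {
  success: boolean;
  toolName: string;
  result?: unknown;
  error?: string;
}

async function execToolObserved(
  toolName: string,
  exec: () => Promise<unknown>,
  signal?: AbortSignal,
): Promise<ToolResult> {
  try {
    return { success: true, toolName, result: await exec() };
  } catch (err) {
    // Abort must still flow through the existing AbortError path.
    if (signal?.aborted || (err instanceof Error && err.name === "AbortError")) throw err;
    // Any other failure becomes data the model can see and recover from.
    return {
      success: false,
      toolName,
      error: err instanceof Error ? err.message : String(err),
    };
  }
}
```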

No change to happy-path tool execution, no change to abort handling,
no change to subflow agent semantics (subflows that themselves error
are treated identically to regular tool errors at the call site).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:38:19 +05:30
Ramnique Singh
c81d3cb27b surface silent runtime failures as error events
AgentRuntime.trigger() wrapped its body in try/finally with no outer
catch. An inner catch around the streamAgent for-await only handled
AbortError and rethrew everything else. Call sites fire-and-forget
trigger (runs.ts:26,60,72), so any thrown error became an unhandled
promise rejection. The finally still ran and published
run-processing-end, but nothing told the renderer why — the chat
showed the spinner, then an empty assistant bubble.

Provider misconfig, invalid API keys, unknown model ids, streamText
setup throws, runsRepo.fetch or loadAgent failing, and provider
auth/rate-limit rejections on the first chunk all hit this path on a
first message. All invisible.

Add a top-level catch that formats the error to a string and emits a
{type: "error"} RunEvent via the existing runsRepo/bus path. The
renderer already renders those as a chat bubble plus toast
(App.tsx:2069) — no UI work needed.
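Structurally, the fix is a catch around the fire-and-forget body that turns the rejection into a published event before the existing finally runs. A sketch (the event bus is a stub; the real path goes through runsRepo):

```typescript
// Top-level catch: format the error and emit {type: "error"} before
// the run-processing-end that the finally already published.
type RunEvent = { type: "error"; error: string } | { type: "run-processing-end" };

function runTrigger(
  body: () => Promise<void>,
  publish: (e: RunEvent) => void,
): Promise<void> {
  return body()
    .catch((err) => {
      // Previously an unhandled promise rejection; now surfaced.
      publish({
        type: "error",
        error: err instanceof Error ? err.message : String(err),
      });
    })
    .finally(() => publish({ type: "run-processing-end" }));
}
```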

No changes to the abort path: user-initiated stops still flow through
the existing inner catch and the signal.aborted branch that emits
run-stopped.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:36:00 +05:30
Ramnique Singh
32b6b2f1c0 Merge branch 'main' into dev 2026-04-21 13:36:58 +05:30
tusharmagar
0f051ea467 fix: duplicate navigation button 2026-04-21 13:02:44 +05:30
Ramnique Singh
7ad1a91ea8
Merge pull request #515 from rowboatlabs/dev
Dev changes
2026-04-21 11:12:40 +05:30
Ramnique Singh
ae296c7723 serialize knowledge file writes behind a per-path mutex
Concurrent track runs on the same note were corrupting the file. In a
fresh workspace, four tracks fired on cron at 05:09:17Z (all failed on
AI_LoadAPIKeyError, but each still wrote lastRunAt/lastRunId before the
agent ran) and three more fired at 05:09:32Z. The resulting Today.md
ended with stray fragments "\n>\nes-->\n-->" — tail pieces of
<!--/track-target:priorities--> that a mis-aimed splice had truncated —
and the priorities YAML lost its lastRunId entirely.

Two compounding issues in knowledge/track/fileops.ts:

1. updateTrackBlock read the file twice: once via fetch() to resolve
   fenceStart/fenceEnd, and again via fs.readFile to get the bytes to
   splice. If another writer landed between the reads, the line indices
   from read #1 pointed into unrelated content in read #2, so the
   splice replaced the wrong range and left tag fragments behind.

2. None of the mutators (updateContent, updateTrackBlock,
   replaceTrackBlockYaml, deleteTrackBlock) held any lock, so
   concurrent read-modify-writes clobbered each other's updates. The
   missing lastRunId was exactly that: set by one run, overwritten by
   another run's stale snapshot.

The fix: introduce withFileLock(absPath, fn) in knowledge/file-lock.ts,
a per-path Promise-chain mutex modeled on the commitLock pattern in
knowledge/version_history.ts. Callers append onto that file's chain
and await — wait-queue semantics, FIFO, no timeout. The map self-cleans
when a file's chain goes idle so it stays bounded across a long-running
process.
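A promise-chain mutex of that shape can be sketched in a few lines (an illustration of the pattern, not the real `file-lock.ts`):

```typescript
// Per-path FIFO mutex: callers append onto the file's promise chain.
const chains = new Map<string, Promise<unknown>>();

function withFileLock<T>(absPath: string, fn: () => Promise<T>): Promise<T> {
  const tail = chains.get(absPath) ?? Promise.resolve();
  // Swallow the predecessor's error so one failed writer
  // doesn't poison the rest of the queue.
  const next = tail.catch(() => {}).then(fn);
  chains.set(absPath, next);
  // Self-clean when the chain goes idle so the map stays bounded.
  next
    .finally(() => {
      if (chains.get(absPath) === next) chains.delete(absPath);
    })
    .catch(() => {});
  return next;
}
```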

Wrap all four fileops mutators in it, and also wrap workspace.writeFile
(which can touch the same files from the agent's tool surface and
previously raced with fileops). Both callers key on the resolved
absolute path so they share the same lock for the same file.

Reads (fetchAll, fetch, fetchYaml) stay lock-free — fs.writeFile on
files this size is atomic enough that readers see either pre- or
post-state, never corruption, and stale reads are not a correctness
issue for the callers that use them (scheduler, event dispatcher).

The debounced version-history commit in workspace.writeFile stays
outside the lock; it's deferred work that shouldn't hold up the write.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 11:11:33 +05:30
Ramnique Singh
fbbaeea1df refactor ensure-daily-note 2026-04-21 11:06:09 +05:30
Ramnique Singh
a86f555cbb refresh rowboat access token on every gateway request
Wire a custom fetch into the OpenRouter gateway provider so each outbound
request resolves a fresh access token, instead of baking one token into
the provider at turn start. Add a 60s expiry margin and serialize
concurrent refreshes behind a single in-flight promise.
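The refresh discipline can be sketched as follows (token shape and helper names are assumptions, not the real gateway code):

```typescript
// Resolve a fresh token per request: 60s expiry margin, and one
// shared in-flight promise so concurrent callers don't stampede.
interface Token { value: string; expiresAt: number; }

const EXPIRY_MARGIN_MS = 60_000;
let cached: Token | undefined;
let inFlight: Promise<Token> | undefined;

async function getFreshToken(refresh: () => Promise<Token>): Promise<string> {
  if (cached && cached.expiresAt - Date.now() > EXPIRY_MARGIN_MS) {
    return cached.value; // still valid with margin to spare
  }
  // Serialize concurrent refreshes behind a single in-flight promise.
  inFlight ??= refresh().finally(() => { inFlight = undefined; });
  cached = await inFlight;
  return cached.value;
}
```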

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 10:13:40 +05:30
Ramnique Singh
a80ef4d320 Revert "add suggested topics using track blocks"
This reverts commit 93054066fa.
2026-04-20 22:21:58 +05:30
Ramnique Singh
dc3e25c98b add today.md using track blocks 2026-04-20 17:20:30 +05:30
Ramnique Singh
93054066fa add suggested topics using track blocks 2026-04-20 17:20:21 +05:30
Ramnique Singh
4c46bf4c25 serialise prompt blocks to markdown 2026-04-20 17:19:36 +05:30
Ramnique Singh
e8a7cd59c1 fix \n repetition in markdown editor 2026-04-20 17:19:21 +05:30
Ramnique Singh
1306b7f442 improve prompting around output blocks 2026-04-20 16:22:53 +05:30
Ramnique Singh
56edc5a730 clean up invisible chars in yaml parse 2026-04-20 16:22:31 +05:30
Ramnique Singh
5d65616cfb add prompt block 2026-04-20 14:42:13 +05:30
Ramnique Singh
9f776ce526 improve track run prompts 2026-04-20 14:30:50 +05:30
Ramnique Singh
8e0a3e2991 render tables in markdown 2026-04-20 10:43:27 +05:30
Ramnique Singh
0d71ad33f5 improve track skill re: yaml strings 2026-04-20 10:27:13 +05:30
arkml
2133d7226f
Merge pull request #490 from rowboatlabs/dev
Dev
2026-04-13 22:21:12 +05:30
Ramnique Singh
c5e984e4c4
Merge pull request #481 from rowboatlabs/dev
Dev
2026-04-10 01:23:05 +05:30
Ramnique Singh
41bbec6296
Merge pull request #480 from rowboatlabs/dev
feat(oauth): switch Google OAuth from PKCE to authorization code flow…
2026-04-10 00:44:37 +05:30
arkml
70ca18a7fa
Merge pull request #479 from rowboatlabs/dev
Dev
2026-04-10 00:00:32 +05:30
68 changed files with 2467 additions and 548 deletions


@@ -109,6 +109,7 @@ Long-form docs for specific features. Read the relevant file before making chang
 | Feature | Doc |
 |---------|-----|
 | Track Blocks — auto-updating note content (scheduled / event-driven / manual), Copilot skill, prompts catalog | `apps/x/TRACKS.md` |
+| Analytics — PostHog event catalog, person properties, use-case taxonomy, how to add a new event | `apps/x/ANALYTICS.md` |
 ## Common Tasks

apps/x/ANALYTICS.md (new file, 146 lines)

@@ -0,0 +1,146 @@
# Analytics
> PostHog instrumentation for `apps/x`. We capture LLM token usage (broken down by feature) and identity/auth events. Renderer (`posthog-js`) and main (`posthog-node`) share one stable distinct_id and one identified user, so events from either process resolve to the same person.
## Identity model
- **Anonymous distinct_id** = `installationId` from `~/.rowboat/config/installation.json` (auto-generated on first run; see `packages/core/src/analytics/installation.ts`).
- Renderer fetches it from main on startup via the `analytics:bootstrap` IPC channel and passes it as PostHog's `bootstrap.distinctID`. Main uses it directly in `posthog-node`.
- **On rowboat sign-in**: `posthog.identify(rowboatUserId)` runs in **both** processes.
- Main does it from `apps/main/src/oauth-handler.ts:285` (after `getBillingInfo()` resolves) — this is the load-bearing call, since main always runs.
- Renderer mirrors via `apps/renderer/src/hooks/useAnalyticsIdentity.ts` listening on the `oauth:didConnect` IPC event.
- Main also calls `alias()` so events emitted under the anonymous installation_id are linked to the identified user retroactively.
- **On every app startup**: main re-identifies if rowboat tokens exist (`packages/core/src/analytics/identify.ts`, called from `apps/main/src/main.ts` whenReady). Idempotent — PostHog merges person properties on duplicate identifies. This catches users who installed before analytics existed, and refreshes person properties (plan/status) on every launch.
- **On rowboat sign-out**: `posthog.reset()` in both processes; future events resolve to the installation_id again.
- **`email`** is set on `identify` from main only (sourced from `/v1/me`). Person properties are server-side, so the renderer's events resolve to the same record without redundantly setting it.
## Event catalog
### `llm_usage`
Emitted whenever ai-sdk returns token usage (one event per LLM call, not per run).
| Property | Type | Notes |
|---|---|---|
| `use_case` | enum | `copilot_chat` / `track_block` / `meeting_note` / `knowledge_sync` |
| `sub_use_case` | string? | Refines `use_case` — see taxonomy table below |
| `agent_name` | string? | Present when the call goes through an agent run (`createRun`); omitted for direct `generateText`/`generateObject` |
| `model` | string | e.g. `claude-sonnet-4-6` |
| `provider` | string | `rowboat` = cloud LLM gateway; otherwise the BYOK provider (`openai`, `anthropic`, `ollama`, etc.) |
| `input_tokens` | number | |
| `output_tokens` | number | |
| `total_tokens` | number | |
| `cached_input_tokens` | number? | When the provider reports it |
| `reasoning_tokens` | number? | When the provider reports it |
#### Use-case taxonomy
Every `llm_usage` emit point in the codebase:
| `use_case` | `sub_use_case` | `agent_name`? | Where | File:line |
|---|---|---|---|---|
| `copilot_chat` | (none) | yes | User chat in renderer (default for any `createRun` without `useCase`) | `packages/core/src/agents/runtime.ts:1313` (finish-step in `streamLlm`) |
| `copilot_chat` | `scheduled` | yes | Background scheduled agent runner | `packages/core/src/agent-schedule/runner.ts:167` |
| `copilot_chat` | `file_parse` | inherits | `parseFile` builtin tool inside any chat | `packages/core/src/application/lib/builtin-tools.ts:770` |
| `track_block` | `routing` | no | Pass 1 routing classifier (`generateObject`) | `packages/core/src/knowledge/track/routing.ts:104` |
| `track_block` | `run` | yes | Pass 2 track block execution | `packages/core/src/knowledge/track/runner.ts:109` (createRun) |
| `meeting_note` | (none) | no | Meeting transcript summarizer (`generateText`) | `packages/core/src/knowledge/summarize_meeting.ts:161` |
| `knowledge_sync` | `agent_notes` | yes | Agent notes learning service | `packages/core/src/knowledge/agent_notes.ts:309` (createRun) |
| `knowledge_sync` | `tag_notes` | yes | Note tagging | `packages/core/src/knowledge/tag_notes.ts:86` (createRun) |
| `knowledge_sync` | `build_graph` | yes | Knowledge graph note creation | `packages/core/src/knowledge/build_graph.ts:253` (createRun) |
| `knowledge_sync` | `label_emails` | yes | Email labeling | `packages/core/src/knowledge/label_emails.ts:73` (createRun) |
| `knowledge_sync` | `inline_task_run` | yes | Inline `@rowboat` task execution (two call sites) | `packages/core/src/knowledge/inline_tasks.ts:471, 552` (createRun) |
| `knowledge_sync` | `inline_task_classify` | no | Inline task scheduling classifier (`generateText`) | `packages/core/src/knowledge/inline_tasks.ts:673` |
| `knowledge_sync` | `pre_built` | yes | Pre-built scheduled agents | `packages/core/src/pre_built/runner.ts:43` (createRun) |
`testModelConnection` in `packages/core/src/models/models.ts` is **not** instrumented (diagnostic only — would skew per-model counts).
### `user_signed_in`
Emitted when rowboat OAuth completes. Properties: `plan`, `status` (subscription state from `/v1/me`).
Emitted from **both** processes:
- Main (`apps/main/src/oauth-handler.ts:290`) — always fires; load-bearing.
- Renderer (`apps/renderer/src/hooks/useAnalyticsIdentity.ts:75`) — fires only when the renderer is open. Same distinct_id, so dedup is automatic in PostHog dashboards.
### `user_signed_out`
Emitted on rowboat disconnect. No properties. Followed immediately by `posthog.reset()`.
Emit points: `apps/main/src/oauth-handler.ts:369` and `apps/renderer/src/hooks/useAnalyticsIdentity.ts:82`.
### Other events (pre-existing, not added by the LLM-usage work)
All in `apps/renderer/src/lib/analytics.ts`:
- `chat_session_created` — `{ run_id }`
- `chat_message_sent` — `{ voice_input, voice_output, search_enabled }`
- `oauth_connected` / `oauth_disconnected` — `{ provider }`
- `voice_input_started` — no properties
- `search_executed` — `{ types: string[] }`
- `note_exported` — `{ format }`
## Person properties
Persistent across sessions for the same user. Set via `posthog.people.set` or as the `properties` arg to `identify`.
| Property | Set by | Notes |
|---|---|---|
| `email` | main on identify | From `/v1/me`; powers PostHog cohort match + integrations |
| `plan`, `status` | main on identify | Subscription state |
| `api_url` | both processes (init + identify) | Distinguishes prod / staging / custom — assign meaning in PostHog dashboard. `https://api.x.rowboatlabs.com` = production |
| `signed_in` | renderer | `true` while rowboat OAuth is connected |
| `{provider}_connected` | renderer | One of `gmail`, `calendar`, `slack`, `rowboat` |
| `total_notes` | renderer (init) | Workspace size signal |
| `has_used_search`, `has_used_voice` | renderer | One-shot first-use flags |
## How to add a new event
1. **Naming**: `snake_case`, `[object]_[verb]` shape (e.g. `note_exported`, not `exportedNote`). Matches PostHog convention.
2. **Pick the right helper**:
- LLM token usage → `captureLlmUsage()` from `@x/core/dist/analytics/usage.js`. Always include `useCase`; add `subUseCase` if it refines an existing top-level case.
- Anything else from main → `capture()` from `@x/core/dist/analytics/posthog.js`.
- Anything else from renderer → add a typed wrapper to `apps/renderer/src/lib/analytics.ts` and call it from the UI code (don't call `posthog.capture()` directly from components).
3. **If it's a new LLM call site**:
- Goes through `createRun`? Pass `useCase` (and optionally `subUseCase`) to the create call. The runtime auto-emits at every `finish-step` — no further code needed.
- Direct `generateText` / `generateObject`? Call `captureLlmUsage` after the call with `model`, `provider`, `usage` from the result.
- Inside a builtin tool? Call `getCurrentUseCase()` from `analytics/use_case.ts` first — the parent run's tag is propagated via `AsyncLocalStorage`. Use `ctx?.useCase ?? 'copilot_chat'` as fallback.
4. **Update this file in the same PR.** That's the contract — without it, dashboards and downstream consumers drift.
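For the direct-call branch of step 3, the event's property mapping looks roughly like this. The real `captureLlmUsage` lives in `@x/core`; the stub below only mirrors the event properties documented above, as a shape reference, not the actual helper.

```typescript
// Hypothetical stand-in for captureLlmUsage's property mapping.
interface Usage { inputTokens: number; outputTokens: number; totalTokens: number; }
interface LlmUsageEvent {
  use_case: string;
  sub_use_case?: string;
  model: string;
  provider: string;
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
}

function buildLlmUsageEvent(e: {
  useCase: string; subUseCase?: string; model: string; provider: string; usage: Usage;
}): LlmUsageEvent {
  return {
    use_case: e.useCase,
    ...(e.subUseCase ? { sub_use_case: e.subUseCase } : {}),
    model: e.model,
    provider: e.provider,
    input_tokens: e.usage.inputTokens,
    output_tokens: e.usage.outputTokens,
    total_tokens: e.usage.totalTokens,
  };
}
```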
## How to add a new use-case sub-case
- **New `sub_use_case` under an existing top-level case**: just pick a string and add a row to the taxonomy table above. No code changes beyond the call site.
- **New top-level `use_case`**: edit the `UseCase` enum in `packages/shared/src/runs.ts` and the matching `UseCase` type in `packages/core/src/analytics/use_case.ts`. Then update this doc.
## Configuration
PostHog credentials live in two env vars (also baked into the binary at packaging time — never set at runtime in distributed builds):
- `VITE_PUBLIC_POSTHOG_KEY` — project API key (e.g. `phc_xxx`). Public-facing — safe to commit if you'd rather hardcode.
- `VITE_PUBLIC_POSTHOG_HOST` — e.g. `https://us.i.posthog.com`. Defaults to US cloud if unset.
Where they're consumed:
- **Renderer** (Vite): `import.meta.env.VITE_PUBLIC_POSTHOG_*` — inlined at build time.
- **Main** (esbuild via `apps/main/bundle.mjs`): inlined into `main.cjs` at packaging time using esbuild `define`. In dev (`npm run dev`), main reads them from `process.env` at runtime.
For GitHub Actions / packaged builds: set both as workflow env vars (from secrets) on the step that runs `npm run package` or `npm run make`. They'll be baked in.
If unset, analytics no-op silently — you'll see `[Analytics] POSTHOG_KEY not set; analytics disabled` in main-process logs.
`installationId`: stored in `~/.rowboat/config/installation.json`, generated on first run.
## File map
| File | Purpose |
|---|---|
| `packages/core/src/analytics/installation.ts` | Stable per-install distinct_id |
| `packages/core/src/analytics/posthog.ts` | Main-process client (`capture`, `identify`, `reset`, `shutdown`) |
| `packages/core/src/analytics/usage.ts` | `captureLlmUsage()` helper |
| `packages/core/src/analytics/use_case.ts` | `AsyncLocalStorage` for tool-internal LLM call inheritance |
| `apps/renderer/src/lib/analytics.ts` | Renderer event wrappers |
| `apps/renderer/src/hooks/useAnalyticsIdentity.ts` | Renderer identify/reset on OAuth events |
| `apps/main/src/oauth-handler.ts` | Main-side identify/reset/sign-in/sign-out events |
| `apps/main/src/main.ts` | `before-quit` hook flushes queued events |
| `packages/shared/src/ipc.ts` | `analytics:bootstrap` IPC channel definition |
| `apps/main/src/ipc.ts` | `analytics:bootstrap` handler + forwards `userId` on `oauth:didConnect` |
| `apps/main/bundle.mjs` | Bakes `POSTHOG_KEY`/`POSTHOG_HOST` into packaged `main.cjs` |

apps/main/bundle.mjs

@@ -31,6 +31,11 @@ await esbuild.build({
   // Replace import.meta.url directly with our polyfill variable
   define: {
     'import.meta.url': '__import_meta_url',
+    // Inject PostHog credentials at build time. Reuse the renderer's
+    // VITE_PUBLIC_* envs so packaging only needs one set of values.
+    // Empty strings disable analytics gracefully.
+    'process.env.POSTHOG_KEY': JSON.stringify(process.env.VITE_PUBLIC_POSTHOG_KEY ?? ''),
+    'process.env.POSTHOG_HOST': JSON.stringify(process.env.VITE_PUBLIC_POSTHOG_HOST ?? 'https://us.i.posthog.com'),
   },
 });

apps/main/src/ipc.ts

@@ -46,6 +46,8 @@ import { getAccessToken } from '@x/core/dist/auth/tokens.js';
 import { getRowboatConfig } from '@x/core/dist/config/rowboat.js';
 import { triggerTrackUpdate } from '@x/core/dist/knowledge/track/runner.js';
 import { trackBus } from '@x/core/dist/knowledge/track/bus.js';
+import { getInstallationId } from '@x/core/dist/analytics/installation.js';
+import { API_URL } from '@x/core/dist/config/env.js';
 import {
   fetchYaml,
   updateTrackBlock,
@@ -342,7 +344,7 @@ function emitServiceEvent(event: z.infer<typeof ServiceEvent>): void {
   }
 }
-export function emitOAuthEvent(event: { provider: string; success: boolean; error?: string }): void {
+export function emitOAuthEvent(event: { provider: string; success: boolean; error?: string; userId?: string }): void {
   const windows = BrowserWindow.getAllWindows();
   for (const win of windows) {
     if (!win.isDestroyed() && win.webContents) {
@@ -415,6 +417,12 @@ export function setupIpcHandlers() {
     // args is null for this channel (no request payload)
     return getVersions();
   },
+  'analytics:bootstrap': async () => {
+    return {
+      installationId: getInstallationId(),
+      apiUrl: API_URL,
+    };
+  },
   'workspace:getRoot': async () => {
     return workspace.getRoot();
   },

apps/main/src/main.ts

@@ -26,6 +26,8 @@ import { init as initAgentNotes } from "@x/core/dist/knowledge/agent_notes.js";
 import { init as initTrackScheduler } from "@x/core/dist/knowledge/track/scheduler.js";
 import { init as initTrackEventProcessor } from "@x/core/dist/knowledge/track/events.js";
 import { init as initLocalSites, shutdown as shutdownLocalSites } from "@x/core/dist/local-sites/server.js";
+import { shutdown as shutdownAnalytics } from "@x/core/dist/analytics/posthog.js";
+import { identifyIfSignedIn } from "@x/core/dist/analytics/identify.js";
 import { initConfigs } from "@x/core/dist/config/initConfigs.js";
 import started from "electron-squirrel-startup";
@@ -230,6 +232,13 @@ app.whenReady().then(async () => {
   // Initialize all config files before UI can access them
   await initConfigs();
+  // PostHog identify() is idempotent — call it on every startup so existing
+  // signed-in installs (and every cold start of v0.3.4+) get re-identified.
+  // Otherwise main-process events stay anonymous until the user re-signs-in.
+  identifyIfSignedIn().catch((error) => {
+    console.error('[Analytics] Failed to identify on startup:', error);
+  });
   registerBrowserControlService(new ElectronBrowserControlService());
   setupIpcHandlers();
@@ -318,4 +327,7 @@ app.on("before-quit", () => {
   shutdownLocalSites().catch((error) => {
     console.error('[LocalSites] Failed to shut down cleanly:', error);
   });
+  shutdownAnalytics().catch((error) => {
+    console.error('[Analytics] Failed to flush on quit:', error);
+  });
 });

apps/main/src/oauth-handler.ts

@@ -12,6 +12,7 @@ import { triggerSync as triggerCalendarSync } from '@x/core/dist/knowledge/sync_
 import { triggerSync as triggerFirefliesSync } from '@x/core/dist/knowledge/sync_fireflies.js';
 import { emitOAuthEvent } from './ipc.js';
 import { getBillingInfo } from '@x/core/dist/billing/billing.js';
+import { capture as analyticsCapture, identify as analyticsIdentify, reset as analyticsReset } from '@x/core/dist/analytics/posthog.js';

 const REDIRECT_URI = 'http://localhost:8080/oauth/callback';

@@ -275,16 +276,33 @@ export async function connectProvider(provider: string, credentials?: { clientId
     // For Rowboat sign-in, ensure user + Stripe customer exist before
     // notifying the renderer. Without this, parallel API calls from
     // multiple renderer hooks race to create the user, causing duplicates.
+    let signedInUserId: string | undefined;
     if (provider === 'rowboat') {
       try {
-        await getBillingInfo();
+        const billing = await getBillingInfo();
+        if (billing.userId) {
+          signedInUserId = billing.userId;
+          analyticsIdentify(billing.userId, {
+            ...(billing.userEmail ? { email: billing.userEmail } : {}),
+            plan: billing.subscriptionPlan,
+            status: billing.subscriptionStatus,
+          });
+          analyticsCapture('user_signed_in', {
+            plan: billing.subscriptionPlan,
+            status: billing.subscriptionStatus,
+          });
+        }
       } catch (meError) {
         console.error('[OAuth] Failed to initialize user via /v1/me:', meError);
       }
     }

     // Emit success event to renderer
-    emitOAuthEvent({ provider, success: true });
+    emitOAuthEvent({
+      provider,
+      success: true,
+      ...(signedInUserId ? { userId: signedInUserId } : {}),
+    });
   } catch (error) {
     console.error('OAuth token exchange failed:', error);
     // Log cause chain for debugging (e.g. OAUTH_INVALID_RESPONSE -> OperationProcessingError)

@@ -347,6 +365,10 @@ export async function disconnectProvider(provider: string): Promise<{ success: b
   try {
     const oauthRepo = getOAuthRepo();
     await oauthRepo.delete(provider);
+    if (provider === 'rowboat') {
+      analyticsCapture('user_signed_out');
+      analyticsReset();
+    }
     // Notify renderer so sidebar, voice, and billing re-check state
     emitOAuthEvent({ provider, success: false });
     return { success: true };
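The connect/disconnect hunks above follow a standard analytics identity pattern: identify plus capture on sign-in, capture plus reset on sign-out, so events before sign-in (and after sign-out) fall under the anonymous installation id. A self-contained sketch with an in-memory stand-in for the PostHog client; the `Analytics` class and event log here are illustrative, not the real `@x/core/dist/analytics/posthog.js` module:

```typescript
// In-memory stand-in for a PostHog-style client (illustrative only).
type CapturedEvent = { distinctId: string; name: string; props?: Record<string, unknown> }

class Analytics {
  private distinctId = 'anon-installation-id' // identity before sign-in
  readonly events: CapturedEvent[] = []

  identify(userId: string, props?: Record<string, unknown>) {
    this.distinctId = userId
    this.events.push({ distinctId: userId, name: '$identify', props })
  }

  capture(name: string, props?: Record<string, unknown>) {
    this.events.push({ distinctId: this.distinctId, name, props })
  }

  reset() {
    this.distinctId = 'anon-installation-id'
  }
}

const analytics = new Analytics()

// Mirrors the connectProvider hunk: identify first, then capture the sign-in.
function onSignIn(billing: { userId?: string; userEmail?: string; subscriptionPlan?: string }) {
  if (!billing.userId) return
  analytics.identify(billing.userId, {
    ...(billing.userEmail ? { email: billing.userEmail } : {}),
    plan: billing.subscriptionPlan,
  })
  analytics.capture('user_signed_in', { plan: billing.subscriptionPlan })
}

// Mirrors the disconnectProvider hunk: capture while still identified, then reset.
function onSignOut() {
  analytics.capture('user_signed_out')
  analytics.reset()
}

onSignIn({ userId: 'user-42', userEmail: 'a@b.co', subscriptionPlan: 'pro' })
analytics.capture('llm_call') // attributed to user-42
onSignOut()
analytics.capture('llm_call') // attributed to the anonymous installation id again
```

The ordering matters: `user_signed_out` is captured before `reset()` so the event still resolves to the signed-in user.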


@@ -28,6 +28,7 @@
     "@tiptap/extension-image": "^3.16.0",
     "@tiptap/extension-link": "^3.15.3",
     "@tiptap/extension-placeholder": "^3.15.3",
+    "@tiptap/extension-table": "^3.22.4",
     "@tiptap/extension-task-item": "^3.15.3",
     "@tiptap/extension-task-list": "^3.15.3",
     "@tiptap/pm": "^3.15.3",
@@ -48,6 +49,7 @@
     "react": "^19.2.0",
     "react-dom": "^19.2.0",
     "recharts": "^3.8.0",
+    "remark-breaks": "^4.0.0",
     "sonner": "^2.0.7",
     "streamdown": "^1.6.10",
     "tailwind-merge": "^3.4.0",


@@ -62,6 +62,8 @@ import { BrowserPane } from '@/components/browser-pane/BrowserPane'
 import { VersionHistoryPanel } from '@/components/version-history-panel'
 import { FileCardProvider } from '@/contexts/file-card-context'
 import { MarkdownPreOverride } from '@/components/ai-elements/markdown-code-override'
+import { defaultRemarkPlugins } from 'streamdown'
+import remarkBreaks from 'remark-breaks'
 import { TabBar, type ChatTab, type FileTab } from '@/components/tab-bar'
 import {
   type ChatMessage,
@@ -104,6 +106,11 @@ interface TreeNode extends DirEntry {

 const streamdownComponents = { pre: MarkdownPreOverride }

+// Render user messages with markdown so bullets, bold, links, etc. survive the
+// round-trip from the input textarea. `remarkBreaks` turns single newlines
+// into <br> so typed line breaks are preserved without requiring blank lines.
+const userMessageRemarkPlugins = [...Object.values(defaultRemarkPlugins), remarkBreaks]
+
 function SmoothStreamingMessage({ text, components }: { text: string; components: typeof streamdownComponents }) {
   const smoothText = useSmoothedText(text)
   return <MessageResponse components={components}>{smoothText}</MessageResponse>
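For context on the `remarkBreaks` comment above: standard markdown treats a single newline inside a paragraph as a soft break rendered as a space, while remark-breaks renders it as `<br>`. A tiny string-level illustration of that difference (not the real plugin, which rewrites break nodes in the mdast tree):

```typescript
// Illustrates the observable difference only; remark-breaks itself operates
// on the parsed markdown AST, not on raw strings.
function renderSoftBreaks(paragraph: string, breaks: boolean): string {
  // Blank lines split paragraphs either way; only single newlines differ.
  return paragraph.replace(/\n/g, breaks ? '<br>' : ' ')
}

const typed = 'line one\nline two'
renderSoftBreaks(typed, false) // default markdown: "line one line two"
renderSoftBreaks(typed, true)  // with remark-breaks: "line one<br>line two"
```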
@@ -127,8 +134,8 @@ const TITLEBAR_BUTTON_PX = 32
 const TITLEBAR_BUTTON_GAP_PX = 4
 const TITLEBAR_HEADER_GAP_PX = 8
 const TITLEBAR_TOGGLE_MARGIN_LEFT_PX = 12
-const TITLEBAR_BUTTONS_COLLAPSED = 4
-const TITLEBAR_BUTTON_GAPS_COLLAPSED = 3
+const TITLEBAR_BUTTONS_COLLAPSED = 1
+const TITLEBAR_BUTTON_GAPS_COLLAPSED = 0
 const GRAPH_TAB_PATH = '__rowboat_graph_view__'
 const SUGGESTED_TOPICS_TAB_PATH = '__rowboat_suggested_topics__'
 const BASES_DEFAULT_TAB_PATH = '__rowboat_bases_default__'
@@ -506,22 +513,13 @@ function viewStatesEqual(a: ViewState, b: ViewState): boolean {
   return true // both graph
 }

-/** Sidebar toggle + utility buttons (fixed position, top-left) */
+/** Sidebar toggle (fixed position, top-left) */
 function FixedSidebarToggle({
-  onNavigateBack,
-  onNavigateForward,
-  canNavigateBack,
-  canNavigateForward,
   leftInsetPx,
 }: {
-  onNavigateBack: () => void
-  onNavigateForward: () => void
-  canNavigateBack: boolean
-  canNavigateForward: boolean
   leftInsetPx: number
 }) {
-  const { toggleSidebar, state } = useSidebar()
-  const isCollapsed = state === "collapsed"
+  const { toggleSidebar } = useSidebar()
   return (
     <div className="fixed left-0 top-0 z-50 flex h-10 items-center" style={{ WebkitAppRegion: 'no-drag' } as React.CSSProperties}>
       <div aria-hidden="true" className="h-10 shrink-0" style={{ width: leftInsetPx }} />
@@ -535,30 +533,6 @@ function FixedSidebarToggle({
       >
         <PanelLeftIcon className="size-5" />
       </button>
-      {/* Back / Forward navigation */}
-      {isCollapsed && (
-        <>
-          <button
-            type="button"
-            onClick={onNavigateBack}
-            disabled={!canNavigateBack}
-            className="flex h-8 w-8 items-center justify-center rounded-md text-muted-foreground hover:bg-accent hover:text-foreground transition-colors disabled:opacity-30 disabled:pointer-events-none"
-            style={{ marginLeft: TITLEBAR_BUTTON_GAP_PX }}
-            aria-label="Go back"
-          >
-            <ChevronLeftIcon className="size-5" />
-          </button>
-          <button
-            type="button"
-            onClick={onNavigateForward}
-            disabled={!canNavigateForward}
-            className="flex h-8 w-8 items-center justify-center rounded-md text-muted-foreground hover:bg-accent hover:text-foreground transition-colors disabled:opacity-30 disabled:pointer-events-none"
-            aria-label="Go forward"
-          >
-            <ChevronRightIcon className="size-5" />
-          </button>
-        </>
-      )}
     </div>
   )
 }
@@ -850,6 +824,7 @@ function App() {
   const chatTabIdCounterRef = useRef(0)
   const newChatTabId = () => `chat-tab-${++chatTabIdCounterRef.current}`
   const chatDraftsRef = useRef(new Map<string, string>())
+  const selectedModelByTabRef = useRef(new Map<string, { provider: string; model: string }>())
   const chatScrollTopByTabRef = useRef(new Map<string, number>())
   const [toolOpenByTab, setToolOpenByTab] = useState<Record<string, Record<string, boolean>>>({})
   const [chatViewportAnchorByTab, setChatViewportAnchorByTab] = useState<Record<string, ChatViewportAnchorState>>({})
@@ -2198,8 +2173,10 @@ function App() {
     let isNewRun = false
     let newRunCreatedAt: string | null = null
     if (!currentRunId) {
+      const selected = selectedModelByTabRef.current.get(submitTabId)
       const run = await window.ipc.invoke('runs:create', {
         agentId,
+        ...(selected ? { model: selected.model, provider: selected.provider } : {}),
       })
       currentRunId = run.id
       newRunCreatedAt = run.createdAt
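The conditional spread in the hunk above keeps the override keys entirely absent from the payload when no model was picked, rather than sending `model: undefined`, so the receiving side's defaulting logic (and JSON serialization) sees no override at all. A small sketch of the idiom; the payload shape here is illustrative, not the real `runs:create` contract:

```typescript
// `...(selected ? { ... } : {})` spreads nothing when selected is undefined,
// so the keys are truly absent, not present with an undefined value.
interface SelectedModel { provider: string; model: string }

function buildCreateRunPayload(agentId: string, selected?: SelectedModel) {
  return {
    agentId,
    ...(selected ? { model: selected.model, provider: selected.provider } : {}),
  }
}

const withOverride = buildCreateRunPayload('agent-1', { provider: 'anthropic', model: 'claude-3' })
const withDefault = buildCreateRunPayload('agent-1')
// 'model' in withOverride → true; 'model' in withDefault → false
```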
@@ -2504,6 +2481,7 @@ function App() {
       return next
     })
     chatDraftsRef.current.delete(tabId)
+    selectedModelByTabRef.current.delete(tabId)
     chatScrollTopByTabRef.current.delete(tabId)
     setToolOpenByTab((prev) => {
       if (!(tabId in prev)) return prev
@@ -2791,6 +2769,27 @@ function App() {
     return () => window.removeEventListener('rowboat:open-copilot-edit-track', handler as EventListener)
   }, [submitFromPalette])

+  // Listener for prompt-block "Run" events
+  // (dispatched by apps/renderer/src/extensions/prompt-block.tsx)
+  useEffect(() => {
+    const handler = (e: Event) => {
+      const ev = e as CustomEvent<{
+        instruction?: string
+        filePath?: string
+        label?: string
+      }>
+      const instruction = ev.detail?.instruction
+      const filePath = ev.detail?.filePath
+      if (!instruction) return
+      const mention = filePath
+        ? { path: filePath, displayName: filePath.split('/').pop() ?? filePath }
+        : null
+      submitFromPalette(instruction, mention)
+    }
+    window.addEventListener('rowboat:open-copilot-prompt', handler as EventListener)
+    return () => window.removeEventListener('rowboat:open-copilot-prompt', handler as EventListener)
+  }, [submitFromPalette])
+
   const toggleKnowledgePane = useCallback(() => {
     setIsRightPaneMaximized(false)
     setIsChatSidebarOpen(prev => !prev)
@@ -3982,7 +3981,14 @@ function App() {
             <ChatMessageAttachments attachments={item.attachments} />
           </MessageContent>
           {item.content && (
-            <MessageContent>{item.content}</MessageContent>
+            <MessageContent>
+              <MessageResponse
+                components={streamdownComponents}
+                remarkPlugins={userMessageRemarkPlugins}
+              >
+                {item.content}
+              </MessageResponse>
+            </MessageContent>
           )}
         </Message>
       )
@@ -4003,7 +4009,12 @@ function App() {
             ))}
           </div>
         )}
-        {message}
+        <MessageResponse
+          components={streamdownComponents}
+          remarkPlugins={userMessageRemarkPlugins}
+        >
+          {message}
+        </MessageResponse>
       </MessageContent>
     </Message>
   )
@@ -4656,6 +4667,13 @@ function App() {
   runId={tabState.runId}
   initialDraft={chatDraftsRef.current.get(tab.id)}
   onDraftChange={(text) => setChatDraftForTab(tab.id, text)}
+  onSelectedModelChange={(m) => {
+    if (m) {
+      selectedModelByTabRef.current.set(tab.id, m)
+    } else {
+      selectedModelByTabRef.current.delete(tab.id)
+    }
+  }}
   isRecording={isActive && isRecording}
   recordingText={isActive ? voice.interimText : undefined}
   recordingState={isActive ? (voice.state === 'connecting' ? 'connecting' : 'listening') : undefined}
@@ -4709,6 +4727,13 @@ function App() {
   onPresetMessageConsumed={() => setPresetMessage(undefined)}
   getInitialDraft={(tabId) => chatDraftsRef.current.get(tabId)}
   onDraftChangeForTab={setChatDraftForTab}
+  onSelectedModelChangeForTab={(tabId, m) => {
+    if (m) {
+      selectedModelByTabRef.current.set(tabId, m)
+    } else {
+      selectedModelByTabRef.current.delete(tabId)
+    }
+  }}
   pendingAskHumanRequests={pendingAskHumanRequests}
   allPermissionRequests={allPermissionRequests}
   permissionResponses={permissionResponses}
@@ -4735,10 +4760,6 @@ function App() {
       )}
       {/* Rendered last so its no-drag region paints over the sidebar drag region */}
       <FixedSidebarToggle
-        onNavigateBack={() => { void navigateBack() }}
-        onNavigateForward={() => { void navigateForward() }}
-        canNavigateBack={canNavigateBack}
-        canNavigateForward={canNavigateForward}
         leftInsetPx={isMac ? MACOS_TRAFFIC_LIGHTS_RESERVED_PX : 0}
       />
     </SidebarProvider>


@@ -69,13 +69,20 @@ const providerDisplayNames: Record<string, string> = {
   rowboat: 'Rowboat',
 }

+type ProviderName = "openai" | "anthropic" | "google" | "openrouter" | "aigateway" | "ollama" | "openai-compatible" | "rowboat"
+
 interface ConfiguredModel {
-  flavor: "openai" | "anthropic" | "google" | "openrouter" | "aigateway" | "ollama" | "openai-compatible" | "rowboat"
+  provider: ProviderName
   model: string
-  apiKey?: string
-  baseURL?: string
-  headers?: Record<string, string>
-  knowledgeGraphModel?: string
+}
+
+export interface SelectedModel {
+  provider: string
+  model: string
+}
+
+function getSelectedModelDisplayName(model: string) {
+  return model.split('/').pop() || model
 }

 function getAttachmentIcon(kind: AttachmentIconKind) {
@@ -120,6 +127,8 @@ interface ChatInputInnerProps {
   ttsMode?: 'summary' | 'full'
   onToggleTts?: () => void
   onTtsModeChange?: (mode: 'summary' | 'full') => void
+  /** Fired when the user picks a different model in the dropdown (only when no run exists yet). */
+  onSelectedModelChange?: (model: SelectedModel | null) => void
 }
function ChatInputInner({ function ChatInputInner({
@ -145,6 +154,7 @@ function ChatInputInner({
ttsMode, ttsMode,
onToggleTts, onToggleTts,
onTtsModeChange, onTtsModeChange,
onSelectedModelChange,
}: ChatInputInnerProps) { }: ChatInputInnerProps) {
const controller = usePromptInputController() const controller = usePromptInputController()
const message = controller.textInput.value const message = controller.textInput.value
@@ -155,10 +165,27 @@ function ChatInputInner({
   const [configuredModels, setConfiguredModels] = useState<ConfiguredModel[]>([])
   const [activeModelKey, setActiveModelKey] = useState('')
+  const [lockedModel, setLockedModel] = useState<SelectedModel | null>(null)
   const [searchEnabled, setSearchEnabled] = useState(false)
   const [searchAvailable, setSearchAvailable] = useState(false)
   const [isRowboatConnected, setIsRowboatConnected] = useState(false)

+  // When a run exists, freeze the dropdown to the run's resolved model+provider.
+  useEffect(() => {
+    if (!runId) {
+      setLockedModel(null)
+      return
+    }
+    let cancelled = false
+    window.ipc.invoke('runs:fetch', { runId }).then((run) => {
+      if (cancelled) return
+      if (run.provider && run.model) {
+        setLockedModel({ provider: run.provider, model: run.model })
+      }
+    }).catch(() => { /* legacy run or fetch failure — leave unlocked */ })
+    return () => { cancelled = true }
+  }, [runId])
+
   // Check Rowboat sign-in state
   useEffect(() => {
     window.ipc.invoke('oauth:getState', null).then((result) => {
@@ -176,42 +203,20 @@ function ChatInputInner({
     return cleanup
   }, [])

-  // Load model config (gateway when signed in, local config when BYOK)
+  // Load the list of models the user can choose from.
+  // Signed-in: gateway model list. Signed-out: providers configured in models.json.
   const loadModelConfig = useCallback(async () => {
     try {
       if (isRowboatConnected) {
-        // Fetch gateway models
         const listResult = await window.ipc.invoke('models:list', null)
         const rowboatProvider = listResult.providers?.find(
           (p: { id: string }) => p.id === 'rowboat'
         )
         const models: ConfiguredModel[] = (rowboatProvider?.models || []).map(
-          (m: { id: string }) => ({ flavor: 'rowboat', model: m.id })
+          (m: { id: string }) => ({ provider: 'rowboat', model: m.id })
         )
-        // Read current default from config
-        let defaultModel = ''
-        try {
-          const result = await window.ipc.invoke('workspace:readFile', { path: 'config/models.json' })
-          const parsed = JSON.parse(result.data)
-          defaultModel = parsed?.model || ''
-        } catch { /* no config yet */ }
-        if (defaultModel) {
-          models.sort((a, b) => {
-            if (a.model === defaultModel) return -1
-            if (b.model === defaultModel) return 1
-            return 0
-          })
-        }
         setConfiguredModels(models)
-        const activeKey = defaultModel
-          ? `rowboat/${defaultModel}`
-          : models[0] ? `rowboat/${models[0].model}` : ''
-        if (activeKey) setActiveModelKey(activeKey)
       } else {
-        // BYOK: read from local models.json
         const result = await window.ipc.invoke('workspace:readFile', { path: 'config/models.json' })
         const parsed = JSON.parse(result.data)
         const models: ConfiguredModel[] = []
@@ -223,32 +228,12 @@ function ChatInputInner({
         const allModels = modelList.length > 0 ? modelList : singleModel ? [singleModel] : []
         for (const model of allModels) {
           if (model) {
-            models.push({
-              flavor: flavor as ConfiguredModel['flavor'],
-              model,
-              apiKey: (e.apiKey as string) || undefined,
-              baseURL: (e.baseURL as string) || undefined,
-              headers: (e.headers as Record<string, string>) || undefined,
-              knowledgeGraphModel: (e.knowledgeGraphModel as string) || undefined,
-            })
+            models.push({ provider: flavor as ProviderName, model })
           }
         }
       }
     }
-    const defaultKey = parsed?.provider?.flavor && parsed?.model
-      ? `${parsed.provider.flavor}/${parsed.model}`
-      : ''
-    models.sort((a, b) => {
-      const aKey = `${a.flavor}/${a.model}`
-      const bKey = `${b.flavor}/${b.model}`
-      if (aKey === defaultKey) return -1
-      if (bKey === defaultKey) return 1
-      return 0
-    })
     setConfiguredModels(models)
-    if (defaultKey) {
-      setActiveModelKey(defaultKey)
-    }
   }
 } catch {
   // No config yet
@@ -284,40 +269,15 @@ function ChatInputInner({
     checkSearch()
   }, [isActive, isRowboatConnected])

-  const handleModelChange = useCallback(async (key: string) => {
-    const entry = configuredModels.find((m) => `${m.flavor}/${m.model}` === key)
+  // Selecting a model affects only the *next* run created from this tab.
+  // Once a run exists, model is frozen on the run and the dropdown is read-only.
+  const handleModelChange = useCallback((key: string) => {
+    if (lockedModel) return
+    const entry = configuredModels.find((m) => `${m.provider}/${m.model}` === key)
     if (!entry) return
     setActiveModelKey(key)
-    try {
-      if (entry.flavor === 'rowboat') {
-        // Gateway model — save with valid Zod flavor, no credentials
-        await window.ipc.invoke('models:saveConfig', {
-          provider: { flavor: 'openrouter' as const },
-          model: entry.model,
-          knowledgeGraphModel: entry.knowledgeGraphModel,
-        })
-      } else {
-        // BYOK — preserve full provider config
-        const providerModels = configuredModels
-          .filter((m) => m.flavor === entry.flavor)
-          .map((m) => m.model)
-        await window.ipc.invoke('models:saveConfig', {
-          provider: {
-            flavor: entry.flavor,
-            apiKey: entry.apiKey,
-            baseURL: entry.baseURL,
-            headers: entry.headers,
-          },
-          model: entry.model,
-          models: providerModels,
-          knowledgeGraphModel: entry.knowledgeGraphModel,
-        })
-      }
-    } catch {
-      toast.error('Failed to switch model')
-    }
-  }, [configuredModels])
+    onSelectedModelChange?.({ provider: entry.provider, model: entry.model })
+  }, [configuredModels, lockedModel, onSelectedModelChange])

   // Restore the tab draft when this input mounts.
   useEffect(() => {
@@ -555,7 +515,14 @@ function ChatInputInner({
     )
   )}
   <div className="flex-1" />
-  {configuredModels.length > 0 && (
+  {lockedModel ? (
+    <span
+      className="flex h-7 shrink-0 items-center gap-1 rounded-full px-2 text-xs text-muted-foreground"
+      title={`${providerDisplayNames[lockedModel.provider] || lockedModel.provider} — fixed for this chat`}
+    >
+      <span className="max-w-[150px] truncate">{getSelectedModelDisplayName(lockedModel.model)}</span>
+    </span>
+  ) : configuredModels.length > 0 ? (
     <DropdownMenu>
       <DropdownMenuTrigger asChild>
         <button
@@ -563,7 +530,7 @@ function ChatInputInner({
           className="flex h-7 shrink-0 items-center gap-1 rounded-full px-2 text-xs text-muted-foreground transition-colors hover:bg-muted hover:text-foreground"
         >
           <span className="max-w-[150px] truncate">
-            {configuredModels.find((m) => `${m.flavor}/${m.model}` === activeModelKey)?.model || configuredModels[0]?.model || 'Model'}
+            {getSelectedModelDisplayName(configuredModels.find((m) => `${m.provider}/${m.model}` === activeModelKey)?.model || configuredModels[0]?.model || 'Model')}
          </span>
          <ChevronDown className="h-3 w-3" />
        </button>
@@ -571,18 +538,18 @@ function ChatInputInner({
       <DropdownMenuContent align="end">
         <DropdownMenuRadioGroup value={activeModelKey} onValueChange={handleModelChange}>
           {configuredModels.map((m) => {
-            const key = `${m.flavor}/${m.model}`
+            const key = `${m.provider}/${m.model}`
             return (
               <DropdownMenuRadioItem key={key} value={key}>
                 <span className="truncate">{m.model}</span>
-                <span className="ml-2 text-xs text-muted-foreground">{providerDisplayNames[m.flavor] || m.flavor}</span>
+                <span className="ml-2 text-xs text-muted-foreground">{providerDisplayNames[m.provider] || m.provider}</span>
               </DropdownMenuRadioItem>
             )
           })}
         </DropdownMenuRadioGroup>
       </DropdownMenuContent>
     </DropdownMenu>
-  )}
+  ) : null}
   {onToggleTts && ttsAvailable && (
     <div className="flex shrink-0 items-center">
       <Tooltip>
@@ -729,6 +696,7 @@ export interface ChatInputWithMentionsProps {
   ttsMode?: 'summary' | 'full'
   onToggleTts?: () => void
   onTtsModeChange?: (mode: 'summary' | 'full') => void
+  onSelectedModelChange?: (model: SelectedModel | null) => void
 }

 export function ChatInputWithMentions({

@@ -757,6 +725,7 @@ export function ChatInputWithMentions({
   ttsMode,
   onToggleTts,
   onTtsModeChange,
+  onSelectedModelChange,
 }: ChatInputWithMentionsProps) {
   return (
     <PromptInputProvider knowledgeFiles={knowledgeFiles} recentFiles={recentFiles} visibleFiles={visibleFiles}>
@@ -783,6 +752,7 @@ export function ChatInputWithMentions({
         ttsMode={ttsMode}
         onToggleTts={onToggleTts}
         onTtsModeChange={onTtsModeChange}
+        onSelectedModelChange={onSelectedModelChange}
       />
     </PromptInputProvider>
   )


@@ -25,8 +25,10 @@ import { Suggestions } from '@/components/ai-elements/suggestions'
 import { type PromptInputMessage, type FileMention } from '@/components/ai-elements/prompt-input'
 import { FileCardProvider } from '@/contexts/file-card-context'
 import { MarkdownPreOverride } from '@/components/ai-elements/markdown-code-override'
+import { defaultRemarkPlugins } from 'streamdown'
+import remarkBreaks from 'remark-breaks'
 import { TabBar, type ChatTab } from '@/components/tab-bar'
-import { ChatInputWithMentions, type StagedAttachment } from '@/components/chat-input-with-mentions'
+import { ChatInputWithMentions, type StagedAttachment, type SelectedModel } from '@/components/chat-input-with-mentions'
 import { ChatMessageAttachments } from '@/components/chat-message-attachments'
 import { wikiLabel } from '@/lib/wiki-links'
 import {
@@ -49,6 +51,11 @@ import {

 const streamdownComponents = { pre: MarkdownPreOverride }

+// Render user messages with markdown so bullets, bold, links, etc. survive the
+// round-trip from the input textarea. `remarkBreaks` turns single newlines
+// into <br> so typed line breaks are preserved without requiring blank lines.
+const userMessageRemarkPlugins = [...Object.values(defaultRemarkPlugins), remarkBreaks]
+
 /* ─── Billing error helpers ─── */

 const BILLING_ERROR_PATTERNS = [
@@ -158,6 +165,7 @@ interface ChatSidebarProps {
   onPresetMessageConsumed?: () => void
   getInitialDraft?: (tabId: string) => string | undefined
   onDraftChangeForTab?: (tabId: string, text: string) => void
+  onSelectedModelChangeForTab?: (tabId: string, model: SelectedModel | null) => void
   pendingAskHumanRequests?: ChatTabViewState['pendingAskHumanRequests']
   allPermissionRequests?: ChatTabViewState['allPermissionRequests']
   permissionResponses?: ChatTabViewState['permissionResponses']
@@ -211,6 +219,7 @@ export function ChatSidebar({
   onPresetMessageConsumed,
   getInitialDraft,
   onDraftChangeForTab,
+  onSelectedModelChangeForTab,
   pendingAskHumanRequests = new Map(),
   allPermissionRequests = new Map(),
   permissionResponses = new Map(),
@@ -351,7 +360,14 @@ export function ChatSidebar({
             <ChatMessageAttachments attachments={item.attachments} />
           </MessageContent>
           {item.content && (
-            <MessageContent>{item.content}</MessageContent>
+            <MessageContent>
+              <MessageResponse
+                components={streamdownComponents}
+                remarkPlugins={userMessageRemarkPlugins}
+              >
+                {item.content}
+              </MessageResponse>
+            </MessageContent>
           )}
         </Message>
       )
@@ -372,7 +388,12 @@ export function ChatSidebar({
             ))}
           </div>
         )}
-        {message}
+        <MessageResponse
+          components={streamdownComponents}
+          remarkPlugins={userMessageRemarkPlugins}
+        >
+          {message}
+        </MessageResponse>
       </MessageContent>
     </Message>
   )
@@ -662,6 +683,7 @@ export function ChatSidebar({
   runId={tabState.runId}
   initialDraft={getInitialDraft?.(tab.id)}
   onDraftChange={onDraftChangeForTab ? (text) => onDraftChangeForTab(tab.id, text) : undefined}
+  onSelectedModelChange={onSelectedModelChangeForTab ? (m) => onSelectedModelChangeForTab(tab.id, m) : undefined}
   isRecording={isActive && isRecording}
   recordingText={isActive ? recordingText : undefined}
   recordingState={isActive ? recordingState : undefined}


@@ -7,9 +7,12 @@ import Image from '@tiptap/extension-image'
 import Placeholder from '@tiptap/extension-placeholder'
 import TaskList from '@tiptap/extension-task-list'
 import TaskItem from '@tiptap/extension-task-item'
+import { TableKit, renderTableToMarkdown } from '@tiptap/extension-table'
+import type { JSONContent, MarkdownRendererHelpers } from '@tiptap/react'
 import { ImageUploadPlaceholderExtension, createImageUploadHandler } from '@/extensions/image-upload'
 import { TaskBlockExtension } from '@/extensions/task-block'
 import { TrackBlockExtension } from '@/extensions/track-block'
+import { PromptBlockExtension } from '@/extensions/prompt-block'
 import { TrackTargetOpenExtension, TrackTargetCloseExtension } from '@/extensions/track-target'
 import { ImageBlockExtension } from '@/extensions/image-block'
 import { EmbedBlockExtension } from '@/extensions/embed-block'
@ -55,17 +58,22 @@ function preprocessMarkdown(markdown: string): string {
// line until a blank line terminates it, and markdown inline rules (bold, // line until a blank line terminates it, and markdown inline rules (bold,
// italics, links) don't apply inside the block. Without surrounding blank // italics, links) don't apply inside the block. Without surrounding blank
// lines, the line right after our placeholder div gets absorbed as HTML and // lines, the line right after our placeholder div gets absorbed as HTML and
// its markdown is not parsed. We consume any adjacent newlines in the match // its markdown is not parsed.
// and emit exactly `\n\n<div></div>\n\n` so the HTML block starts and ends on //
// its own line. // Consume ALL adjacent newlines (\n*, not \n?) so the emitted `\n\n…\n\n`
// is load/save stable. serializeBlocksToMarkdown emits `\n\n` between blocks
// on save; a `\n?` regex on reload would only consume one of those two
// newlines, so every cycle would add a net newline on each side of every
// marker — causing tracks running on an open note to steadily inflate the
// file with blank lines around target regions.
function preprocessTrackTargets(md: string): string {
return md
.replace(
/\n*<!--track-target:([^\s>]+)-->\n*/g,
(_m, id: string) => `\n\n<div data-type="track-target-open" data-track-id="${id}"></div>\n\n`,
)
.replace(
/\n*<!--\/track-target:([^\s>]+)-->\n*/g,
(_m, id: string) => `\n\n<div data-type="track-target-close" data-track-id="${id}"></div>\n\n`,
)
}
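The `\n*` quantifier is what makes the marker rewrite stable across load/save cycles. A standalone sketch (plain regex, with a hypothetical comment marker standing in for the real placeholder divs) showing that re-running the preprocessor adds no extra newlines:

```typescript
// Hypothetical standalone sketch: like preprocessTrackTargets above, but the
// replacement is a plain comment marker so the round trip is easy to inspect.
function preprocess(md: string): string {
  return md.replace(
    /\n*<!--track-target:([^\s>]+)-->\n*/g,
    // Emit exactly one blank line on each side, whatever was there before.
    (_m, id: string) => `\n\n<!--track-target:${id}-->\n\n`,
  )
}

const once = preprocess('before\n<!--track-target:abc-->\nafter')
const twice = preprocess(once)
console.log(once === twice) // stable; a `\n?` version would gain a newline per cycle
```

With `\n?` the regex would consume only one of the two emitted newlines on each reload, so every save/load cycle would inflate the file by one blank line per side of each marker.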
@@ -149,6 +157,17 @@ function serializeList(listNode: JsonNode, indent: number): string[] {
return lines
}
// Adapter for tiptap's first-party renderTableToMarkdown. Only renderChildren is
// actually invoked — the other helpers are stubs to satisfy the type.
const tableRenderHelpers: MarkdownRendererHelpers = {
renderChildren: (nodes) => {
const arr = Array.isArray(nodes) ? nodes : [nodes]
return arr.map(n => n.type === 'paragraph' ? nodeToText(n as JsonNode) : '').join('')
},
wrapInBlock: (prefix, content) => prefix + content,
indent: (content) => content,
}
// Serialize a single top-level block to its markdown string. Empty paragraphs (or blank-marker
// paragraphs) return '' to signal "blank line slot" for the join logic in serializeBlocksToMarkdown.
function blockToMarkdown(node: JsonNode): string {
@@ -168,6 +187,8 @@ function blockToMarkdown(node: JsonNode): string {
return serializeList(node, 0).join('\n')
case 'taskBlock':
return '```task\n' + (node.attrs?.data as string || '{}') + '\n```'
case 'promptBlock':
return '```prompt\n' + (node.attrs?.data as string || '') + '\n```'
case 'trackBlock':
return '```track\n' + (node.attrs?.data as string || '') + '\n```'
case 'trackTargetOpen':
@@ -192,6 +213,8 @@ function blockToMarkdown(node: JsonNode): string {
return '```transcript\n' + (node.attrs?.data as string || '{}') + '\n```'
case 'mermaidBlock':
return '```mermaid\n' + (node.attrs?.data as string || '') + '\n```'
case 'table':
return renderTableToMarkdown(node as JSONContent, tableRenderHelpers).trim()
case 'codeBlock': {
const lang = (node.attrs?.language as string) || ''
return '```' + lang + '\n' + nodeToText(node) + '\n```'
@@ -675,6 +698,7 @@ export const MarkdownEditor = forwardRef<MarkdownEditorHandle, MarkdownEditorPro
ImageUploadPlaceholderExtension,
TaskBlockExtension,
TrackBlockExtension.configure({ notePath }),
PromptBlockExtension.configure({ notePath }),
TrackTargetOpenExtension,
TrackTargetCloseExtension,
ImageBlockExtension,
@@ -697,6 +721,9 @@ export const MarkdownEditor = forwardRef<MarkdownEditorHandle, MarkdownEditorPro
TaskItem.configure({
nested: true,
}),
TableKit.configure({
table: { resizable: false },
}),
Placeholder.configure({
placeholder,
}),


@@ -59,14 +59,14 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
const [modelsCatalog, setModelsCatalog] = useState<Record<string, LlmModelOption[]>>({})
const [modelsLoading, setModelsLoading] = useState(false)
const [modelsError, setModelsError] = useState<string | null>(null)
const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>>({
openai: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
anthropic: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
google: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
openrouter: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
aigateway: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
ollama: { apiKey: "", baseURL: "http://localhost:11434", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
"openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
})
const [testState, setTestState] = useState<{ status: "idle" | "testing" | "success" | "error"; error?: string }>({
status: "idle",
@@ -109,7 +109,7 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
const [googleCalendarConnecting, setGoogleCalendarConnecting] = useState(false)
const updateProviderConfig = useCallback(
(provider: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>) => {
setProviderConfigs(prev => ({
...prev,
[provider]: { ...prev[provider], ...updates },
@@ -458,6 +458,8 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
const baseURL = activeConfig.baseURL.trim() || undefined
const model = activeConfig.model.trim()
const knowledgeGraphModel = activeConfig.knowledgeGraphModel.trim() || undefined
const meetingNotesModel = activeConfig.meetingNotesModel.trim() || undefined
const trackBlockModel = activeConfig.trackBlockModel.trim() || undefined
const providerConfig = {
provider: {
flavor: llmProvider,
@@ -466,6 +468,8 @@
},
model,
knowledgeGraphModel,
meetingNotesModel,
trackBlockModel,
}
const result = await window.ipc.invoke("models:test", providerConfig)
if (result.success) {
@@ -1157,6 +1161,72 @@ export function OnboardingModal({ open, onComplete }: OnboardingModalProps) {
</Select>
)}
</div>
<div className="space-y-2">
<span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Meeting notes model</span>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.meetingNotesModel}
onChange={(e) => updateProviderConfig(llmProvider, { meetingNotesModel: e.target.value })}
placeholder={activeConfig.model || "Enter model"}
/>
) : (
<Select
value={activeConfig.meetingNotesModel || "__same__"}
onValueChange={(value) => updateProviderConfig(llmProvider, { meetingNotesModel: value === "__same__" ? "" : value })}
>
<SelectTrigger>
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((model) => (
<SelectItem key={model.id} value={model.id}>
{model.name || model.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
<div className="space-y-2">
<span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Track block model</span>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.trackBlockModel}
onChange={(e) => updateProviderConfig(llmProvider, { trackBlockModel: e.target.value })}
placeholder={activeConfig.model || "Enter model"}
/>
) : (
<Select
value={activeConfig.trackBlockModel || "__same__"}
onValueChange={(value) => updateProviderConfig(llmProvider, { trackBlockModel: value === "__same__" ? "" : value })}
>
<SelectTrigger>
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((model) => (
<SelectItem key={model.id} value={model.id}>
{model.name || model.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
</div>
{showApiKey && (


@@ -221,6 +221,76 @@ export function LlmSetupStep({ state }: LlmSetupStepProps) {
</Select>
)}
</div>
<div className="space-y-2 min-w-0">
<label className="text-xs font-medium text-muted-foreground">
Meeting Notes Model
</label>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.meetingNotesModel}
onChange={(e) => updateProviderConfig(llmProvider, { meetingNotesModel: e.target.value })}
placeholder={activeConfig.model || "Enter model"}
/>
) : (
<Select
value={activeConfig.meetingNotesModel || "__same__"}
onValueChange={(value) => updateProviderConfig(llmProvider, { meetingNotesModel: value === "__same__" ? "" : value })}
>
<SelectTrigger className="w-full truncate">
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((model) => (
<SelectItem key={model.id} value={model.id}>
{model.name || model.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
<div className="space-y-2 min-w-0">
<label className="text-xs font-medium text-muted-foreground">
Track Block Model
</label>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.trackBlockModel}
onChange={(e) => updateProviderConfig(llmProvider, { trackBlockModel: e.target.value })}
placeholder={activeConfig.model || "Enter model"}
/>
) : (
<Select
value={activeConfig.trackBlockModel || "__same__"}
onValueChange={(value) => updateProviderConfig(llmProvider, { trackBlockModel: value === "__same__" ? "" : value })}
>
<SelectTrigger className="w-full truncate">
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((model) => (
<SelectItem key={model.id} value={model.id}>
{model.name || model.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
</div>
{showApiKey && (


@@ -29,14 +29,14 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
const [modelsCatalog, setModelsCatalog] = useState<Record<string, LlmModelOption[]>>({})
const [modelsLoading, setModelsLoading] = useState(false)
const [modelsError, setModelsError] = useState<string | null>(null)
const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>>({
openai: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
anthropic: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
google: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
openrouter: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
aigateway: { apiKey: "", baseURL: "", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
ollama: { apiKey: "", baseURL: "http://localhost:11434", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
"openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", model: "", knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
})
const [testState, setTestState] = useState<{ status: "idle" | "testing" | "success" | "error"; error?: string }>({
status: "idle",
@@ -81,7 +81,7 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
const [googleCalendarConnecting, setGoogleCalendarConnecting] = useState(false)
const updateProviderConfig = useCallback(
(provider: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; model: string; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>) => {
setProviderConfigs(prev => ({
...prev,
[provider]: { ...prev[provider], ...updates },
@@ -435,6 +435,8 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
const baseURL = activeConfig.baseURL.trim() || undefined
const model = activeConfig.model.trim()
const knowledgeGraphModel = activeConfig.knowledgeGraphModel.trim() || undefined
const meetingNotesModel = activeConfig.meetingNotesModel.trim() || undefined
const trackBlockModel = activeConfig.trackBlockModel.trim() || undefined
const providerConfig = {
provider: {
flavor: llmProvider,
@@ -443,6 +445,8 @@
},
model,
knowledgeGraphModel,
meetingNotesModel,
trackBlockModel,
}
const result = await window.ipc.invoke("models:test", providerConfig)
if (result.success) {
@@ -459,7 +463,7 @@ export function useOnboardingState(open: boolean, onComplete: () => void) {
setTestState({ status: "error", error: "Connection test failed" })
toast.error("Connection test failed")
}
}, [activeConfig.apiKey, activeConfig.baseURL, activeConfig.model, activeConfig.knowledgeGraphModel, activeConfig.meetingNotesModel, activeConfig.trackBlockModel, canTest, llmProvider, handleNext])
// Check connection status for all providers
const refreshAllStatuses = useCallback(async () => {


@@ -196,14 +196,14 @@ const defaultBaseURLs: Partial<Record<LlmProviderFlavor, string>> = {
function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
const [provider, setProvider] = useState<LlmProviderFlavor>("openai")
const [defaultProvider, setDefaultProvider] = useState<LlmProviderFlavor | null>(null)
const [providerConfigs, setProviderConfigs] = useState<Record<LlmProviderFlavor, { apiKey: string; baseURL: string; models: string[]; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>>({
openai: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
anthropic: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
google: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
openrouter: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
aigateway: { apiKey: "", baseURL: "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
ollama: { apiKey: "", baseURL: "http://localhost:11434", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
"openai-compatible": { apiKey: "", baseURL: "http://localhost:1234/v1", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
})
const [modelsCatalog, setModelsCatalog] = useState<Record<string, LlmModelOption[]>>({})
const [modelsLoading, setModelsLoading] = useState(false)
@@ -229,7 +229,7 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
(!requiresBaseURL || activeConfig.baseURL.trim().length > 0)
const updateConfig = useCallback(
(prov: LlmProviderFlavor, updates: Partial<{ apiKey: string; baseURL: string; models: string[]; knowledgeGraphModel: string; meetingNotesModel: string; trackBlockModel: string }>) => {
setProviderConfigs(prev => ({
...prev,
[prov]: { ...prev[prov], ...updates },
@@ -302,6 +302,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
baseURL: e.baseURL || (defaultBaseURLs[key as LlmProviderFlavor] || ""),
models: savedModels,
knowledgeGraphModel: e.knowledgeGraphModel || "",
meetingNotesModel: e.meetingNotesModel || "",
trackBlockModel: e.trackBlockModel || "",
};
}
}
@@ -318,6 +320,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
baseURL: parsed.provider.baseURL || (defaultBaseURLs[flavor] || ""),
models: activeModels.length > 0 ? activeModels : [""],
knowledgeGraphModel: parsed.knowledgeGraphModel || "",
meetingNotesModel: parsed.meetingNotesModel || "",
trackBlockModel: parsed.trackBlockModel || "",
};
}
return next;
@@ -391,6 +395,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
model: allModels[0] || "",
models: allModels,
knowledgeGraphModel: activeConfig.knowledgeGraphModel.trim() || undefined,
meetingNotesModel: activeConfig.meetingNotesModel.trim() || undefined,
trackBlockModel: activeConfig.trackBlockModel.trim() || undefined,
}
const result = await window.ipc.invoke("models:test", providerConfig)
if (result.success) {
@@ -423,6 +429,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
model: allModels[0],
models: allModels,
knowledgeGraphModel: config.knowledgeGraphModel.trim() || undefined,
meetingNotesModel: config.meetingNotesModel.trim() || undefined,
trackBlockModel: config.trackBlockModel.trim() || undefined,
})
setDefaultProvider(prov)
window.dispatchEvent(new Event('models-config-changed'))
@@ -452,6 +460,8 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
parsed.model = defModels[0] || ""
parsed.models = defModels
parsed.knowledgeGraphModel = defConfig.knowledgeGraphModel.trim() || undefined
parsed.meetingNotesModel = defConfig.meetingNotesModel.trim() || undefined
parsed.trackBlockModel = defConfig.trackBlockModel.trim() || undefined
}
await window.ipc.invoke("workspace:writeFile", {
path: "config/models.json",
@@ -459,7 +469,7 @@
})
setProviderConfigs(prev => ({
...prev,
[prov]: { apiKey: "", baseURL: defaultBaseURLs[prov] || "", models: [""], knowledgeGraphModel: "", meetingNotesModel: "", trackBlockModel: "" },
}))
setTestState({ status: "idle" })
window.dispatchEvent(new Event('models-config-changed'))
@@ -649,6 +659,74 @@ function ModelSettings({ dialogOpen }: { dialogOpen: boolean }) {
</Select>
)}
</div>
{/* Meeting notes model */}
<div className="space-y-2">
<span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Meeting notes model</span>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.meetingNotesModel}
onChange={(e) => updateConfig(provider, { meetingNotesModel: e.target.value })}
placeholder={primaryModel || "Enter model"}
/>
) : (
<Select
value={activeConfig.meetingNotesModel || "__same__"}
onValueChange={(value) => updateConfig(provider, { meetingNotesModel: value === "__same__" ? "" : value })}
>
<SelectTrigger>
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((m) => (
<SelectItem key={m.id} value={m.id}>
{m.name || m.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
{/* Track block model */}
<div className="space-y-2">
<span className="text-xs font-medium text-muted-foreground uppercase tracking-wider">Track block model</span>
{modelsLoading ? (
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Loader2 className="size-4 animate-spin" />
Loading...
</div>
) : showModelInput ? (
<Input
value={activeConfig.trackBlockModel}
onChange={(e) => updateConfig(provider, { trackBlockModel: e.target.value })}
placeholder={primaryModel || "Enter model"}
/>
) : (
<Select
value={activeConfig.trackBlockModel || "__same__"}
onValueChange={(value) => updateConfig(provider, { trackBlockModel: value === "__same__" ? "" : value })}
>
<SelectTrigger>
<SelectValue placeholder="Select a model" />
</SelectTrigger>
<SelectContent>
<SelectItem value="__same__">Same as assistant</SelectItem>
{modelsForProvider.map((m) => (
<SelectItem key={m.id} value={m.id}>
{m.name || m.id}
</SelectItem>
))}
</SelectContent>
</Select>
)}
</div>
</div>
{/* API Key */}


@@ -156,6 +156,8 @@ export function TrackModal() {
const lastRunAt = track?.lastRunAt ?? ''
const lastRunId = track?.lastRunId ?? ''
const lastRunSummary = track?.lastRunSummary ?? ''
const model = track?.model ?? ''
const provider = track?.provider ?? ''
const scheduleSummary = useMemo(() => summarizeSchedule(schedule), [schedule])
const triggerType: 'scheduled' | 'event' | 'manual' =
schedule ? 'scheduled' : eventMatchCriteria ? 'event' : 'manual'
@@ -393,6 +395,12 @@ export function TrackModal() {
<dt>Track ID</dt><dd><code>{trackId}</code></dd>
<dt>File</dt><dd><code>{detail.filePath}</code></dd>
<dt>Status</dt><dd>{active ? 'Active' : 'Paused'}</dd>
{model && (<>
<dt>Model</dt><dd><code>{model}</code></dd>
</>)}
{provider && (<>
<dt>Provider</dt><dd><code>{provider}</code></dd>
</>)}
{lastRunAt && (<>
<dt>Last run</dt><dd>{formatDateTime(lastRunAt)}</dd>
</>)}


@@ -0,0 +1,145 @@
import { z } from 'zod'
import { useMemo } from 'react'
import { mergeAttributes, Node } from '@tiptap/react'
import { ReactNodeViewRenderer, NodeViewWrapper } from '@tiptap/react'
import { Sparkles } from 'lucide-react'
import { parse as parseYaml } from 'yaml'
import { PromptBlockSchema } from '@x/shared/dist/prompt-block.js'
import { Button } from '@/components/ui/button'
function truncate(text: string, maxLen: number): string {
const clean = text.replace(/\s+/g, ' ').trim()
if (clean.length <= maxLen) return clean
return clean.slice(0, maxLen).trimEnd() + '…'
}
function PromptBlockView({ node, extension }: {
node: { attrs: Record<string, unknown> }
extension: { options: { notePath?: string } }
}) {
const raw = node.attrs.data as string
const prompt = useMemo<z.infer<typeof PromptBlockSchema> | null>(() => {
try {
return PromptBlockSchema.parse(parseYaml(raw))
} catch { return null }
}, [raw])
const notePath = extension.options.notePath
const handleRun = (e: React.MouseEvent) => {
e.stopPropagation()
if (!prompt) return
window.dispatchEvent(new CustomEvent('rowboat:open-copilot-prompt', {
detail: {
instruction: prompt.instruction,
label: prompt.label,
filePath: notePath,
},
}))
}
const handleKey = (e: React.KeyboardEvent) => {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault()
handleRun(e as unknown as React.MouseEvent)
}
}
if (!prompt) {
return (
<NodeViewWrapper data-type="prompt-block">
<div className="my-2 rounded-xl border border-destructive/40 bg-destructive/5 p-3 text-sm text-destructive">
Invalid prompt block: expected YAML with <code>label</code> and <code>instruction</code>.
</div>
</NodeViewWrapper>
)
}
return (
<NodeViewWrapper data-type="prompt-block">
<div
role="button"
tabIndex={0}
onClick={handleRun}
onKeyDown={handleKey}
onMouseDown={(e) => e.stopPropagation()}
title={prompt.instruction}
className="flex items-center gap-3 rounded-xl border border-border bg-card p-3 pr-4 text-left transition-colors hover:bg-accent/50 cursor-pointer w-full my-2"
>
<div className="flex h-10 w-10 shrink-0 items-center justify-center rounded-lg bg-muted">
<Sparkles className="h-5 w-5 text-muted-foreground" />
</div>
<div className="flex-1 min-w-0">
<div className="truncate text-sm font-medium">{prompt.label}</div>
<div className="truncate text-xs text-muted-foreground">{truncate(prompt.instruction, 80)}</div>
</div>
<Button variant="outline" size="sm" className="shrink-0 text-xs h-8 rounded-lg pointer-events-none">
Run
</Button>
</div>
</NodeViewWrapper>
)
}
export const PromptBlockExtension = Node.create({
name: 'promptBlock',
group: 'block',
atom: true,
selectable: true,
draggable: false,
addOptions() {
return {
notePath: undefined as string | undefined,
}
},
addAttributes() {
return {
data: {
default: '',
},
}
},
parseHTML() {
return [
{
tag: 'pre',
priority: 60,
getAttrs(element) {
const code = element.querySelector('code')
if (!code) return false
const cls = code.className || ''
if (cls.includes('language-prompt')) {
return { data: code.textContent || '' }
}
return false
},
},
]
},
renderHTML({ HTMLAttributes }: { HTMLAttributes: Record<string, unknown> }) {
return ['div', mergeAttributes(HTMLAttributes, { 'data-type': 'prompt-block' })]
},
addNodeView() {
return ReactNodeViewRenderer(PromptBlockView)
},
addStorage() {
return {
markdown: {
serialize(state: { write: (text: string) => void; closeBlock: (node: unknown) => void }, node: { attrs: { data: string } }) {
state.write('```prompt\n' + node.attrs.data + '\n```')
state.closeBlock(node)
},
parse: {
// handled by parseHTML
},
},
}
},
})
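The `truncate` helper above is pure, so its two behaviors (whitespace collapsing and the ellipsis suffix) are easy to sanity-check standalone:

```typescript
function truncate(text: string, maxLen: number): string {
  // Collapse runs of whitespace, then trim the ends.
  const clean = text.replace(/\s+/g, ' ').trim()
  if (clean.length <= maxLen) return clean
  // Cut at maxLen, drop any trailing space, and append an ellipsis.
  return clean.slice(0, maxLen).trimEnd() + '…'
}

console.log(truncate('hello   world', 20)) // "hello world" (whitespace collapsed, under the limit)
console.log(truncate('abcdef', 3))         // "abc…"
console.log(truncate('abc def', 4))        // "abc…" (trailing space trimmed before the ellipsis)
```

Note the result can exceed `maxLen` by one character (the ellipsis), which is fine for the card subtitle it feeds.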


@@ -36,11 +36,12 @@ function TrackBlockView({ node, deleteNode, extension }: {
   extension: { options: { notePath?: string } }
 }) {
   const raw = node.attrs.data as string
+  const cleaned = raw.replace(/[\u200B-\u200D\uFEFF]/g, "");
   const track = useMemo<z.infer<typeof TrackBlockSchema> | null>(() => {
     try {
-      return TrackBlockSchema.parse(parseYaml(raw))
-    } catch { return null }
+      return TrackBlockSchema.parse(parseYaml(cleaned))
+    } catch(error) { console.error('error', error); return null }
   }, [raw]) as z.infer<typeof TrackBlockSchema> | null;
   const trackId = track?.trackId ?? ''


@@ -58,15 +58,29 @@ export function useAnalyticsIdentity() {
   // Listen for OAuth connect/disconnect events to update identity
   useEffect(() => {
     const cleanup = window.ipc.on('oauth:didConnect', (event) => {
-      if (!event.success) return
-
-      // If Rowboat provider connected, identify user
-      if (event.provider === 'rowboat' && event.userId) {
-        posthog.identify(event.userId)
-        posthog.people.set({ signed_in: true })
-      }
-      posthog.people.set({ [`${event.provider}_connected`]: true })
+      if (event.provider !== 'rowboat') {
+        // Other providers: just toggle the connection flag
+        if (event.success) {
+          posthog.people.set({ [`${event.provider}_connected`]: true })
+        }
+        return
+      }
+      // Rowboat sign-in
+      if (event.success) {
+        if (event.userId) {
+          posthog.identify(event.userId)
+        }
+        posthog.people.set({ signed_in: true, rowboat_connected: true })
+        posthog.capture('user_signed_in')
+        return
+      }
+      // Rowboat sign-out — flip flags, capture, and reset distinct_id so
+      // future events on this device don't get attributed to the prior user.
+      posthog.people.set({ signed_in: false, rowboat_connected: false })
+      posthog.capture('user_signed_out')
+      posthog.reset()
     })
     return cleanup


@@ -2,20 +2,45 @@ import { StrictMode } from 'react'
 import { createRoot } from 'react-dom/client'
 import './index.css'
 import App from './App.tsx'
+import posthog from 'posthog-js'
 import { PostHogProvider } from 'posthog-js/react'
 import { ThemeProvider } from '@/contexts/theme-context'

-const options = {
-  api_host: import.meta.env.VITE_PUBLIC_POSTHOG_HOST,
-  defaults: '2025-11-30',
-} as const
+// Fetch the stable installation ID from main so renderer + main share one
+// PostHog distinct_id. Falls back to PostHog's auto-generated anonymous ID
+// if the IPC call fails (rare — main is always up before renderer).
+async function bootstrap() {
+  let installationId: string | undefined
+  let apiUrl: string | undefined
+  try {
+    const result = await window.ipc.invoke('analytics:bootstrap', null)
+    installationId = result.installationId
+    apiUrl = result.apiUrl
+  } catch (err) {
+    console.error('[Analytics] Failed to bootstrap from main:', err)
+  }

-createRoot(document.getElementById('root')!).render(
-  <StrictMode>
-    <PostHogProvider apiKey={import.meta.env.VITE_PUBLIC_POSTHOG_KEY} options={options}>
-      <ThemeProvider defaultTheme="system">
-        <App />
-      </ThemeProvider>
-    </PostHogProvider>
-  </StrictMode>,
-)
+  const options = {
+    api_host: import.meta.env.VITE_PUBLIC_POSTHOG_HOST,
+    defaults: '2025-11-30',
+    ...(installationId ? { bootstrap: { distinctID: installationId } } : {}),
+  } as const
+
+  createRoot(document.getElementById('root')!).render(
+    <StrictMode>
+      <PostHogProvider apiKey={import.meta.env.VITE_PUBLIC_POSTHOG_KEY} options={options}>
+        <ThemeProvider defaultTheme="system">
+          <App />
+        </ThemeProvider>
+      </PostHogProvider>
+    </StrictMode>,
+  )
+
+  // Tag the active person record with api_url so anonymous users are also
+  // segmentable by environment.
+  if (apiUrl) {
+    posthog.people.set({ api_url: apiUrl })
+  }
+}
+
+bootstrap()


@@ -146,6 +146,48 @@
   color: #eb5757;
 }

+/* Native GFM tables (distinct from the custom tableBlock above) */
+.tiptap-editor .ProseMirror .tableWrapper {
+  overflow-x: auto;
+  margin: 8px 0;
+}
+
+.tiptap-editor .ProseMirror table {
+  width: 100%;
+  border-collapse: collapse;
+  table-layout: fixed;
+  font-size: 13px;
+  margin: 8px 0;
+}
+
+.tiptap-editor .ProseMirror table th,
+.tiptap-editor .ProseMirror table td {
+  border: 1px solid var(--border);
+  padding: 6px 10px;
+  vertical-align: top;
+  box-sizing: border-box;
+  position: relative;
+  min-width: 60px;
+}
+
+.tiptap-editor .ProseMirror table th {
+  background: color-mix(in srgb, var(--foreground) 4%, transparent);
+  font-weight: 600;
+  text-align: left;
+}
+
+.tiptap-editor .ProseMirror table p {
+  margin: 0;
+}
+
+.tiptap-editor .ProseMirror table .selectedCell::after {
+  content: '';
+  position: absolute;
+  inset: 0;
+  background: color-mix(in srgb, var(--foreground) 8%, transparent);
+  pointer-events: none;
+}
+
 /* Divider */
 .tiptap-editor .ProseMirror hr {
   border: none;


@@ -37,6 +37,7 @@
     "openid-client": "^6.8.1",
     "papaparse": "^5.5.3",
     "pdf-parse": "^2.4.5",
+    "posthog-node": "^4.18.0",
     "react": "^19.2.3",
     "xlsx": "^0.18.5",
     "yaml": "^2.8.2",


@@ -8,6 +8,7 @@ import { IMonotonicallyIncreasingIdGenerator } from "../application/lib/id-gen.j
 import { AgentScheduleConfig, AgentScheduleEntry } from "@x/shared/dist/agent-schedule.js";
 import { AgentScheduleState, AgentScheduleStateEntry } from "@x/shared/dist/agent-schedule-state.js";
 import { MessageEvent } from "@x/shared/dist/runs.js";
+import { createRun } from "../runs/runs.js";
 import z from "zod";

 const DEFAULT_STARTING_MESSAGE = "go";

@@ -162,8 +163,12 @@
     });

     try {
-      // Create a new run
-      const run = await runsRepo.create({ agentId: agentName });
+      // Create a new run via core (resolves agent + default model+provider).
+      const run = await createRun({
+        agentId: agentName,
+        useCase: 'copilot_chat',
+        subUseCase: 'scheduled',
+      });
       console.log(`[AgentRunner] Created run ${run.id} for agent ${agentName}`);

       // Add the starting message as a user message

@@ -16,8 +16,7 @@ import { isBlocked, extractCommandNames } from "../application/lib/command-execu
 import container from "../di/container.js";
 import { IModelConfigRepo } from "../models/repo.js";
 import { createProvider } from "../models/models.js";
-import { isSignedIn } from "../account/account.js";
-import { getGatewayProvider } from "../models/gateway.js";
+import { resolveProviderConfig } from "../models/defaults.js";
 import { IAgentsRepo } from "./repo.js";
 import { IMonotonicallyIncreasingIdGenerator } from "../application/lib/id-gen.js";
 import { IBus } from "../application/lib/bus.js";

@@ -27,6 +26,8 @@ import { IRunsLock } from "../runs/lock.js";
 import { IAbortRegistry } from "../runs/abort-registry.js";
 import { PrefixLogger } from "@x/shared";
 import { parse } from "yaml";
+import { captureLlmUsage } from "../analytics/usage.js";
+import { enterUseCase, type UseCase } from "../analytics/use_case.js";
 import { getRaw as getNoteCreationRaw } from "../knowledge/note_creation.js";
 import { getRaw as getLabelingAgentRaw } from "../knowledge/labeling_agent.js";
 import { getRaw as getNoteTaggingAgentRaw } from "../knowledge/note_tagging_agent.js";
@@ -194,6 +195,19 @@ export class AgentRuntime implements IAgentRuntime {
         await this.runsRepo.appendEvents(runId, [stoppedEvent]);
         await this.bus.publish(stoppedEvent);
       }
+    } catch (error) {
+      console.error(`Run ${runId} failed:`, error);
+      const message = error instanceof Error
+        ? (error.stack || error.message || error.name)
+        : typeof error === "string" ? error : JSON.stringify(error);
+      const errorEvent: z.infer<typeof RunEvent> = {
+        runId,
+        type: "error",
+        error: message,
+        subflow: [],
+      };
+      await this.runsRepo.appendEvents(runId, [errorEvent]);
+      await this.bus.publish(errorEvent);
     } finally {
       this.abortRegistry.cleanup(runId);
       await this.runsLock.release(runId);
@@ -636,6 +650,10 @@ export class AgentState {
   runId: string | null = null;
   agent: z.infer<typeof Agent> | null = null;
   agentName: string | null = null;
+  runModel: string | null = null;
+  runProvider: string | null = null;
+  runUseCase: UseCase | null = null;
+  runSubUseCase: string | null = null;
   messages: z.infer<typeof MessageList> = [];
   lastAssistantMsg: z.infer<typeof AssistantMessage> | null = null;
   subflowStates: Record<string, AgentState> = {};
@@ -749,13 +767,22 @@
       case "start":
         this.runId = event.runId;
         this.agentName = event.agentName;
+        this.runModel = event.model;
+        this.runProvider = event.provider;
+        this.runUseCase = event.useCase ?? null;
+        this.runSubUseCase = event.subUseCase ?? null;
         break;
       case "spawn-subflow":
         // Seed the subflow state with its agent so downstream loadAgent works.
+        // Subflows inherit the parent run's model+provider — there's one pair per run.
         if (!this.subflowStates[event.toolCallId]) {
           this.subflowStates[event.toolCallId] = new AgentState();
         }
         this.subflowStates[event.toolCallId].agentName = event.agentName;
+        this.subflowStates[event.toolCallId].runModel = this.runModel;
+        this.subflowStates[event.toolCallId].runProvider = this.runProvider;
+        this.subflowStates[event.toolCallId].runUseCase = this.runUseCase;
+        this.subflowStates[event.toolCallId].runSubUseCase = this.runSubUseCase;
         break;
       case "message":
         this.messages.push(event.message);
@@ -844,35 +871,31 @@ export async function* streamAgent({
     yield event;
   }

-  const modelConfig = await modelConfigRepo.getConfig();
-  if (!modelConfig) {
-    throw new Error("Model config not found");
-  }
-
   // set up agent
   const agent = await loadAgent(state.agentName!);

   // set up tools
   const tools = await buildTools(agent);

-  // set up provider + model
-  const signedIn = await isSignedIn();
-  const provider = signedIn
-    ? await getGatewayProvider()
-    : createProvider(modelConfig.provider);
-  const knowledgeGraphAgents = ["note_creation", "email-draft", "meeting-prep", "labeling_agent", "note_tagging_agent", "agent_notes_agent"];
-  const isKgAgent = knowledgeGraphAgents.includes(state.agentName!);
-  const isInlineTaskAgent = state.agentName === "inline_task_agent";
-  const defaultModel = signedIn ? "gpt-5.4" : modelConfig.model;
-  const defaultKgModel = signedIn ? "anthropic/claude-haiku-4.5" : defaultModel;
-  const defaultInlineTaskModel = signedIn ? "anthropic/claude-sonnet-4.6" : defaultModel;
-  const modelId = isInlineTaskAgent
-    ? defaultInlineTaskModel
-    : (isKgAgent && modelConfig.knowledgeGraphModel)
-      ? modelConfig.knowledgeGraphModel
-      : isKgAgent ? defaultKgModel : defaultModel;
+  // model+provider were resolved and frozen on the run at runs:create time.
+  // Look up the named provider's current credentials from models.json and
+  // instantiate the LLM client. No selection happens here.
+  if (!state.runModel || !state.runProvider) {
+    throw new Error(`Run ${runId} is missing model/provider on its start event`);
+  }
+  const modelId = state.runModel;
+  const providerConfig = await resolveProviderConfig(state.runProvider);
+  const provider = createProvider(providerConfig);
   const model = provider.languageModel(modelId);
-  logger.log(`using model: ${modelId}`);
+  logger.log(`using model: ${modelId} (provider: ${state.runProvider})`);
+
+  // Install use-case context for tool-internal LLM calls (e.g. parseFile)
+  // so they can tag their `llm_usage` events with the parent run's category.
+  enterUseCase({
+    useCase: state.runUseCase ?? "copilot_chat",
+    ...(state.runSubUseCase ? { subUseCase: state.runSubUseCase } : {}),
+    ...(state.agentName ? { agentName: state.agentName } : {}),
+  });

   let loopCounter = 0;
   let voiceInput = false;
@@ -942,27 +965,40 @@
           subflow: [],
         });

         let result: unknown = null;
-        if (agent.tools![toolCall.toolName].type === "agent") {
-          const subflowState = state.subflowStates[toolCallId];
-          for await (const event of streamAgent({
-            state: subflowState,
-            idGenerator,
-            runId,
-            messageQueue,
-            modelConfigRepo,
-            signal,
-            abortRegistry,
-          })) {
-            yield* processEvent({
-              ...event,
-              subflow: [toolCallId, ...event.subflow],
-            });
-          }
-          if (!subflowState.getPendingAskHumans().length && !subflowState.getPendingPermissions().length) {
-            result = subflowState.finalResponse();
-          }
-        } else {
-          result = await execTool(agent.tools![toolCall.toolName], toolCall.arguments, { runId, signal, abortRegistry });
+        try {
+          if (agent.tools![toolCall.toolName].type === "agent") {
+            const subflowState = state.subflowStates[toolCallId];
+            for await (const event of streamAgent({
+              state: subflowState,
+              idGenerator,
+              runId,
+              messageQueue,
+              modelConfigRepo,
+              signal,
+              abortRegistry,
+            })) {
+              yield* processEvent({
+                ...event,
+                subflow: [toolCallId, ...event.subflow],
+              });
+            }
+            if (!subflowState.getPendingAskHumans().length && !subflowState.getPendingPermissions().length) {
+              result = subflowState.finalResponse();
+            }
+          } else {
+            result = await execTool(agent.tools![toolCall.toolName], toolCall.arguments, { runId, signal, abortRegistry });
+          }
+        } catch (error) {
+          if ((error instanceof Error && error.name === "AbortError") || signal.aborted) {
+            throw error;
+          }
+          const message = error instanceof Error ? (error.message || error.name) : String(error);
+          _logger.log('tool failed', message);
+          result = {
+            success: false,
+            error: message,
+            toolName: toolCall.toolName,
+          };
         }
         const resultPayload = result === undefined ? null : result;
         const resultMsg: z.infer<typeof ToolMessage> = {
@@ -1094,6 +1130,13 @@
         instructionsWithDateTime,
         tools,
         signal,
+        {
+          useCase: state.runUseCase ?? "copilot_chat",
+          ...(state.runSubUseCase ? { subUseCase: state.runSubUseCase } : {}),
+          agentName: state.agentName ?? undefined,
+          modelId,
+          providerName: state.runProvider!,
+        },
       )) {
         messageBuilder.ingest(event);
         yield* processEvent({
@@ -1181,12 +1224,21 @@
   }
 }

+interface StreamLlmAnalytics {
+  useCase: UseCase;
+  subUseCase?: string;
+  agentName?: string;
+  modelId: string;
+  providerName: string;
+}
+
 async function* streamLlm(
   model: LanguageModel,
   messages: z.infer<typeof MessageList>,
   instructions: string,
   tools: ToolSet,
   signal?: AbortSignal,
+  analytics?: StreamLlmAnalytics,
 ): AsyncGenerator<z.infer<typeof LlmStepStreamEvent>, void, unknown> {
   const converted = convertFromMessages(messages);
   console.log(`! SENDING payload to model: `, JSON.stringify(converted))
@@ -1257,6 +1309,16 @@
       };
       break;
     case "finish-step":
+      if (analytics) {
+        captureLlmUsage({
+          useCase: analytics.useCase,
+          ...(analytics.subUseCase ? { subUseCase: analytics.subUseCase } : {}),
+          ...(analytics.agentName ? { agentName: analytics.agentName } : {}),
+          model: analytics.modelId,
+          provider: analytics.providerName,
+          usage: event.usage,
+        });
+      }
       yield {
         type: "finish-step",
         usage: event.usage,


@@ -0,0 +1,23 @@
import { isSignedIn } from '../account/account.js';
import { getBillingInfo } from '../billing/billing.js';
import { identify } from './posthog.js';
/**
* If the user has rowboat OAuth tokens, fetch their billing info and
* call posthog.identify(). Idempotent — safe to call on every app start.
* Catches all errors so analytics never blocks app launch.
*/
export async function identifyIfSignedIn(): Promise<void> {
try {
if (!(await isSignedIn())) return;
const billing = await getBillingInfo();
if (!billing.userId) return;
identify(billing.userId, {
...(billing.userEmail ? { email: billing.userEmail } : {}),
plan: billing.subscriptionPlan,
status: billing.subscriptionStatus,
});
} catch (err) {
console.error('[Analytics] startup identify failed:', err);
}
}


@@ -0,0 +1,37 @@
import fs from 'node:fs';
import path from 'node:path';
import { randomUUID } from 'node:crypto';
import { WorkDir } from '../config/config.js';
const INSTALLATION_PATH = path.join(WorkDir, 'config', 'installation.json');
let cached: string | null = null;
export function getInstallationId(): string {
if (cached) return cached;
try {
if (fs.existsSync(INSTALLATION_PATH)) {
const raw = fs.readFileSync(INSTALLATION_PATH, 'utf-8');
const parsed = JSON.parse(raw) as { installationId?: string };
if (parsed.installationId && typeof parsed.installationId === 'string') {
cached = parsed.installationId;
return cached;
}
}
} catch (err) {
console.error('[Analytics] Failed to read installation.json:', err);
}
const id = randomUUID();
try {
const dir = path.dirname(INSTALLATION_PATH);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
fs.writeFileSync(INSTALLATION_PATH, JSON.stringify({ installationId: id }, null, 2));
} catch (err) {
console.error('[Analytics] Failed to write installation.json:', err);
}
cached = id;
return id;
}
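The read-or-create pattern above is small enough to exercise directly. A sketch pointed at a temp directory (the real module targets `WorkDir/config/installation.json` and wraps each fs call in its own try/catch):

```typescript
import fs from 'node:fs'
import os from 'node:os'
import path from 'node:path'
import { randomUUID } from 'node:crypto'

// Temp location so the sketch is safe to run anywhere.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'install-'))
const file = path.join(dir, 'installation.json')

function getInstallationId(): string {
  // Reuse the persisted ID when the file exists...
  if (fs.existsSync(file)) {
    const parsed = JSON.parse(fs.readFileSync(file, 'utf-8')) as { installationId: string }
    return parsed.installationId
  }
  // ...otherwise mint one and persist it for every future launch.
  const id = randomUUID()
  fs.writeFileSync(file, JSON.stringify({ installationId: id }, null, 2))
  return id
}

const first = getInstallationId()
console.log(first === getInstallationId()) // true: the second call reads the same ID back
```

The stability across calls (and across process restarts, since the ID lives on disk) is exactly what makes it usable as a PostHog distinct_id.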


@@ -0,0 +1,90 @@
import { PostHog } from 'posthog-node';
import { getInstallationId } from './installation.js';
import { API_URL } from '../config/env.js';
// Build-time injected via esbuild `define` (apps/main/bundle.mjs).
// In dev/tsc, fall back to process.env so local runs work too.
const POSTHOG_KEY = process.env.POSTHOG_KEY ?? process.env.VITE_PUBLIC_POSTHOG_KEY ?? '';
const POSTHOG_HOST = process.env.POSTHOG_HOST ?? process.env.VITE_PUBLIC_POSTHOG_HOST ?? 'https://us.i.posthog.com';
let client: PostHog | null = null;
let initAttempted = false;
let identifiedUserId: string | null = null;
function getClient(): PostHog | null {
if (initAttempted) return client;
initAttempted = true;
if (!POSTHOG_KEY) {
console.log('[Analytics] POSTHOG_KEY not set; analytics disabled');
return null;
}
try {
client = new PostHog(POSTHOG_KEY, {
host: POSTHOG_HOST,
flushAt: 20,
flushInterval: 10_000,
});
// Tag the install with api_url as a person property up-front,
// so anonymous users are also segmentable by environment (api_url
// distinguishes prod / staging / custom — meaning is assigned in PostHog).
client.identify({
distinctId: getInstallationId(),
properties: { api_url: API_URL },
});
} catch (err) {
console.error('[Analytics] Failed to init PostHog:', err);
client = null;
}
return client;
}
function activeDistinctId(): string {
return identifiedUserId ?? getInstallationId();
}
export function capture(event: string, properties?: Record<string, unknown>): void {
const ph = getClient();
if (!ph) return;
try {
ph.capture({
distinctId: activeDistinctId(),
event,
properties,
});
} catch (err) {
console.error('[Analytics] capture failed:', err);
}
}
export function identify(userId: string, properties?: Record<string, unknown>): void {
const ph = getClient();
if (!ph) return;
try {
// Alias the anonymous installation ID to the rowboat user ID so historical
// anonymous events are linked to the identified user.
ph.alias({ distinctId: userId, alias: getInstallationId() });
ph.identify({
distinctId: userId,
properties: {
...properties,
api_url: API_URL,
},
});
identifiedUserId = userId;
} catch (err) {
console.error('[Analytics] identify failed:', err);
}
}
export function reset(): void {
identifiedUserId = null;
}
export async function shutdown(): Promise<void> {
if (!client) return;
try {
await client.shutdown();
} catch (err) {
console.error('[Analytics] shutdown failed:', err);
}
}
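The `identifiedUserId` / installation-ID fallback above is a tiny state machine worth seeing in isolation. A minimal sketch (the installation ID is a hard-coded stand-in here; the real module reads it from disk):

```typescript
// Stand-in for getInstallationId(); the real value comes from installation.json.
const installationId = 'install-0000-aaaa'

let identifiedUserId: string | null = null

function activeDistinctId(): string {
  // Events attribute to the signed-in user when known,
  // otherwise to the anonymous installation ID.
  return identifiedUserId ?? installationId
}

function identify(userId: string): void {
  identifiedUserId = userId
}

function reset(): void {
  identifiedUserId = null
}

console.log(activeDistinctId()) // install-0000-aaaa (anonymous)
identify('user-42')
console.log(activeDistinctId()) // user-42
reset()
console.log(activeDistinctId()) // install-0000-aaaa (anonymous again after sign-out)
```

Because `reset()` only clears the in-memory user, events after sign-out fall back to the installation ID rather than leaking the previous user's identity.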


@@ -0,0 +1,38 @@
import { capture } from './posthog.js';
import type { UseCase } from './use_case.js';
// Shape compatible with ai-sdk v5 `LanguageModelUsage`.
// All fields are optional because providers report subsets.
export interface LlmUsageInput {
inputTokens?: number;
outputTokens?: number;
totalTokens?: number;
reasoningTokens?: number;
cachedInputTokens?: number;
}
export interface CaptureLlmUsageArgs {
useCase: UseCase;
subUseCase?: string;
agentName?: string;
model: string;
provider: string;
usage: LlmUsageInput | undefined;
}
export function captureLlmUsage(args: CaptureLlmUsageArgs): void {
const usage = args.usage ?? {};
const properties: Record<string, unknown> = {
use_case: args.useCase,
model: args.model,
provider: args.provider,
input_tokens: usage.inputTokens ?? 0,
output_tokens: usage.outputTokens ?? 0,
total_tokens: usage.totalTokens ?? (usage.inputTokens ?? 0) + (usage.outputTokens ?? 0),
};
if (args.subUseCase) properties.sub_use_case = args.subUseCase;
if (args.agentName) properties.agent_name = args.agentName;
if (usage.cachedInputTokens != null) properties.cached_input_tokens = usage.cachedInputTokens;
if (usage.reasoningTokens != null) properties.reasoning_tokens = usage.reasoningTokens;
capture('llm_usage', properties);
}
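The token arithmetic in `captureLlmUsage` is easy to verify in isolation, in particular the `total_tokens` fallback when a provider omits the total. A standalone sketch of just that piece:

```typescript
interface LlmUsageInput {
  inputTokens?: number
  outputTokens?: number
  totalTokens?: number
}

// Mirrors the fallback above: trust the provider's total when present,
// otherwise sum input + output (missing parts count as 0).
function resolveTotalTokens(usage: LlmUsageInput): number {
  return usage.totalTokens ?? (usage.inputTokens ?? 0) + (usage.outputTokens ?? 0)
}

console.log(resolveTotalTokens({ inputTokens: 120, outputTokens: 30 })) // 150
console.log(resolveTotalTokens({ totalTokens: 200, inputTokens: 1 }))  // 200 (provider total wins)
console.log(resolveTotalTokens({}))                                    // 0
```

Note `??` rather than `||`: a reported `totalTokens: 0` would be kept as-is instead of triggering the sum.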


@@ -0,0 +1,28 @@
import { AsyncLocalStorage } from 'node:async_hooks';
export type UseCase = 'copilot_chat' | 'track_block' | 'meeting_note' | 'knowledge_sync';
export interface UseCaseContext {
useCase: UseCase;
subUseCase?: string;
agentName?: string;
}
const storage = new AsyncLocalStorage<UseCaseContext>();
export function withUseCase<T>(ctx: UseCaseContext, fn: () => T): T {
return storage.run(ctx, fn);
}
/**
* Permanently install a use-case context for the current async chain.
* Use inside generator functions where wrapping with `withUseCase()` doesn't
* compose. Child async work (e.g. tool execution) will inherit it.
*/
export function enterUseCase(ctx: UseCaseContext): void {
storage.enterWith(ctx);
}
export function getCurrentUseCase(): UseCaseContext | undefined {
return storage.getStore();
}
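To make the `run` vs `enterWith` distinction concrete, here is a minimal runnable sketch of the same pattern (plain strings stand in for the `UseCase` union):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

interface Ctx { useCase: string }
const storage = new AsyncLocalStorage<Ctx>()

// run(): the context is visible inside the callback and any async work it starts...
const inside = storage.run({ useCase: 'track_block' }, () => storage.getStore()?.useCase)
// ...and gone again once run() returns.
const after = storage.getStore()?.useCase

console.log(inside) // track_block
console.log(after)  // undefined (the context did not leak out of run())

// enterWith(): installs the context for the remainder of the current
// execution, which is why the module above reserves it for generators
// where wrapping the whole body in run() doesn't compose.
storage.enterWith({ useCase: 'meeting_note' })
console.log(storage.getStore()?.useCase) // meeting_note
```

The trade-off: `enterWith` has no automatic exit, so anything later on the same async chain inherits the context until something overwrites it.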


@@ -4,11 +4,40 @@ import { TrackBlockSchema } from '@x/shared/dist/track-block.js';
 const schemaYaml = stringifyYaml(z.toJSONSchema(TrackBlockSchema)).trimEnd();

+const richBlockMenu = `**5. Rich block render — when the data has a natural visual form.**
+
+The track agent can emit *rich blocks* — special fenced blocks the editor renders as styled UI (charts, calendars, embedded iframes, etc.). When the data fits one of these shapes, instruct the agent explicitly so it doesn't fall back to plain markdown:
+
+- \`table\` — multi-row data, scoreboards, leaderboards. *"Render as a \`table\` block with columns Rank, Title, Points, Comments."*
+- \`chart\` — time series, breakdowns, share-of-total. *"Render as a \`chart\` block (line, bar, or pie) with x=date, y=rate."*
+- \`mermaid\` — flowcharts, sequence/relationship diagrams, gantt charts. *"Render as a \`mermaid\` diagram."*
+- \`calendar\` — upcoming events / agenda. *"Render as a \`calendar\` block."*
+- \`email\` — single email thread digest (subject, from, summary, latest body, optional draft). *"Render the most important unanswered thread as an \`email\` block."*
+- \`image\` — single image with caption. *"Render as an \`image\` block."*
+- \`embed\` — YouTube or Figma. *"Render as an \`embed\` block."*
+- \`iframe\` — live dashboards, status pages, anything that benefits from being live — not snapshotted. *"Render as an \`iframe\` block pointing to <url>."*
+- \`transcript\` — long meeting transcripts (collapsible). *"Render as a \`transcript\` block."*
+- \`prompt\` — a "next step" Copilot card the user can click to start a chat. *"End with a \`prompt\` block labeled '<short label>' that runs '<longer prompt to send to Copilot>'."*
+
+You **do not** need to write the block body yourself — describe the desired output in the instruction and the track agent will format it (it knows each block's exact schema). Avoid \`track\` and \`task\` block types — those are user-authored input, not agent output.
+
+- Good: "Show today's calendar events. Render as a \`calendar\` block with \`showJoinButton: true\`."
+- Good: "Plot USD/INR over the last 7 days as a \`chart\` block — line chart, x=date, y=rate."
+- Bad: "Show today's calendar." (vague — the agent may produce a markdown bullet list when the user wants the rich block)`;
+
 export const skill = String.raw`
 # Tracks Skill

 You are helping the user create and manage **track blocks** — YAML-fenced, auto-updating content blocks embedded in notes. Load this skill whenever the user wants to track, monitor, watch, or keep an eye on something in a note, asks for recurring/auto-refreshing content ("every morning...", "show current...", "pin live X here"), or presses Cmd+K and requests auto-updating content at the cursor.

+## First: Just Do It — Do Not Ask About Edit Mode
+
+Track creation and editing are **action-first**. When the user asks to track, monitor, watch, or pin auto-updating content, you proceed directly — read the file, construct the block, ` + "`" + `workspace-edit` + "`" + ` it in. Do not ask "Should I make edits directly, or show you changes first for approval?" — that prompt belongs to generic document editing, not to tracks.
+- If another skill or an earlier turn already asked about edit mode and is waiting, treat the user's track request as implicit "direct mode" and proceed.
+- You may still ask **one** short clarifying question when genuinely ambiguous (e.g. which note to add it to). Not about permission to edit.
+- The Suggested Topics flow below is the one first-turn-confirmation exception — leave it intact.
+
 ## What Is a Track Block

 A track block is a scheduled, agent-run block embedded directly inside a markdown note. Each block has:
@@ -19,7 +48,8 @@ A track block is a scheduled, agent-run block embedded directly inside a markdow
 ` + "```" + `track
 trackId: chicago-time
-instruction: Show the current time in Chicago, IL in 12-hour format.
+instruction: |
+  Show the current time in Chicago, IL in 12-hour format.
 active: true
 schedule:
   type: cron
@@ -57,6 +87,23 @@ ${schemaYaml}
 **Runtime-managed fields — never write these yourself:** ` + "`" + `lastRunAt` + "`" + `, ` + "`" + `lastRunId` + "`" + `, ` + "`" + `lastRunSummary` + "`" + `.

+## Do Not Set ` + "`" + `model` + "`" + ` or ` + "`" + `provider` + "`" + ` (almost always)
+
+The schema includes optional ` + "`" + `model` + "`" + ` and ` + "`" + `provider` + "`" + ` fields. **Omit them.** A user-configurable global default already picks the right model and provider for tracks; setting per-track values bypasses that and is almost always wrong.
+
+The only time these belong on a track:
+- The user **explicitly** named a model or provider for *this specific track* in their request ("use Claude Opus for this one", "force this track onto OpenAI"). Quote the user's wording back when confirming.
+
+Things that are **not** reasons to set these:
+- "Tracks should be fast" / "I want a small model" — that's a global preference, not a per-track one. Leave it; the global default exists.
+- "This track is complex" — write a clearer instruction; don't reach for a different model.
+- "Just to be safe" / "in case it matters" — this is the antipattern. Leave them out.
+- The user changed their main chat model — that has nothing to do with tracks. Leave them out.
+
+When in doubt: omit both fields. Never volunteer them. Never include them in a starter template you suggest. If you find yourself adding them as a sensible default, stop — you're wrong.
+
 ## Choosing a trackId

 - Kebab-case, short, descriptive: ` + "`" + `chicago-time` + "`" + `, ` + "`" + `sfo-weather` + "`" + `, ` + "`" + `hn-top5` + "`" + `, ` + "`" + `btc-usd` + "`" + `.
@@ -68,16 +115,118 @@ ${schemaYaml}
## Writing a Good Instruction
### The Frame: This Is a Personal Knowledge Tracker
Track output lives in a personal knowledge base the user scans frequently. Aim for data-forward, scannable output - the answer to "what's current / what changed?" in the fewest words that carry real information. Not prose. Not decoration.
### Core Rules
- **Specific and actionable.** State exactly what to fetch or compute.
- **Single-focus.** One block = one purpose. Split "weather + news + stocks" into three blocks, don't bundle.
- **Imperative voice, 1-3 sentences.**
- **Specify output shape.** Describe it concretely: "one line: ` + "`" + `<temp>°F, <conditions>` + "`" + `", "3-column markdown table", "bulleted digest of 5 items".
### Self-Sufficiency (critical)
The instruction runs later, in a background scheduler, with **no chat context and no memory of this conversation**. It must stand alone.
**Never use phrases that depend on prior conversation or prior runs:**
- "as before", "same style as before", "like last time"
- "keep the format we discussed", "matching the previous output"
- "continue from where you left off" (without stating the state)
If you want consistent style across runs, **describe the style inline** (e.g. "a 3-column markdown table with headers ` + "`" + `Location` + "`" + `, ` + "`" + `Local Time` + "`" + `, ` + "`" + `Offset` + "`" + `"; "a one-line status: HH:MM, conditions, temp"). The track agent only sees your instruction - not this chat, not what you produced last time.
### Output Patterns Match the Data
Pick a shape that fits what the user is tracking. Five common patterns - the first four are plain markdown; the fifth is a rich rendered block:
**1. Single metric / status line.**
- Good: "Fetch USD/INR. Return one line: ` + "`" + `USD/INR: <rate> (as of <HH:MM IST>)` + "`" + `."
- Bad: "Give me a nice update about the dollar rate."
**2. Compact table.**
- Good: "Show current local time for India, Chicago, Indianapolis as a 3-column markdown table: ` + "`" + `Location | Local Time | Offset vs India` + "`" + `. One row per location, no prose."
- Bad: "Show a polished, table-first world clock with a pleasant layout."
**3. Rolling digest.**
- Good: "Summarize the top 5 HN front-page stories as bullets: ` + "`" + `- <title> (<points> pts, <comments> comments)` + "`" + `. No commentary."
- Bad: "Give me the top HN stories with thoughtful takeaways."
**4. Status / threshold watch.**
- Good: "Check https://status.example.com. Return one line: ` + "`" + ` All systems operational` + "`" + ` or ` + "`" + ` <component>: <status>` + "`" + `. If degraded, add one bullet per affected component."
- Bad: "Keep an eye on the status page and tell me how it looks."
${richBlockMenu}
### Anti-Patterns
- **Decorative adjectives** describing the output: "polished", "clean", "beautiful", "pleasant", "nicely formatted" - they tell the agent nothing concrete.
- **References to past state** without a mechanism to access it ("as before", "same as last time").
- **Bundling multiple purposes** into one instruction - split into separate track blocks.
- **Open-ended prose requests** ("tell me about X", "give me thoughts on X").
- **Output-shape words without a concrete shape** ("dashboard-like", "report-style").
## YAML String Style (critical - read before writing any ` + "`" + `instruction` + "`" + ` or ` + "`" + `eventMatchCriteria` + "`" + `)
The two free-form fields ` + "`" + `instruction` + "`" + ` and ` + "`" + `eventMatchCriteria` + "`" + ` are where YAML parsing usually breaks. The runner re-emits the full YAML block every time it writes ` + "`" + `lastRunAt` + "`" + `, ` + "`" + `lastRunSummary` + "`" + `, etc., and the YAML library may re-flow long plain (unquoted) strings onto multiple lines. Once that happens, any ` + "`" + `:` + "`" + ` **followed by a space** inside the value silently corrupts the block: YAML interprets the ` + "`" + `:` + "`" + ` as a new key/value separator and the instruction gets truncated.
Real failure seen in the wild: an instruction containing the phrase ` + "`" + `"polished UI style as before: clean, compact..."` + "`" + ` was written as a plain scalar, got re-emitted across multiple lines on the next run, and the ` + "`" + `as before:` + "`" + ` became a phantom key. The block parsed as garbage after that.
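To make that failure mode concrete, here is a hedged sketch (the instruction text is reconstructed for illustration, not the actual track):

```yaml
# Plain scalar, re-flowed across lines by a later re-emit:
instruction: Maintain the polished UI style as
  before: clean, compact rows
# Parsers now either error out or treat "before" as a nested key;
# either way the instruction is truncated at "as".

# Literal block scalar: the same text survives any re-emit:
instruction: |
  Maintain the polished UI style as before: clean, compact rows
```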
### The rule: always use a safe scalar style
**Default to the literal block scalar (` + "`" + `|` + "`" + `) for ` + "`" + `instruction` + "`" + ` and ` + "`" + `eventMatchCriteria` + "`" + `, every time.** It is the only style that is robust across the full range of punctuation these fields typically contain, and it is safe even if the content later grows to multiple lines.
### Preferred: literal block scalar (` + "`" + `|` + "`" + `)
` + "```" + `yaml
instruction: |
Show current local time for India, Chicago, and Indianapolis as a
3-column markdown table: Location | Local Time | Offset vs India.
One row per location, 24-hour time (HH:MM), no extra prose.
Note: when a location is in DST, reflect that in the offset column.
eventMatchCriteria: |
Emails from the finance team about Q3 budget or OKRs.
` + "```" + `
- ` + "`" + `|` + "`" + ` preserves line breaks verbatim. Colons, ` + "`" + `#` + "`" + `, quotes, leading ` + "`" + `-` + "`" + `, percent signs - all literal. No escaping needed.
- **Indent every content line by 2 spaces** relative to the key (` + "`" + `instruction:` + "`" + `). Use spaces, never tabs.
- Leave a real newline after ` + "`" + `|` + "`" + ` - content starts on the next line, not the same line.
- Default chomping (no modifier) is fine. Do **not** add ` + "`" + `-` + "`" + ` or ` + "`" + `+` + "`" + ` unless you know you need them.
- A ` + "`" + `|` + "`" + ` block is terminated by a line indented less than the content - typically the next sibling key (` + "`" + `active:` + "`" + `, ` + "`" + `schedule:` + "`" + `).
### Acceptable alternative: double-quoted on a single line
Fine for short single-sentence fields with no newline needs:
` + "```" + `yaml
instruction: "Show the current time in Chicago, IL in 12-hour format."
eventMatchCriteria: "Emails about Q3 planning, OKRs, or roadmap decisions."
` + "```" + `
- Escape ` + "`" + `"` + "`" + ` as ` + "`" + `\"` + "`" + ` and backslash as ` + "`" + `\\` + "`" + `.
- Prefer ` + "`" + `|` + "`" + ` the moment the string needs two sentences or a newline.
### Single-quoted on a single line (only if double-quoted would require heavy escaping)
` + "```" + `yaml
instruction: 'He said "hi" at 9:00.'
` + "```" + `
- A literal single quote is escaped by doubling it: ` + "`" + `'it''s fine'` + "`" + `.
- No other escape sequences work.
### Do NOT use plain (unquoted) scalars for these two fields
Even if the current value looks safe, a future edit (by you or the user) may introduce a ` + "`" + `:` + "`" + ` or ` + "`" + `#` + "`" + `, and a future re-emit may fold the line. The ` + "`" + `|` + "`" + ` style is safe under **all** future edits; plain scalars are not.
### Editing an existing track
If you ` + "`" + `workspace-edit` + "`" + ` an existing track's ` + "`" + `instruction` + "`" + ` or ` + "`" + `eventMatchCriteria` + "`" + ` and find it is still a plain scalar, **upgrade it to ` + "`" + `|` + "`" + `** in the same edit. Don't leave a plain scalar behind that the next run will corrupt.
### Never-hand-write fields
` + "`" + `lastRunAt` + "`" + `, ` + "`" + `lastRunId` + "`" + `, ` + "`" + `lastRunSummary` + "`" + ` are owned by the runner. Don't touch them — don't even try to style them. If your ` + "`" + `workspace-edit` + "`" + `'s ` + "`" + `oldString` + "`" + ` happens to include these lines, copy them byte-for-byte into ` + "`" + `newString` + "`" + ` unchanged.
## Schedules
@@ -132,9 +281,12 @@ In addition to manual and scheduled, a track can be triggered by **events** —
` + "```" + `track
trackId: q3-planning-emails
instruction: |
  Maintain a running summary of decisions and open questions about Q3
  planning, drawn from emails on the topic.
active: true
eventMatchCriteria: |
  Emails about Q3 planning, roadmap decisions, or quarterly OKRs.
` + "```" + `
How it works:
@@ -155,6 +307,8 @@ Tracks **without** ` + "`" + `eventMatchCriteria` + "`" + ` opt out of events en
## Insertion Workflow
**Reminder:** once you have enough to act, act. Do not pause to ask about edit mode.
### Cmd+K with cursor context
When the user invokes Cmd+K, the context includes an attachment mention like:
@@ -201,7 +355,8 @@ Write it verbatim like this (including the blank line between fence and target):
` + "```" + `track
trackId: <id>
instruction: |
  <instruction, indented 2 spaces, may span multiple lines>
active: true
schedule:
  type: cron
@@ -214,6 +369,7 @@ schedule:
**Rules:**
- One blank line between the closing ` + "`" + "```" + `" + " fence and the ` + "`" + `<!--track-target:ID-->` + "`" + `.
- Target pair is **empty on creation**. The runner fills it on the first run.
- **Always use the literal block scalar (` + "`" + `|` + "`" + `)** for ` + "`" + `instruction` + "`" + ` and ` + "`" + `eventMatchCriteria` + "`" + `, indented 2 spaces. Never a plain (unquoted) scalar - see the YAML String Style section above for why.
- **Always quote cron expressions** in YAML - they contain spaces and ` + "`" + `*` + "`" + `.
- Use 2-space YAML indent. No tabs.
- Top-level markdown only - never inside a code fence, blockquote, or table.
@@ -317,7 +473,8 @@ Minimal template:
` + "```" + `track
trackId: <kebab-id>
instruction: |
  <what to produce - always use ` + "`" + `|` + "`" + `, indented 2 spaces>
active: true
schedule:
  type: cron
@@ -328,6 +485,8 @@ schedule:
<!--/track-target:<kebab-id>-->
Top cron expressions: ` + "`" + `"0 * * * *"` + "`" + ` (hourly), ` + "`" + `"0 8 * * *"` + "`" + ` (daily 8am), ` + "`" + `"0 9 * * 1-5"` + "`" + ` (weekdays 9am), ` + "`" + `"*/15 * * * *"` + "`" + ` (every 15m).
YAML style reminder: ` + "`" + `instruction` + "`" + ` and ` + "`" + `eventMatchCriteria` + "`" + ` are **always** ` + "`" + `|` + "`" + ` block scalars. Never plain. Never leave a plain scalar in place when editing.
`;
export default skill;


@@ -21,9 +21,10 @@ import { BrowserControlInputSchema, type BrowserControlInput } from "@x/shared/d
import type { ToolContext } from "./exec-tool.js";
import { generateText } from "ai";
import { createProvider } from "../../models/models.js";
import { getDefaultModelAndProvider, resolveProviderConfig } from "../../models/defaults.js";
import { captureLlmUsage } from "../../analytics/usage.js";
import { getCurrentUseCase } from "../../analytics/use_case.js";
import { isSignedIn } from "../../account/account.js";
import { getAccessToken } from "../../auth/tokens.js";
import { API_URL } from "../../config/env.js";
import { updateContent, updateTrackBlock } from "../../knowledge/track/fileops.js";
@@ -746,13 +747,9 @@ export const BuiltinTools: z.infer<typeof BuiltinToolsSchema> = {
const base64 = buffer.toString('base64');

const { model: modelId, provider: providerName } = await getDefaultModelAndProvider();
const providerConfig = await resolveProviderConfig(providerName);
const model = createProvider(providerConfig).languageModel(modelId);

const userPrompt = prompt || 'Convert this file to well-structured markdown.';
@@ -769,6 +766,16 @@
],
});

const ctx = getCurrentUseCase();
captureLlmUsage({
  useCase: ctx?.useCase ?? 'copilot_chat',
  subUseCase: 'file_parse',
  ...(ctx?.agentName ? { agentName: ctx.agentName } : {}),
  model: modelId,
  provider: providerName,
  usage: response.usage,
});

return {
  success: true,
  fileName,


@@ -216,12 +216,15 @@ export async function refreshTokens(
  return tokens;
}
const EXPIRY_MARGIN_SECONDS = 60;
/**
 * Check if tokens are expired. Treats tokens as expired EXPIRY_MARGIN_SECONDS
 * before the real expiry to absorb clock skew and in-flight request latency.
 */
export function isTokenExpired(tokens: OAuthTokens): boolean {
  const now = Math.floor(Date.now() / 1000);
  return tokens.expires_at <= now + EXPIRY_MARGIN_SECONDS;
}
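A self-contained sketch of the margin behavior, with the token type reduced to the two fields the check uses (the 60-second constant matches the diff; everything else is illustrative):

```typescript
const EXPIRY_MARGIN_SECONDS = 60;

interface OAuthTokens {
  access_token: string;
  expires_at: number; // unix seconds
}

function isTokenExpired(tokens: OAuthTokens): boolean {
  const now = Math.floor(Date.now() / 1000);
  return tokens.expires_at <= now + EXPIRY_MARGIN_SECONDS;
}

const now = Math.floor(Date.now() / 1000);
// A token with 30s of real lifetime left is already treated as expired,
// so a request started now cannot race past the hard expiry mid-flight.
const nearlyExpired = isTokenExpired({ access_token: "t", expires_at: now + 30 });
// A token with 10 minutes left is still considered valid.
const fresh = isTokenExpired({ access_token: "t", expires_at: now + 600 });
console.log({ nearlyExpired, fresh });
```

The point of the margin is that expiry is decided at check time but the token is used slightly later; shifting the cutoff forward by 60 seconds absorbs that gap.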
/**


@@ -3,18 +3,12 @@ import { IOAuthRepo } from './repo.js';
import { IClientRegistrationRepo } from './client-repo.js';
import { getProviderConfig } from './providers.js';
import * as oauthClient from './oauth-client.js';
import { OAuthTokens } from './types.js';

let refreshInFlight: Promise<OAuthTokens> | null = null;

async function performRefresh(tokens: OAuthTokens): Promise<OAuthTokens> {
  console.log("Refreshing rowboat access token");
  if (!tokens.refresh_token) {
    throw new Error('Rowboat token expired and no refresh token available. Please sign in again.');
  }
@@ -40,7 +34,29 @@
    tokens.refresh_token,
    tokens.scopes,
  );
  const oauthRepo = container.resolve<IOAuthRepo>('oauthRepo');
  await oauthRepo.upsert('rowboat', { tokens: refreshed });
  return refreshed;
}

export async function getAccessToken(): Promise<string> {
  const oauthRepo = container.resolve<IOAuthRepo>('oauthRepo');
  const { tokens } = await oauthRepo.read('rowboat');
  if (!tokens) {
    throw new Error('Not signed into Rowboat');
  }
  if (!oauthClient.isTokenExpired(tokens)) {
    return tokens.access_token;
  }
  if (!refreshInFlight) {
    refreshInFlight = performRefresh(tokens).finally(() => {
      refreshInFlight = null;
    });
  }
  const refreshed = await refreshInFlight;
  return refreshed.access_token;
}
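The single-flight guard above can be exercised in isolation. In this sketch, `fakeRefresh` is a hypothetical stand-in for `performRefresh`: slow, and counting invocations so we can see the dedupe:

```typescript
let refreshCalls = 0;
let refreshInFlight: Promise<string> | null = null;

// Stand-in for performRefresh(): slow, counts how often it actually runs.
async function fakeRefresh(): Promise<string> {
  refreshCalls++;
  await new Promise((r) => setTimeout(r, 20));
  return `token-${refreshCalls}`;
}

async function getToken(): Promise<string> {
  // Same shape as getAccessToken(): the first caller starts the refresh,
  // later callers await the same in-flight promise.
  if (!refreshInFlight) {
    refreshInFlight = fakeRefresh().finally(() => {
      refreshInFlight = null;
    });
  }
  return refreshInFlight;
}

// Five concurrent callers share one refresh instead of firing five.
const tokens = await Promise.all([getToken(), getToken(), getToken(), getToken(), getToken()]);
console.log(refreshCalls, tokens);
```

Resetting `refreshInFlight` in `finally` (rather than after `await`) matters: it clears the slot whether the refresh resolved or rejected, so a failed refresh does not wedge every future caller.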


@@ -3,6 +3,7 @@ import path from 'path';
import { google } from 'googleapis';
import { WorkDir } from '../config/config.js';
import { createRun, createMessage } from '../runs/runs.js';
import { getKgModel } from '../models/defaults.js';
import { waitForRunCompletion } from '../agents/utils.js';
import { serviceLogger } from '../services/service_logger.js';
import { loadUserConfig, updateUserEmail } from '../config/user_config.js';
@@ -305,7 +306,12 @@ async function processAgentNotes(): Promise<void> {
const timestamp = new Date().toISOString();
const message = `Current timestamp: ${timestamp}\n\nProcess the following source material and update the Agent Notes folder accordingly.\n\n${messageParts.join('\n\n')}`;

const agentRun = await createRun({
  agentId: AGENT_ID,
  model: await getKgModel(),
  useCase: 'knowledge_sync',
  subUseCase: 'agent_notes',
});
await createMessage(agentRun.id, message);
await waitForRunCompletion(agentRun.id);


@@ -38,6 +38,7 @@ const SOURCE_FOLDERS = [
  'gmail_sync',
  path.join('knowledge', 'Meetings', 'fireflies'),
  path.join('knowledge', 'Meetings', 'granola'),
  path.join('knowledge', 'Meetings', 'rowboat'),
];

// Voice memos are now created directly in knowledge/Voice Memos/<date>/
@@ -251,6 +252,8 @@
// Create a run for the note creation agent
const run = await createRun({
  agentId: NOTE_CREATION_AGENT,
  useCase: 'knowledge_sync',
  subUseCase: 'build_graph',
});

const suggestedTopicsContent = readSuggestedTopicsFile();


@@ -1,44 +1,157 @@
import path from 'path';
import fs from 'fs';
import { stringify as stringifyYaml } from 'yaml';
import { TrackBlockSchema } from '@x/shared/dist/track-block.js';
import { WorkDir } from '../config/config.js';
import z from 'zod';

const KNOWLEDGE_DIR = path.join(WorkDir, 'knowledge');
const DAILY_NOTE_PATH = path.join(KNOWLEDGE_DIR, 'Today.md');
interface Section {
  heading: string;
  track: z.infer<typeof TrackBlockSchema>;
}
const SECTIONS: Section[] = [
{
heading: '## ⏱ Up Next',
track: {
trackId: 'up-next',
instruction:
`Write 1-3 sentences of plain markdown giving the user a shoulder-tap about what's next on their calendar today.
This section refreshes on calendar changes, not on a clock tick - do NOT promise live minute countdowns. Frame urgency in buckets based on the event's start time relative to now:
- Start time is in the past or within roughly half an hour - imminent: name the meeting and say it's starting soon (e.g. "Standup is starting — join link in the Calendar section below.").
- Start time is later this morning or this afternoon - upcoming: name the meeting and roughly when (e.g. "Design review later this morning." / "1:1 with Sam this afternoon.").
- Start time is several hours out or nothing before then - focus block: frame the gap (e.g. "Next up is the all-hands at 3pm — good long focus block until then.").
Use the event's start time of day ("at 3pm", "this afternoon") rather than a countdown ("in 40 minutes"). Countdowns go stale between syncs.
Data: read today's events from calendar_sync/ (workspace-readdir, then workspace-readFile each .json file). Filter to events that start today and haven't ended yet - for finding the next event, pick the earliest upcoming one; if all have passed, treat as clear.
If you find quick context in knowledge/ that's genuinely useful, add one short clause ("Ramnique pushed the OAuth PR yesterday — might come up"). Use workspace-grep / workspace-readFile conservatively; don't stall on deep research.
If nothing remains today, output exactly: Clear for the rest of the day.
Plain markdown prose only - no calendar block, no email block, no headings.`,
eventMatchCriteria:
`Calendar event changes affecting today — new meetings, reschedules, cancellations, meetings starting soon. Skip changes to events on other days.`,
active: true,
},
},
{
heading: '## 📅 Calendar',
track: {
trackId: 'calendar',
instruction:
`Emit today's meetings as a calendar block titled "Today's Meetings".
Data: read calendar_sync/ via workspace-readdir, then workspace-readFile each .json event file. Filter to events occurring today. After 10am local time, drop meetings that have already ended - only include meetings that haven't ended yet.
This section refreshes on calendar changes, not on a clock tick - the "drop ended meetings" rule applies on each refresh, so an ended meeting disappears the next time any calendar event changes (not exactly on the clock hour). That's fine.
Always emit the calendar block, even when there are no remaining events (in that case use events: [] and showJoinButton: false). Set showJoinButton: true whenever any event has a conferenceLink.
After the block, you MAY add one short markdown line per event giving useful prep context pulled from knowledge/ ("Design review: last week we agreed to revisit the type-picker UX."). Keep it tight - one line each, only when meaningful. Skip routine/recurring meetings.`,
eventMatchCriteria:
`Calendar event changes affecting today — additions, updates, cancellations, reschedules.`,
active: true,
},
},
{
heading: '## 📧 Emails',
track: {
trackId: 'emails',
instruction:
`Maintain a digest of email threads worth the user's attention today, rendered as zero or more email blocks (one per thread).
Event-driven path (primary): the agent message will include a "Gmail sync update" digest payload describing one or more freshly-synced threads from a single sync run. The digest lists each thread with its subject, sender, date, threadId, and body. Iterate over every thread in the payload and decide per thread whether it warrants surfacing. Skip marketing, auto-notifications, closed-out threads, and other low-signal mail. For threads that are attention-worthy, integrate them into the existing digest: add a new email block for a new threadId, or update the existing block if the threadId is already shown. If NONE of the threads in the payload are attention-worthy, skip the update - do NOT call update-track-content. Emit at most one update-track-content call that covers the full set of changes from this event.
Manual path (fallback): with no event payload, scan gmail_sync/ via workspace-readdir (skip sync_state.json and attachments/). Read threads with workspace-readFile. Prioritize threads whose frontmatter action field is "reply" or "respond", plus other high-signal recent threads.
Each email block should include threadId, subject, from, date, summary, and latest_email. For threads that need a reply, add a draft_response written in the user's voice - direct, informal, no fluff. For FYI threads, omit draft_response.
If there is genuinely nothing to surface, output the single line: No new emails.
Do NOT re-list threads the user has already seen unless their state changed (new reply, status flip).`,
eventMatchCriteria:
`New or updated email threads that may need the user's attention today — drafts to send, replies to write, urgent requests, time-sensitive info. Skip marketing, newsletters, auto-notifications, and chatter on closed threads.`,
active: true,
},
},
{
heading: '## 📰 What You Missed',
track: {
trackId: 'what-you-missed',
instruction:
`Short markdown summary of what happened yesterday that matters this morning.
Data sources:
- knowledge/Meetings/<source>/<YYYY-MM-DD>/meeting-<timestamp>.md - use workspace-readdir with recursive: true on knowledge/Meetings, filter for folders matching yesterday's date (compute yesterday from the current local date), read each matching file. Pull out: decisions made, action items assigned, blockers raised, commitments.
- gmail_sync/ - skim for threads from yesterday that went unresolved or still need a reply.
Skip recurring/routine events (standups, weekly syncs) unless something unusual happened in them.
Write concise markdown - a few bullets or a short paragraph, whichever reads better. Lead with anything that shifts the user's priorities today.
If nothing notable happened, output exactly: Quiet day yesterday - nothing to flag.
Do NOT manufacture content to fill the section.`,
active: true,
schedule: {
type: 'cron',
expression: '0 7 * * *',
},
},
},
{
heading: '## ✅ Today\'s Priorities',
track: {
trackId: 'priorities',
instruction:
`Ranked markdown list of the real, actionable items the user should focus on today.
Data sources:
- Yesterday's meeting notes under knowledge/Meetings/<source>/<YYYY-MM-DD>/ - action items assigned to the user are often the most important source.
- knowledge/ - use workspace-grep for "- [ ]" checkboxes, explicit action items, deadlines, follow-ups.
- Optional: workspace-readFile on knowledge/Today.md for the current "What You Missed" section - useful for alignment.
Rules:
- Do NOT list calendar events as tasks - they're already in the Calendar section.
- Do NOT list trivial admin (filing small invoices, archiving spam).
- Rank by importance. Lead with the most critical item. Note time-sensitivity when it exists ("needs to go out before the 3pm review").
- Add a brief reason for each item when it's not self-evident.
If nothing genuinely needs attention, output exactly: No pressing tasks today - good day to make progress on bigger items.
Do NOT invent busywork.`,
active: true,
schedule: {
type: 'cron',
expression: '30 7 * * *',
},
},
},
];
function buildDailyNoteContent(): string {
  const parts: string[] = ['# Today', ''];
  for (const { heading, track } of SECTIONS) {
    const yaml = stringifyYaml(track, { lineWidth: 0, blockQuote: 'literal' }).trimEnd();
    parts.push(
      heading,
      '',
      '```track',
      yaml,
      '```',
      '',
      `<!--track-target:${track.trackId}-->`,
      `<!--/track-target:${track.trackId}-->`,
      '',
    );
  }
  return parts.join('\n');
}
export function ensureDailyNote(): void {


@@ -0,0 +1,18 @@
const locks = new Map<string, Promise<void>>();

export async function withFileLock<T>(absPath: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(absPath) ?? Promise.resolve();
  let release!: () => void;
  const gate = new Promise<void>((r) => { release = r; });
  const myTail = prev.then(() => gate);
  locks.set(absPath, myTail);
  try {
    await prev;
    return await fn();
  } finally {
    release();
    if (locks.get(absPath) === myTail) {
      locks.delete(absPath);
    }
  }
}
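A quick usage sketch of `withFileLock` (the paths and payloads are made up): writers to the same path serialize in call order, while a different path proceeds independently, and the lock map cleans itself up once the last writer for a path finishes:

```typescript
const locks = new Map<string, Promise<void>>();

async function withFileLock<T>(absPath: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(absPath) ?? Promise.resolve();
  let release!: () => void;
  const gate = new Promise<void>((r) => { release = r; });
  const myTail = prev.then(() => gate);
  locks.set(absPath, myTail);
  try {
    await prev;
    return await fn();
  } finally {
    release();
    // Only delete if no later caller has queued behind us.
    if (locks.get(absPath) === myTail) {
      locks.delete(absPath);
    }
  }
}

const order: string[] = [];
const slow = withFileLock("/tmp/a.md", async () => {
  await new Promise((r) => setTimeout(r, 30));
  order.push("a-first");
});
const queued = withFileLock("/tmp/a.md", async () => {
  order.push("a-second"); // runs only after the first /tmp/a.md writer finishes
});
const other = withFileLock("/tmp/b.md", async () => {
  order.push("b"); // different path: not blocked behind /tmp/a.md
});
await Promise.all([slow, queued, other]);
console.log(order, locks.size);
```

The per-path promise chain acts as a FIFO queue, so concurrent `workspace-edit`-style writes to one file cannot interleave mid-write.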


@@ -13,7 +13,6 @@ export function getRaw(): string {
const defaultEndISO = defaultEnd.toISOString();

return `---
tools:
${toolEntries}
---


@@ -4,11 +4,13 @@ import { CronExpressionParser } from 'cron-parser';
import { generateText } from 'ai';
import { WorkDir } from '../config/config.js';
import { createRun, createMessage, fetchRun } from '../runs/runs.js';
import { getKgModel } from '../models/defaults.js';
import container from '../di/container.js';
import type { IModelConfigRepo } from '../models/repo.js';
import { createProvider } from '../models/models.js';
import { inlineTask } from '@x/shared';
import { extractAgentResponse, waitForRunCompletion } from '../agents/utils.js';
import { captureLlmUsage } from '../analytics/usage.js';

const SYNC_INTERVAL_MS = 15 * 1000; // 15 seconds
const INLINE_TASK_AGENT = 'inline_task_agent';
@@ -467,7 +469,12 @@ async function processInlineTasks(): Promise<void> {
console.log(`[InlineTasks] Running task: "${task.instruction.slice(0, 80)}..."`);
try {
  const run = await createRun({
    agentId: INLINE_TASK_AGENT,
    model: await getKgModel(),
    useCase: 'knowledge_sync',
    subUseCase: 'inline_task_run',
  });

  const message = [
    `Execute the following instruction from the note "${relativePath}":`,
@@ -547,7 +554,12 @@ export async function processRowboatInstruction(
  scheduleLabel: string | null;
  response: string | null;
}> {
const run = await createRun({
  agentId: INLINE_TASK_AGENT,
  model: await getKgModel(),
  useCase: 'knowledge_sync',
  subUseCase: 'inline_task_run',
});

const message = [
  `Process the following @rowboat instruction from the note "${notePath}":`,
@ -658,6 +670,14 @@ Respond with ONLY valid JSON: either a schedule object or null. No other text.`;
prompt: instruction, prompt: instruction,
}); });
captureLlmUsage({
useCase: 'knowledge_sync',
subUseCase: 'inline_task_classify',
model: config.model,
provider: config.provider.flavor,
usage: result.usage,
});
let text = result.text.trim(); let text = result.text.trim();
console.log('[classifySchedule] LLM response:', text); console.log('[classifySchedule] LLM response:', text);
// Strip markdown code fences if the LLM wraps the JSON // Strip markdown code fences if the LLM wraps the JSON
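The diff calls `captureLlmUsage(...)` after each LLM call but never shows the helper itself (it lives in `src/analytics/usage.ts`, outside this diff). A minimal sketch of what such a helper might look like — the event name, property keys, and token-field names here are assumptions, not taken from the diff:

```typescript
// Hypothetical sketch — the real captureLlmUsage is not part of this diff
// and may differ. Property names below are assumptions.
interface LlmUsage {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
}

interface LlmUsageEvent {
  useCase: string;
  subUseCase?: string;
  model: string;
  provider: string;
  usage: LlmUsage | undefined;
}

type CaptureFn = (event: string, properties: Record<string, unknown>) => void;

// Factory form so the PostHog client (or a test spy) can be injected.
export function makeCaptureLlmUsage(capture: CaptureFn) {
  return function captureLlmUsage(e: LlmUsageEvent): void {
    // Analytics must never break the feature that triggered it.
    try {
      capture('llm_usage', {
        use_case: e.useCase,
        sub_use_case: e.subUseCase ?? null,
        model: e.model,
        provider: e.provider,
        input_tokens: e.usage?.inputTokens ?? 0,
        output_tokens: e.usage?.outputTokens ?? 0,
        total_tokens:
          e.usage?.totalTokens ??
          (e.usage?.inputTokens ?? 0) + (e.usage?.outputTokens ?? 0),
      });
    } catch {
      // Swallow analytics errors.
    }
  };
}
```

The fire-and-forget `try`/`catch` mirrors the pattern visible elsewhere in the diff (e.g. `publishGmailSyncEvent`): telemetry failures are logged or swallowed, never propagated.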


@@ -2,6 +2,7 @@ import fs from 'fs';
 import path from 'path';
 import { WorkDir } from '../config/config.js';
 import { createRun, createMessage } from '../runs/runs.js';
+import { getKgModel } from '../models/defaults.js';
 import { bus } from '../runs/bus.js';
 import { waitForRunCompletion } from '../agents/utils.js';
 import { serviceLogger } from '../services/service_logger.js';
@@ -71,6 +72,9 @@ async function labelEmailBatch(
 ): Promise<{ runId: string; filesEdited: Set<string> }> {
   const run = await createRun({
     agentId: LABELING_AGENT,
+    model: await getKgModel(),
+    useCase: 'knowledge_sync',
+    subUseCase: 'label_emails',
   });
   let message = `Label the following ${files.length} email files by prepending YAML frontmatter.\n\n`;


@@ -2,7 +2,6 @@ import { renderTagSystemForEmails } from './tag_system.js';
 export function getRaw(): string {
   return `---
-model: gpt-5.2
 tools:
   workspace-readFile:
     type: builtin


@@ -3,7 +3,6 @@ import { renderNoteEffectRules } from './tag_system.js';
 export function getRaw(): string {
   return `---
-model: gpt-5.2
 tools:
   workspace-writeFile:
     type: builtin


@@ -2,7 +2,6 @@ import { renderTagSystemForNotes } from './tag_system.js';
 export function getRaw(): string {
   return `---
-model: gpt-5.2
 tools:
   workspace-readFile:
     type: builtin


@@ -1,12 +1,10 @@
 import fs from 'fs';
 import path from 'path';
 import { generateText } from 'ai';
-import container from '../di/container.js';
-import type { IModelConfigRepo } from '../models/repo.js';
 import { createProvider } from '../models/models.js';
-import { isSignedIn } from '../account/account.js';
-import { getGatewayProvider } from '../models/gateway.js';
+import { getDefaultModelAndProvider, getMeetingNotesModel, resolveProviderConfig } from '../models/defaults.js';
 import { WorkDir } from '../config/config.js';
+import { captureLlmUsage } from '../analytics/usage.js';

 const CALENDAR_SYNC_DIR = path.join(WorkDir, 'calendar_sync');
@@ -138,15 +136,10 @@ function loadCalendarEventContext(calendarEventJson: string): string {
 }

 export async function summarizeMeeting(transcript: string, meetingStartTime?: string, calendarEventJson?: string): Promise<string> {
-  const repo = container.resolve<IModelConfigRepo>('modelConfigRepo');
-  const config = await repo.getConfig();
-  const signedIn = await isSignedIn();
-  const provider = signedIn
-    ? await getGatewayProvider()
-    : createProvider(config.provider);
-  const modelId = config.meetingNotesModel
-    || (signedIn ? "gpt-5.4" : config.model);
-  const model = provider.languageModel(modelId);
+  const modelId = await getMeetingNotesModel();
+  const { provider: providerName } = await getDefaultModelAndProvider();
+  const providerConfig = await resolveProviderConfig(providerName);
+  const model = createProvider(providerConfig).languageModel(modelId);

   // If a specific calendar event was linked, use it directly.
   // Otherwise fall back to scanning events within ±3 hours.
@@ -165,5 +158,12 @@ export async function summarizeMeeting(transcript: string, meetingStartTime?: st
     prompt,
   });
+  captureLlmUsage({
+    useCase: 'meeting_note',
+    model: modelId,
+    provider: providerName,
+    usage: result.usage,
+  });
   return result.text.trim();
 }
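Both this file and the track-routing file replace the same inline resolution (signed-in → gateway provider with a `gpt-5.4` fallback, otherwise the configured provider and model) with helpers from `../models/defaults.js`, which the diff does not show. A sketch of what that centralization plausibly looks like — the helper names are mirrored from the call sites, but the fallback logic is only inferred from the deleted code and may differ:

```typescript
// Hypothetical sketch of models/defaults.ts — inferred from the call sites
// and the deleted inline logic; the real module may differ.
interface ModelConfig {
  model: string;
  meetingNotesModel?: string;
  provider: string;
}

interface Deps {
  getConfig(): Promise<ModelConfig>;
  isSignedIn(): Promise<boolean>;
}

const SIGNED_IN_DEFAULT_MODEL = 'gpt-5.4'; // taken from the deleted code

export function makeDefaults(deps: Deps) {
  return {
    // Explicit per-feature override wins; otherwise fall back to the
    // signed-in default model or the locally configured one.
    async getMeetingNotesModel(): Promise<string> {
      const config = await deps.getConfig();
      if (config.meetingNotesModel) return config.meetingNotesModel;
      return (await deps.isSignedIn()) ? SIGNED_IN_DEFAULT_MODEL : config.model;
    },
    async getDefaultModelAndProvider(): Promise<{ model: string; provider: string }> {
      const config = await deps.getConfig();
      const signedIn = await deps.isSignedIn();
      return {
        model: signedIn ? SIGNED_IN_DEFAULT_MODEL : config.model,
        provider: signedIn ? 'gateway' : config.provider,
      };
    },
  };
}
```

Centralizing the fallback means every caller (meeting notes, track routing, knowledge sync) picks up the same signed-in/signed-out behaviour instead of re-deriving it inline, which is exactly the duplication this diff deletes.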


@@ -15,8 +15,52 @@ import { createEvent } from './track/events.js';
 const SYNC_DIR = path.join(WorkDir, 'gmail_sync');
 const SYNC_INTERVAL_MS = 5 * 60 * 1000; // Check every 5 minutes
 const REQUIRED_SCOPE = 'https://www.googleapis.com/auth/gmail.readonly';
+const MAX_THREADS_IN_DIGEST = 10;

 const nhm = new NodeHtmlMarkdown();

+interface SyncedThread {
+  threadId: string;
+  markdown: string;
+}
+
+function summarizeGmailSync(threads: SyncedThread[]): string {
+  const lines: string[] = [
+    `# Gmail sync update`,
+    ``,
+    `${threads.length} new/updated thread${threads.length === 1 ? '' : 's'}.`,
+    ``,
+  ];
+  const shown = threads.slice(0, MAX_THREADS_IN_DIGEST);
+  const hidden = threads.length - shown.length;
+  if (shown.length > 0) {
+    lines.push(`## Threads`, ``);
+    for (const { markdown } of shown) {
+      lines.push(markdown.trimEnd(), ``, `---`, ``);
+    }
+    if (hidden > 0) {
+      lines.push(`_…and ${hidden} more thread(s) omitted from digest._`, ``);
+    }
+  }
+  return lines.join('\n');
+}
+
+async function publishGmailSyncEvent(threads: SyncedThread[]): Promise<void> {
+  if (threads.length === 0) return;
+  try {
+    await createEvent({
+      source: 'gmail',
+      type: 'email.synced',
+      createdAt: new Date().toISOString(),
+      payload: summarizeGmailSync(threads),
+    });
+  } catch (err) {
+    console.error('[Gmail] Failed to publish sync event:', err);
+  }
+}
+
 // --- Wake Signal for Immediate Sync Trigger ---
 let wakeResolve: (() => void) | null = null;
@@ -113,14 +157,14 @@ async function saveAttachment(gmail: gmail.Gmail, userId: string, msgId: string,
 // --- Sync Logic ---

-async function processThread(auth: OAuth2Client, threadId: string, syncDir: string, attachmentsDir: string) {
+async function processThread(auth: OAuth2Client, threadId: string, syncDir: string, attachmentsDir: string): Promise<SyncedThread | null> {
   const gmail = google.gmail({ version: 'v1', auth });
   try {
     const res = await gmail.users.threads.get({ userId: 'me', id: threadId });
     const thread = res.data;
     const messages = thread.messages;
-    if (!messages || messages.length === 0) return;
+    if (!messages || messages.length === 0) return null;

     // Subject from first message
     const firstHeader = messages[0].payload?.headers;
@@ -173,15 +217,11 @@ async function processThread(auth: OAuth2Client, threadId: string, syncDir: stri
     fs.writeFileSync(path.join(syncDir, `${threadId}.md`), mdContent);
     console.log(`Synced Thread: ${subject} (${threadId})`);
-    await createEvent({
-      source: 'gmail',
-      type: 'email.synced',
-      createdAt: new Date().toISOString(),
-      payload: mdContent,
-    });
+    return { threadId, markdown: mdContent };
   } catch (error) {
     console.error(`Error processing thread ${threadId}:`, error);
+    return null;
   }
 }
@@ -262,10 +302,14 @@ async function fullSync(auth: OAuth2Client, syncDir: string, attachmentsDir: str
     truncated: limitedThreads.truncated,
   });

+  const synced: SyncedThread[] = [];
   for (const threadId of threadIds) {
-    await processThread(auth, threadId, syncDir, attachmentsDir);
+    const result = await processThread(auth, threadId, syncDir, attachmentsDir);
+    if (result) synced.push(result);
   }
+  await publishGmailSyncEvent(synced);

   saveState(currentHistoryId, stateFile);
   await serviceLogger.log({
     type: 'run_complete',
@@ -365,10 +409,14 @@ async function partialSync(auth: OAuth2Client, startHistoryId: string, syncDir:
     truncated: limitedThreads.truncated,
   });

+  const synced: SyncedThread[] = [];
   for (const tid of threadIdList) {
-    await processThread(auth, tid, syncDir, attachmentsDir);
+    const result = await processThread(auth, tid, syncDir, attachmentsDir);
+    if (result) synced.push(result);
   }
+  await publishGmailSyncEvent(synced);

   const profile = await gmail.users.getProfile({ userId: 'me' });
   saveState(profile.data.historyId!, stateFile);
   await serviceLogger.log({
@@ -565,7 +613,12 @@ function extractBodyFromPayload(payload: Record<string, unknown>): string {
   return '';
 }

-async function processThreadComposio(connectedAccountId: string, threadId: string, syncDir: string): Promise<string | null> {
+interface ComposioThreadResult {
+  synced: SyncedThread | null;
+  newestIsoPlusOne: string | null;
+}
+
+async function processThreadComposio(connectedAccountId: string, threadId: string, syncDir: string): Promise<ComposioThreadResult> {
   let threadResult;
   try {
     threadResult = await executeAction(
@@ -579,40 +632,34 @@ async function processThreadComposio(connectedAccountId: string, threadId: strin
     );
   } catch (error) {
     console.warn(`[Gmail] Skipping thread ${threadId} (fetch failed):`, error instanceof Error ? error.message : error);
-    return null;
+    return { synced: null, newestIsoPlusOne: null };
   }
   if (!threadResult.successful || !threadResult.data) {
     console.error(`[Gmail] Failed to fetch thread ${threadId}:`, threadResult.error);
-    return null;
+    return { synced: null, newestIsoPlusOne: null };
   }

   const data = threadResult.data as Record<string, unknown>;
   const messages = data.messages as Array<Record<string, unknown>> | undefined;
   let newestDate: Date | null = null;
+  let mdContent: string;
+  let subjectForLog: string;

   if (!messages || messages.length === 0) {
     const parsed = parseMessageData(data);
-    const mdContent = `# ${parsed.subject}\n\n` +
+    mdContent = `# ${parsed.subject}\n\n` +
       `**Thread ID:** ${threadId}\n` +
       `**Message Count:** 1\n\n---\n\n` +
       `### From: ${parsed.from}\n` +
       `**Date:** ${parsed.date}\n\n` +
       `${parsed.body}\n\n---\n\n`;
+    subjectForLog = parsed.subject;
-    fs.writeFileSync(path.join(syncDir, `${cleanFilename(threadId)}.md`), mdContent);
-    console.log(`[Gmail] Synced Thread: ${parsed.subject} (${threadId})`);
-    await createEvent({
-      source: 'gmail',
-      type: 'email.synced',
-      createdAt: new Date().toISOString(),
-      payload: mdContent,
-    });
     newestDate = tryParseDate(parsed.date);
   } else {
     const firstParsed = parseMessageData(messages[0]);
-    let mdContent = `# ${firstParsed.subject}\n\n`;
+    mdContent = `# ${firstParsed.subject}\n\n`;
     mdContent += `**Thread ID:** ${threadId}\n`;
     mdContent += `**Message Count:** ${messages.length}\n\n---\n\n`;
@@ -628,19 +675,14 @@ async function processThreadComposio(connectedAccountId: string, threadId: strin
         newestDate = msgDate;
       }
     }
+    subjectForLog = firstParsed.subject;
-    fs.writeFileSync(path.join(syncDir, `${cleanFilename(threadId)}.md`), mdContent);
-    console.log(`[Gmail] Synced Thread: ${firstParsed.subject} (${threadId})`);
-    await createEvent({
-      source: 'gmail',
-      type: 'email.synced',
-      createdAt: new Date().toISOString(),
-      payload: mdContent,
-    });
   }

-  if (!newestDate) return null;
-  return new Date(newestDate.getTime() + 1000).toISOString();
+  fs.writeFileSync(path.join(syncDir, `${cleanFilename(threadId)}.md`), mdContent);
+  console.log(`[Gmail] Synced Thread: ${subjectForLog} (${threadId})`);
+  const newestIsoPlusOne = newestDate ? new Date(newestDate.getTime() + 1000).toISOString() : null;
+  return { synced: { threadId, markdown: mdContent }, newestIsoPlusOne };
 }

 async function performSyncComposio() {
@@ -751,19 +793,22 @@ async function performSyncComposio() {
   let highWaterMark: string | null = state?.last_sync ?? null;
   let processedCount = 0;
+  const synced: SyncedThread[] = [];

   for (const threadId of allThreadIds) {
     // Re-check connection in case user disconnected mid-sync
     if (!composioAccountsRepo.isConnected('gmail')) {
       console.log('[Gmail] Account disconnected during sync. Stopping.');
-      return;
+      break;
     }
     try {
-      const newestInThread = await processThreadComposio(connectedAccountId, threadId, SYNC_DIR);
+      const result = await processThreadComposio(connectedAccountId, threadId, SYNC_DIR);
       processedCount++;
+      if (result.synced) synced.push(result.synced);

-      if (newestInThread) {
-        if (!highWaterMark || new Date(newestInThread) > new Date(highWaterMark)) {
-          highWaterMark = newestInThread;
+      if (result.newestIsoPlusOne) {
+        if (!highWaterMark || new Date(result.newestIsoPlusOne) > new Date(highWaterMark)) {
+          highWaterMark = result.newestIsoPlusOne;
         }
         saveComposioState(STATE_FILE, highWaterMark);
       }
@@ -772,6 +817,8 @@ async function performSyncComposio() {
     }
   }

+  await publishGmailSyncEvent(synced);
+
   await serviceLogger.log({
     type: 'run_complete',
     service: run!.service,
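The digest builder above caps how many threads land in a single `email.synced` event. Its behaviour is easy to sanity-check in isolation — this is a self-contained copy of `summarizeGmailSync` as it appears in the diff (constant and wording taken from it), for illustration only:

```typescript
const MAX_THREADS_IN_DIGEST = 10;

interface SyncedThread {
  threadId: string;
  markdown: string;
}

// Standalone copy of the digest builder from the diff.
function summarizeGmailSync(threads: SyncedThread[]): string {
  const lines: string[] = [
    `# Gmail sync update`,
    ``,
    `${threads.length} new/updated thread${threads.length === 1 ? '' : 's'}.`,
    ``,
  ];
  // Show at most MAX_THREADS_IN_DIGEST threads; note how many were hidden.
  const shown = threads.slice(0, MAX_THREADS_IN_DIGEST);
  const hidden = threads.length - shown.length;
  if (shown.length > 0) {
    lines.push(`## Threads`, ``);
    for (const { markdown } of shown) {
      lines.push(markdown.trimEnd(), ``, `---`, ``);
    }
    if (hidden > 0) {
      lines.push(`_…and ${hidden} more thread(s) omitted from digest._`, ``);
    }
  }
  return lines.join('\n');
}
```

The cap matters because the digest becomes the payload of a single knowledge event: one batched `email.synced` event per sync run replaces the previous one-event-per-thread behaviour, so an initial full sync of hundreds of threads no longer floods the event stream.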


@@ -2,6 +2,7 @@ import fs from 'fs';
 import path from 'path';
 import { WorkDir } from '../config/config.js';
 import { createRun, createMessage } from '../runs/runs.js';
+import { getKgModel } from '../models/defaults.js';
 import { bus } from '../runs/bus.js';
 import { waitForRunCompletion } from '../agents/utils.js';
 import { serviceLogger } from '../services/service_logger.js';
@@ -84,6 +85,9 @@ async function tagNoteBatch(
 ): Promise<{ runId: string; filesEdited: Set<string> }> {
   const run = await createRun({
     agentId: NOTE_TAGGING_AGENT,
+    model: await getKgModel(),
+    useCase: 'knowledge_sync',
+    subUseCase: 'tag_notes',
   });
   let message = `Tag the following ${files.length} knowledge notes by prepending YAML frontmatter with appropriate tags.\n\n`;


@@ -5,6 +5,7 @@ import { parse as parseYaml, stringify as stringifyYaml } from 'yaml';
 import { WorkDir } from '../../config/config.js';
 import { TrackBlockSchema } from '@x/shared/dist/track-block.js';
 import { TrackStateSchema } from './types.js';
+import { withFileLock } from '../file-lock.js';

 const KNOWLEDGE_DIR = path.join(WorkDir, 'knowledge');
@@ -81,42 +82,46 @@ export async function fetchYaml(filePath: string, trackId: string): Promise<stri
 }

 export async function updateContent(filePath: string, trackId: string, newContent: string): Promise<void> {
-  let content = await fs.readFile(absPath(filePath), 'utf-8');
-  const openTag = `<!--track-target:${trackId}-->`;
-  const closeTag = `<!--/track-target:${trackId}-->`;
-  const openIdx = content.indexOf(openTag);
-  const closeIdx = content.indexOf(closeTag);
-  if (openIdx !== -1 && closeIdx !== -1 && closeIdx > openIdx) {
-    content = content.slice(0, openIdx + openTag.length) + '\n' + newContent + '\n' + content.slice(closeIdx);
-  } else {
-    const block = await fetch(filePath, trackId);
-    if (!block) {
-      throw new Error(`Track ${trackId} not found in ${filePath}`);
-    }
-    const lines = content.split('\n');
-    const insertAt = Math.min(block.fenceEnd + 1, lines.length);
-    const contentFence = [openTag, newContent, closeTag];
-    lines.splice(insertAt, 0, ...contentFence);
-    content = lines.join('\n');
-  }
-  await fs.writeFile(absPath(filePath), content, 'utf-8');
+  return withFileLock(absPath(filePath), async () => {
+    let content = await fs.readFile(absPath(filePath), 'utf-8');
+    const openTag = `<!--track-target:${trackId}-->`;
+    const closeTag = `<!--/track-target:${trackId}-->`;
+    const openIdx = content.indexOf(openTag);
+    const closeIdx = content.indexOf(closeTag);
+    if (openIdx !== -1 && closeIdx !== -1 && closeIdx > openIdx) {
+      content = content.slice(0, openIdx + openTag.length) + '\n' + newContent + '\n' + content.slice(closeIdx);
+    } else {
+      const block = await fetch(filePath, trackId);
+      if (!block) {
+        throw new Error(`Track ${trackId} not found in ${filePath}`);
+      }
+      const lines = content.split('\n');
+      const insertAt = Math.min(block.fenceEnd + 1, lines.length);
+      const contentFence = [openTag, newContent, closeTag];
+      lines.splice(insertAt, 0, ...contentFence);
+      content = lines.join('\n');
+    }
+    await fs.writeFile(absPath(filePath), content, 'utf-8');
+  });
 }

 export async function updateTrackBlock(filepath: string, trackId: string, updates: Partial<z.infer<typeof TrackBlockSchema>>): Promise<void> {
-  const block = await fetch(filepath, trackId);
-  if (!block) {
-    throw new Error(`Track ${trackId} not found in ${filepath}`);
-  }
-  block.track = { ...block.track, ...updates };
-  // read file contents
-  let content = await fs.readFile(absPath(filepath), 'utf-8');
-  const lines = content.split('\n');
-  const yaml = stringifyYaml(block.track).trimEnd();
-  const yamlLines = yaml ? yaml.split('\n') : [];
-  lines.splice(block.fenceStart, block.fenceEnd - block.fenceStart + 1, '```track', ...yamlLines, '```');
-  content = lines.join('\n');
-  await fs.writeFile(absPath(filepath), content, 'utf-8');
+  return withFileLock(absPath(filepath), async () => {
+    const block = await fetch(filepath, trackId);
+    if (!block) {
+      throw new Error(`Track ${trackId} not found in ${filepath}`);
+    }
+    block.track = { ...block.track, ...updates };
+    // read file contents
+    let content = await fs.readFile(absPath(filepath), 'utf-8');
+    const lines = content.split('\n');
+    const yaml = stringifyYaml(block.track).trimEnd();
+    const yamlLines = yaml ? yaml.split('\n') : [];
+    lines.splice(block.fenceStart, block.fenceEnd - block.fenceStart + 1, '```track', ...yamlLines, '```');
+    content = lines.join('\n');
+    await fs.writeFile(absPath(filepath), content, 'utf-8');
+  });
 }

 /**
@@ -127,64 +132,68 @@ export async function updateTrackBlock(filepath: string, trackId: string, update
  * otherwise the write is rejected.
  */
 export async function replaceTrackBlockYaml(filePath: string, trackId: string, newYaml: string): Promise<void> {
-  const block = await fetch(filePath, trackId);
-  if (!block) {
-    throw new Error(`Track ${trackId} not found in ${filePath}`);
-  }
-  const parsed = TrackBlockSchema.safeParse(parseYaml(newYaml));
-  if (!parsed.success) {
-    throw new Error(`Invalid track YAML: ${parsed.error.message}`);
-  }
-  if (parsed.data.trackId !== trackId) {
-    throw new Error(`trackId cannot be changed (was "${trackId}", got "${parsed.data.trackId}")`);
-  }
-  const content = await fs.readFile(absPath(filePath), 'utf-8');
-  const lines = content.split('\n');
-  const yamlLines = newYaml.trimEnd().split('\n');
-  lines.splice(block.fenceStart, block.fenceEnd - block.fenceStart + 1, '```track', ...yamlLines, '```');
-  await fs.writeFile(absPath(filePath), lines.join('\n'), 'utf-8');
+  return withFileLock(absPath(filePath), async () => {
+    const block = await fetch(filePath, trackId);
+    if (!block) {
+      throw new Error(`Track ${trackId} not found in ${filePath}`);
+    }
+    const parsed = TrackBlockSchema.safeParse(parseYaml(newYaml));
+    if (!parsed.success) {
+      throw new Error(`Invalid track YAML: ${parsed.error.message}`);
+    }
+    if (parsed.data.trackId !== trackId) {
+      throw new Error(`trackId cannot be changed (was "${trackId}", got "${parsed.data.trackId}")`);
+    }
+    const content = await fs.readFile(absPath(filePath), 'utf-8');
+    const lines = content.split('\n');
+    const yamlLines = newYaml.trimEnd().split('\n');
+    lines.splice(block.fenceStart, block.fenceEnd - block.fenceStart + 1, '```track', ...yamlLines, '```');
+    await fs.writeFile(absPath(filePath), lines.join('\n'), 'utf-8');
+  });
 }

 /**
  * Remove a track block and its sibling target region from the file.
  */
 export async function deleteTrackBlock(filePath: string, trackId: string): Promise<void> {
-  const block = await fetch(filePath, trackId);
-  if (!block) {
-    // Already gone — treat as success.
-    return;
-  }
-  const content = await fs.readFile(absPath(filePath), 'utf-8');
-  const lines = content.split('\n');
-  const openTag = `<!--track-target:${trackId}-->`;
-  const closeTag = `<!--/track-target:${trackId}-->`;
-  // Find target region (may not exist)
-  let targetStart = -1;
-  let targetEnd = -1;
-  for (let i = 0; i < lines.length; i++) {
-    if (lines[i].includes(openTag)) { targetStart = i; }
-    if (targetStart !== -1 && lines[i].includes(closeTag)) { targetEnd = i; break; }
-  }
-  // Build a list of [start, end] ranges to remove, sorted descending so
-  // indices stay valid as we splice.
-  const ranges: Array<[number, number]> = [];
-  ranges.push([block.fenceStart, block.fenceEnd]);
-  if (targetStart !== -1 && targetEnd !== -1 && targetEnd >= targetStart) {
-    ranges.push([targetStart, targetEnd]);
-  }
-  ranges.sort((a, b) => b[0] - a[0]);
-  for (const [start, end] of ranges) {
-    lines.splice(start, end - start + 1);
-    // Also drop a trailing blank line if the removal left two in a row.
-    if (start < lines.length && lines[start].trim() === '' && start > 0 && lines[start - 1].trim() === '') {
-      lines.splice(start, 1);
-    }
-  }
-  await fs.writeFile(absPath(filePath), lines.join('\n'), 'utf-8');
+  return withFileLock(absPath(filePath), async () => {
+    const block = await fetch(filePath, trackId);
+    if (!block) {
+      // Already gone — treat as success.
+      return;
+    }
+    const content = await fs.readFile(absPath(filePath), 'utf-8');
+    const lines = content.split('\n');
+    const openTag = `<!--track-target:${trackId}-->`;
+    const closeTag = `<!--/track-target:${trackId}-->`;
+    // Find target region (may not exist)
+    let targetStart = -1;
+    let targetEnd = -1;
+    for (let i = 0; i < lines.length; i++) {
+      if (lines[i].includes(openTag)) { targetStart = i; }
+      if (targetStart !== -1 && lines[i].includes(closeTag)) { targetEnd = i; break; }
+    }
+    // Build a list of [start, end] ranges to remove, sorted descending so
+    // indices stay valid as we splice.
+    const ranges: Array<[number, number]> = [];
+    ranges.push([block.fenceStart, block.fenceEnd]);
+    if (targetStart !== -1 && targetEnd !== -1 && targetEnd >= targetStart) {
+      ranges.push([targetStart, targetEnd]);
+    }
+    ranges.sort((a, b) => b[0] - a[0]);
+    for (const [start, end] of ranges) {
+      lines.splice(start, end - start + 1);
+      // Also drop a trailing blank line if the removal left two in a row.
+      if (start < lines.length && lines[start].trim() === '' && start > 0 && lines[start - 1].trim() === '') {
+        lines.splice(start, 1);
+      }
+    }
+    await fs.writeFile(absPath(filePath), lines.join('\n'), 'utf-8');
+  });
 }
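Every mutator in this file now runs inside `withFileLock`, imported from `../file-lock.js` — a module the diff does not show. A minimal in-process implementation with the same call shape might look like the sketch below, under the assumption that the lock only needs to serialize read-modify-write cycles within a single process (the real module may use a different mechanism, e.g. lock files):

```typescript
// Hypothetical sketch of file-lock.ts — not part of this diff.
// Serializes async operations per file path by chaining promises.
const locks = new Map<string, Promise<unknown>>();

export async function withFileLock<T>(filePath: string, fn: () => Promise<T>): Promise<T> {
  // Chain onto whatever operation currently holds the lock for this path.
  const previous = locks.get(filePath) ?? Promise.resolve();
  const current = previous.catch(() => {}).then(fn);
  locks.set(filePath, current);
  try {
    return await current;
  } finally {
    // Clear the entry once this chain link is the last one standing.
    if (locks.get(filePath) === current) locks.delete(filePath);
  }
}
```

Without this, two concurrent track runs targeting the same note could interleave their read-splice-write cycles and clobber each other's edits — which is presumably why the diff wraps `updateContent`, `updateTrackBlock`, `replaceTrackBlockYaml`, and `deleteTrackBlock` in one lock per absolute path.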


@@ -1,11 +1,9 @@
 import { generateObject } from 'ai';
 import { trackBlock, PrefixLogger } from '@x/shared';
 import type { KnowledgeEvent } from '@x/shared/dist/track-block.js';
-import container from '../../di/container.js';
-import type { IModelConfigRepo } from '../../models/repo.js';
 import { createProvider } from '../../models/models.js';
-import { isSignedIn } from '../../account/account.js';
-import { getGatewayProvider } from '../../models/gateway.js';
+import { getDefaultModelAndProvider, getTrackBlockModel, resolveProviderConfig } from '../../models/defaults.js';
+import { captureLlmUsage } from '../../analytics/usage.js';

 const log = new PrefixLogger('TrackRouting');
@@ -37,15 +35,14 @@ Rules:
 - For each candidate, return BOTH trackId and filePath exactly as given. trackIds are not globally unique.`;

 async function resolveModel() {
-  const repo = container.resolve<IModelConfigRepo>('modelConfigRepo');
-  const config = await repo.getConfig();
-  const signedIn = await isSignedIn();
-  const provider = signedIn
-    ? await getGatewayProvider()
-    : createProvider(config.provider);
-  const modelId = config.knowledgeGraphModel
-    || (signedIn ? 'gpt-5.4' : config.model);
-  return provider.languageModel(modelId);
+  const modelId = await getTrackBlockModel();
+  const { provider } = await getDefaultModelAndProvider();
+  const config = await resolveProviderConfig(provider);
+  return {
+    model: createProvider(config).languageModel(modelId),
+    modelId,
+    providerName: provider,
+  };
 }

 function buildRoutingPrompt(event: KnowledgeEvent, batch: ParsedTrack[]): string {
@@ -92,19 +89,26 @@ export async function findCandidates(
   log.log(`Routing event ${event.id} against ${filtered.length} track(s)`);

-  const model = await resolveModel();
+  const { model, modelId, providerName } = await resolveModel();
   const candidateKeys = new Set<string>();

   for (let i = 0; i < filtered.length; i += BATCH_SIZE) {
     const batch = filtered.slice(i, i + BATCH_SIZE);
     try {
-      const { object } = await generateObject({
+      const result = await generateObject({
         model,
         system: ROUTING_SYSTEM_PROMPT,
         prompt: buildRoutingPrompt(event, batch),
         schema: trackBlock.Pass1OutputSchema,
       });
+      captureLlmUsage({
+        useCase: 'track_block',
+        subUseCase: 'routing',
+        model: modelId,
+        provider: providerName,
+        usage: result.usage,
+      });
-      for (const c of object.candidates) {
+      for (const c of result.object.candidates) {
         candidateKeys.add(trackKey(c.trackId, c.filePath));
       }
     } catch (err) {
} catch (err) { } catch (err) {


@ -3,50 +3,301 @@ import { Agent, ToolAttachment } from '@x/shared/dist/agent.js';
import { BuiltinTools } from '../../application/lib/builtin-tools.js'; import { BuiltinTools } from '../../application/lib/builtin-tools.js';
import { WorkDir } from '../../config/config.js'; import { WorkDir } from '../../config/config.js';
const TRACK_RUN_INSTRUCTIONS = `You are a track block runner — a background agent that updates a specific section of a knowledge note. const TRACK_RUN_INSTRUCTIONS = `You are a track block runner — a background agent that keeps a live section of a user's personal knowledge note up to date.
You will receive a message containing a track instruction, the current content of the target region, and optionally some context. Your job is to follow the instruction and produce updated content. Your goal on each run: produce the most useful, up-to-date version of that section given the track's instruction. The user is maintaining a personal knowledge base and will glance at this output alongside many others optimize for **information density and scannability**, not conversational prose.
# Background Mode # Background Mode
You are running as a background task there is no user present. You are running as a scheduled or event-triggered background task **there is no user present** to clarify, approve, or watch.
- Do NOT ask clarifying questions make reasonable assumptions - Do NOT ask clarifying questions make the most reasonable interpretation of the instruction and proceed.
- Be concise and action-oriented just do the work - Do NOT hedge or preamble ("I'll now...", "Let me..."). Just do the work.
- Do NOT produce chat-style output. The user sees only the content you write into the target region plus your final summary line.
# Message Anatomy
Every run message has this shape:
Update track **<trackId>** in \`<filePath>\`.
**Time:** <localized datetime> (<timezone>)
**Instruction:**
<the user-authored track instruction usually 1-3 sentences describing what to produce>
**Current content:**
<the existing contents of the target region, or "(empty — first run)">
Use \`update-track-content\` with filePath=\`<filePath>\` and trackId=\`<trackId>\`.
For **manual** runs, an optional trailing block may appear:
**Context:**
<extra one-run-only guidance a backfill hint, a focus window, extra data>
Apply context for this run only it is not a permanent edit to the instruction.
For **event-triggered** runs, a trailing block appears instead:
**Trigger:** Event match (a Pass 1 routing classifier flagged this track as potentially relevant)
**Event match criteria for this track:** <from the track's YAML>
**Event payload:** <the event body e.g., an email>
**Decision:** ... skip if not relevant ...
On event runs you are the Pass 2 judge see "The No-Update Decision" below.
# What Good Output Looks Like
This is a personal knowledge tracker. The user scans many such blocks across their notes. Write for a reader who wants the answer to "what's current / what changed?" in the fewest words that carry real information.
- **Data-forward.** Tables, bullet lists, one-line statuses. Not paragraphs.
- **Format follows the instruction.** If the instruction specifies a shape ("3-column markdown table: Location | Local Time | Offset"), use exactly that shape. The instruction is authoritative; do not improvise a different layout.
- **No decoration.** No adjectives like "polished", "beautiful". No framing prose ("Here's your update:"). No emoji unless the instruction asks.
- **No commentary or caveats** unless the data itself is genuinely uncertain in a way the user needs to know.
- **No self-reference.** Do not write "I updated this at X"; the system records timestamps separately.
If the instruction does not specify a format, pick the tightest shape that fits: a single line for a single metric, a small table for 2+ parallel items, a short bulleted list for a digest, or one of the **rich block types below** when the data has a natural visual form (events → \`calendar\`, time series → \`chart\`, relationships → \`mermaid\`, etc.).
# Output Block Types
The note renderer turns specially-tagged fenced code blocks into styled UI: tables, charts, calendars, embeds, and more. Reach for these when the data has structure that benefits from a visual treatment; stay with plain markdown when prose, a markdown table, or bullets carry the meaning just as well. Pick **at most one block per output region** unless the instruction asks for a multi-section layout, and follow the exact fence language and shape, since anything unparseable renders as a small "Invalid X block" error card.
Do **not** emit \`track\` or \`task\` blocks — those are user-authored input mechanisms, not agent outputs.
## \`table\` — tabular data (JSON)
Use for: scoreboards, leaderboards, comparisons, multi-row status digests.
\`\`\`table
{
"title": "Top stories on Hacker News",
"columns": ["Rank", "Title", "Points", "Comments"],
"data": [
{"Rank": 1, "Title": "Show HN: ...", "Points": 842, "Comments": 312},
{"Rank": 2, "Title": "...", "Points": 530, "Comments": 144}
]
}
\`\`\`
Required: \`columns\` (string[]), \`data\` (array of objects keyed by column name). Optional: \`title\`.
## \`chart\` — line / bar / pie chart (JSON)
Use for: time series, categorical breakdowns, share-of-total. Skip if a single sentence carries the meaning.
\`\`\`chart
{
"chart": "line",
"title": "USD/INR — last 7 days",
"x": "date",
"y": "rate",
"data": [
{"date": "2026-04-13", "rate": 83.41},
{"date": "2026-04-14", "rate": 83.38}
]
}
\`\`\`
Required: \`chart\` ("line" | "bar" | "pie"), \`x\` (field name on each row), \`y\` (field name on each row), and **either** \`data\` (inline array of objects) **or** \`source\` (workspace path to a JSON-array file). Optional: \`title\`.
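For example, a \`source\`-based chart reads its rows from a workspace file instead of inlining them (the path here is hypothetical):
\`\`\`chart
{
  "chart": "bar",
  "title": "Expenses by category",
  "x": "category",
  "y": "amount",
  "source": "knowledge/Topics/expenses.json"
}
\`\`\`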
## \`mermaid\` — diagrams (raw Mermaid source)
Use for: relationship maps, flowcharts, sequence diagrams, gantt charts, mind maps.
\`\`\`mermaid
graph LR
A[Project Alpha] --> B[Sarah Chen]
A --> C[Acme Corp]
B --> D[Q3 Launch]
\`\`\`
Body is plain Mermaid source; no JSON wrapper.
## \`calendar\` — list of events (JSON)
Use for: upcoming meetings, agenda digests, day/week views.
\`\`\`calendar
{
"title": "Today",
"events": [
{
"summary": "1:1 with Sarah",
"start": {"dateTime": "2026-04-20T10:00:00-07:00"},
"end": {"dateTime": "2026-04-20T10:30:00-07:00"},
"location": "Zoom",
"conferenceLink": "https://zoom.us/j/..."
}
]
}
\`\`\`
Required: \`events\` (array). Each event optionally has \`summary\`, \`start\`/\`end\` (object with \`dateTime\` ISO string OR \`date\` "YYYY-MM-DD" for all-day), \`location\`, \`htmlLink\`, \`conferenceLink\`, \`source\`. Optional top-level: \`title\`, \`showJoinButton\` (bool).
## \`email\` — single email or thread digest (JSON)
Use for: surfacing one important thread (latest message body, summary of prior context, optional draft reply).
\`\`\`email
{
"subject": "Q3 launch readiness",
"from": "sarah@acme.com",
"date": "2026-04-19T16:42:00Z",
"summary": "Sarah confirms timeline; flagged blocker on infra capacity.",
"latest_email": "Hey — quick update on Q3...\\n\\nThanks,\\nSarah"
}
\`\`\`
Required: \`latest_email\` (string). Optional: \`threadId\`, \`summary\`, \`subject\`, \`from\`, \`to\`, \`date\`, \`past_summary\`, \`draft_response\`, \`response_mode\` ("inline" | "assistant" | "both").
For digests of **many** threads, prefer a \`table\` (Subject | From | Snippet) — \`email\` is for one thread at a time.
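A multi-thread digest under that rule might look like this (senders and snippets invented for illustration):
\`\`\`table
{
  "title": "Unanswered threads",
  "columns": ["Subject", "From", "Snippet"],
  "data": [
    {"Subject": "Q3 launch readiness", "From": "sarah@acme.com", "Snippet": "Flagged blocker on infra capacity"},
    {"Subject": "Contract renewal", "From": "legal@acme.com", "Snippet": "Redlines due Friday"}
  ]
}
\`\`\`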
## \`image\` — single image (JSON)
Use for: charts, screenshots, photos you have a URL or workspace path for.
\`\`\`image
{
"src": "https://example.com/forecast.png",
"alt": "Weather forecast",
"caption": "Bay Area · April 20"
}
\`\`\`
Required: \`src\` (URL or workspace path). Optional: \`alt\`, \`caption\`.
## \`embed\` — YouTube / Figma embed (JSON)
Use for: linking to a video or design that should render inline.
\`\`\`embed
{
"provider": "youtube",
"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"caption": "Latest demo"
}
\`\`\`
Required: \`provider\` ("youtube" | "figma" | "generic"), \`url\`. Optional: \`caption\`. The renderer rewrites known URLs to their embed form.
## \`iframe\` — arbitrary embedded webpage (JSON)
Use for: live dashboards, status pages, trackers; anything that has its own webpage and benefits from being live, not snapshotted.
\`\`\`iframe
{
"url": "https://status.example.com",
"title": "Service status",
"height": 600
}
\`\`\`
Required: \`url\` (must be \`https://\` or \`http://localhost\`). Optional: \`title\`, \`caption\`, \`height\` (240-1600), \`allow\` (Permissions-Policy string).
## \`transcript\` — long transcript (JSON)
Use for: meeting transcripts, voice-note dumps; bodies that benefit from a collapsible UI.
\`\`\`transcript
{"transcript": "[00:00] Speaker A: Welcome everyone..."}
\`\`\`
Required: \`transcript\` (string).
## \`prompt\` — starter Copilot prompt (YAML)
Use for: end-of-output "next step" cards. The user clicks **Run** and the chat sidebar opens with the underlying instruction submitted to Copilot, with this note attached as a file mention.
\`\`\`prompt
label: Draft replies to today's emails
instruction: |
For each unanswered email in the digest above, draft a 2-line reply
in my voice and present them as a checklist for me to approve.
\`\`\`
Required: \`label\` (short title shown on the card), \`instruction\` (the longer prompt). Note: this block uses **YAML**, not JSON.
# Interpreting the Instruction
The instruction was authored in a prior conversation you cannot see. Treat it as a **self-contained spec**. If ambiguous, pick what a reasonable user of a knowledge tracker would expect:
- "Top 5" is a target; fewer is acceptable if that's all that exists.
- "Current" means as of now (use the **Time** block).
- Unspecified units: standard for the domain (USD for US markets, metric for scientific, the user's locale if inferable from the timezone).
- Unspecified sources: your best reliable source (web-search for public data, workspace for user data).
Do **not** invent parts of the instruction the user did not write ("also include a fun fact", "summarize trends"); these are decoration.
# Current Content Handling
The **Current content** block shows what lives in the target region right now. Three cases:
1. **"(empty — first run)"**: produce the content from scratch.
2. **Content that matches the instruction's format** — this is a previous run's output. Usually produce a fresh complete replacement. Only preserve parts of it if the instruction says to **accumulate** (e.g., "maintain a running log of..."), or if discarding would lose information the instruction intended to keep.
3. **Content that does NOT match the instruction's format**: the instruction may have changed, or the user edited the block by hand. Regenerate fresh to the current instruction. Do not try to patch.
You always write a **complete** replacement, not a diff.
# The No-Update Decision
You may finish a run without calling \`update-track-content\`. Two legitimate cases:
1. **Event-triggered run, event is not actually relevant.** The Pass 1 classifier is liberal by design. On closer reading, if the event does not genuinely add or change information that should be in this track, skip the update.
2. **Scheduled/manual run, no meaningful change.** If you fetch fresh data and the result would be identical to the current content, you may skip the write. The system will record "no update" automatically.
When skipping, still end with a summary line (see "Final Summary" below) so the system records *why*.
# Writing the Result
Call \`update-track-content\` **at most once per run**:
- Pass \`filePath\` and \`trackId\` exactly as given in the message.
- Pass the **complete** new content as \`content\` — the entire replacement for the target region.
- Do **not** include the track-target HTML comments (\`<!--track-target:...-->\`) — the tool manages those.
- Do **not** modify the track's YAML configuration or any other part of the note. Your surface area is the target region only.
# Tools
You have the full workspace toolkit. Quick reference for common cases:
- **\`web-search\`** — the public web (news, prices, status pages, documentation). Use when the instruction needs information beyond the workspace.
- **\`workspace-readFile\`, \`workspace-grep\`, \`workspace-glob\`, \`workspace-readdir\`** — read and search the user's knowledge graph and synced data.
- **\`parseFile\`, \`LLMParse\`** — parse PDFs, spreadsheets, Word docs if a track aggregates from attached files.
- **\`composio-*\`, \`listMcpTools\`, \`executeMcpTool\`** — user-connected integrations (Gmail, Calendar, etc.). Prefer these when a track needs structured data from a connected service the user has authorized.
- **\`browser-control\`** — only when a required source has no API / search alternative and requires JS rendering.
# The Knowledge Graph
The user's knowledge graph is plain markdown in \`${WorkDir}/knowledge/\`, organized into:
- **People/**: individuals
- **Organizations/**: companies
- **Projects/**: initiatives
- **Topics/**: recurring themes
Synced external data often sits alongside under \`gmail_sync/\`, \`calendar_sync/\`, \`granola_sync/\`, \`fireflies_sync/\` — consult these when an instruction references emails, meetings, or calendar events.
**CRITICAL:** Always include the folder prefix in paths. Never pass an empty path or the workspace root.
- \`workspace-grep({ pattern: "Acme", path: "knowledge/" })\`
- \`workspace-readFile("knowledge/People/Sarah Chen.md")\`
- \`workspace-readdir("gmail_sync/")\`
# Failure & Fallback
If you cannot complete the instruction (network failure, missing data source, unparseable response, disconnected integration):
- Do **not** fabricate or speculate.
- Do **not** write partial or placeholder content into the target region; leave existing content intact by not calling \`update-track-content\`.
- Explain the failure in the summary line.
# Final Summary
End your response with **one line** (1-2 short sentences). The system stores this as \`lastRunSummary\` and surfaces it in the UI.
State the action and the substance. Good examples:
- "Updated — 3 new HN stories, top is 'Show HN: …' at 842 pts."
- "Updated — USD/INR 83.42 as of 14:05 IST."
- "No change — status page shows all operational."
- "Skipped — event was a calendar invite unrelated to Q3 planning."
- "Failed — web-search returned no results for the query."
Avoid: "I updated the track.", "Done!", "Here is the update:". The summary is a data point, not a sign-off.
`;
export function buildTrackRunAgent(): z.infer<typeof Agent> {


@@ -1,6 +1,7 @@
 import z from 'zod';
 import { fetchAll, updateTrackBlock } from './fileops.js';
 import { createRun, createMessage } from '../../runs/runs.js';
+import { getTrackBlockModel } from '../../models/defaults.js';
 import { extractAgentResponse, waitForRunCompletion } from '../../agents/utils.js';
 import { trackBus } from './bus.js';
 import type { TrackStateSchema } from './types.js';
@@ -101,8 +102,17 @@ export async function triggerTrackUpdate(
   const contentBefore = track.content;
-  // Emit start event — runId is set after agent run is created
-  const agentRun = await createRun({ agentId: 'track-run' });
+  // Per-track model/provider overrides win when set; otherwise fall back
+  // to the configured trackBlockModel default and the run-creation
+  // provider default (signed-in: rowboat; BYOK: active provider).
+  const model = track.track.model ?? await getTrackBlockModel();
+  const agentRun = await createRun({
+    agentId: 'track-run',
+    model,
+    ...(track.track.provider ? { provider: track.track.provider } : {}),
+    useCase: 'track_block',
+    subUseCase: 'run',
+  });
   // Set lastRunAt and lastRunId immediately (before agent executes) so
   // the scheduler's next poll won't re-trigger this track.


@@ -0,0 +1,88 @@
import z from "zod";
import { LlmProvider } from "@x/shared/dist/models.js";
import { IModelConfigRepo } from "./repo.js";
import { isSignedIn } from "../account/account.js";
import container from "../di/container.js";
const SIGNED_IN_DEFAULT_MODEL = "gpt-5.4";
const SIGNED_IN_DEFAULT_PROVIDER = "rowboat";
const SIGNED_IN_KG_MODEL = "anthropic/claude-haiku-4.5";
const SIGNED_IN_TRACK_BLOCK_MODEL = "anthropic/claude-haiku-4.5";
/**
* The single source of truth for "what model+provider should we use when
* the caller didn't specify and the agent didn't declare". Returns names only.
* This is the only place that branches on signed-in state.
*/
export async function getDefaultModelAndProvider(): Promise<{ model: string; provider: string }> {
if (await isSignedIn()) {
return { model: SIGNED_IN_DEFAULT_MODEL, provider: SIGNED_IN_DEFAULT_PROVIDER };
}
const repo = container.resolve<IModelConfigRepo>("modelConfigRepo");
const cfg = await repo.getConfig();
return { model: cfg.model, provider: cfg.provider.flavor };
}
/**
* Resolve a provider name (as stored on a run, an agent, or returned by
* getDefaultModelAndProvider) into the full LlmProvider config that
* createProvider expects (apiKey/baseURL/headers).
*
 * - "rowboat": gateway provider (auth via OAuth bearer; no creds field).
 * - other names: look up models.json's `providers[name]` map.
* - fallback: if the name matches the active default's flavor (legacy
* single-provider configs that didn't write to the providers map yet).
*/
export async function resolveProviderConfig(name: string): Promise<z.infer<typeof LlmProvider>> {
if (name === "rowboat") {
return { flavor: "rowboat" };
}
const repo = container.resolve<IModelConfigRepo>("modelConfigRepo");
const cfg = await repo.getConfig();
const entry = cfg.providers?.[name];
if (entry) {
return LlmProvider.parse({
flavor: name,
apiKey: entry.apiKey,
baseURL: entry.baseURL,
headers: entry.headers,
});
}
if (cfg.provider.flavor === name) {
return cfg.provider;
}
throw new Error(`Provider '${name}' is referenced but not configured`);
}
/**
* Model used by knowledge-graph agents (note_creation, labeling_agent, etc.)
* when they're the top-level of a run. Signed-in: curated default.
* BYOK: user override (`knowledgeGraphModel`) or assistant model.
*/
export async function getKgModel(): Promise<string> {
if (await isSignedIn()) return SIGNED_IN_KG_MODEL;
const cfg = await container.resolve<IModelConfigRepo>("modelConfigRepo").getConfig();
return cfg.knowledgeGraphModel ?? cfg.model;
}
/**
* Model used by track-block runner + routing classifier.
* Signed-in: curated default. BYOK: user override (`trackBlockModel`) or
* assistant model.
*/
export async function getTrackBlockModel(): Promise<string> {
if (await isSignedIn()) return SIGNED_IN_TRACK_BLOCK_MODEL;
const cfg = await container.resolve<IModelConfigRepo>("modelConfigRepo").getConfig();
return cfg.trackBlockModel ?? cfg.model;
}
/**
 * Model used by the meeting-notes summarizer. No special signed-in default;
* historically meetings used the assistant model. BYOK: user override
* (`meetingNotesModel`) or assistant model.
*/
export async function getMeetingNotesModel(): Promise<string> {
if (await isSignedIn()) return SIGNED_IN_DEFAULT_MODEL;
const cfg = await container.resolve<IModelConfigRepo>("modelConfigRepo").getConfig();
return cfg.meetingNotesModel ?? cfg.model;
}


@@ -3,11 +3,18 @@ import { createOpenRouter } from '@openrouter/ai-sdk-provider';
 import { getAccessToken } from '../auth/tokens.js';
 import { API_URL } from '../config/env.js';
-export async function getGatewayProvider(): Promise<ProviderV2> {
-  const accessToken = await getAccessToken();
+const authedFetch: typeof fetch = async (input, init) => {
+  const token = await getAccessToken();
+  const headers = new Headers(init?.headers);
+  headers.set('Authorization', `Bearer ${token}`);
+  return fetch(input, { ...init, headers });
+};
+export function getGatewayProvider(): ProviderV2 {
   return createOpenRouter({
     baseURL: `${API_URL}/v1/llm`,
-    apiKey: accessToken,
+    apiKey: 'managed-by-rowboat',
+    fetch: authedFetch,
   });
 }


@@ -8,7 +8,6 @@ import { createOpenRouter } from '@openrouter/ai-sdk-provider';
 import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 import { LlmModelConfig, LlmProvider } from "@x/shared/dist/models.js";
 import z from "zod";
-import { isSignedIn } from "../account/account.js";
 import { getGatewayProvider } from "./gateway.js";
 export const Provider = LlmProvider;
@@ -65,6 +64,8 @@ export function createProvider(config: z.infer<typeof Provider>): ProviderV2 {
       baseURL,
       headers,
     }) as unknown as ProviderV2;
+    case "rowboat":
+      return getGatewayProvider();
     default:
       throw new Error(`Unsupported provider flavor: ${config.flavor}`);
   }
@@ -80,9 +81,7 @@ export async function testModelConnection(
   const controller = new AbortController();
   const timeout = setTimeout(() => controller.abort(), effectiveTimeout);
   try {
-    const provider = await isSignedIn()
-      ? await getGatewayProvider()
-      : createProvider(providerConfig);
+    const provider = createProvider(providerConfig);
     const languageModel = provider.languageModel(model);
     await generateText({
       model: languageModel,


@@ -52,6 +52,7 @@ export class FSModelConfigRepo implements IModelConfigRepo {
       models: config.models,
       knowledgeGraphModel: config.knowledgeGraphModel,
       meetingNotesModel: config.meetingNotesModel,
+      trackBlockModel: config.trackBlockModel,
     };
     const toWrite = { ...config, providers: existingProviders };


@@ -1,5 +1,4 @@
 ---
-model: gpt-4.1
 tools:
   workspace-readFile:
     type: builtin


@@ -1,5 +1,4 @@
 ---
-model: gpt-4.1
 tools:
   workspace-readFile:
     type: builtin


@@ -2,6 +2,7 @@ import fs from 'fs';
 import path from 'path';
 import { WorkDir } from '../config/config.js';
 import { createRun, createMessage } from '../runs/runs.js';
+import { getKgModel } from '../models/defaults.js';
 import { waitForRunCompletion } from '../agents/utils.js';
 import {
   loadConfig,
@@ -41,6 +42,9 @@ async function runAgent(agentName: string): Promise<void> {
   // The agent file is expected to be in the agents directory with the same name
   const run = await createRun({
     agentId: agentName,
+    model: await getKgModel(),
+    useCase: 'knowledge_sync',
+    subUseCase: 'pre_built',
   });
   // Build trigger message with user context


@@ -5,10 +5,35 @@ import path from "path";
 import fsp from "fs/promises";
 import fs from "fs";
 import readline from "readline";
-import { Run, RunEvent, StartEvent, CreateRunOptions, ListRunsResponse, MessageEvent } from "@x/shared/dist/runs.js";
+import { Run, RunEvent, StartEvent, ListRunsResponse, MessageEvent, UseCase } from "@x/shared/dist/runs.js";
+import { getDefaultModelAndProvider } from "../models/defaults.js";
+/**
+ * Reading-only schemas: extend the canonical `StartEvent` / `RunEvent` to
+ * accept legacy run files written before `model`/`provider` were required.
+ *
+ * `RunEvent.or(LegacyStartEvent)` works because zod unions try left-to-right:
+ * for any non-start event RunEvent matches first; for a strict start event
+ * RunEvent still matches; only a legacy start event falls through and parses
+ * as LegacyStartEvent. New event types stay maintained in one place
+ * (`@x/shared/dist/runs.js`); the lenient form just adds one fallback variant.
+ */
+const LegacyStartEvent = StartEvent.extend({
+  model: z.string().optional(),
+  provider: z.string().optional(),
+});
+const ReadRunEvent = RunEvent.or(LegacyStartEvent);
+export type CreateRunRepoOptions = {
+  agentId: string;
+  model: string;
+  provider: string;
+  useCase: z.infer<typeof UseCase>;
+  subUseCase?: string;
+};
 export interface IRunsRepo {
-  create(options: z.infer<typeof CreateRunOptions>): Promise<z.infer<typeof Run>>;
+  create(options: CreateRunRepoOptions): Promise<z.infer<typeof Run>>;
   fetch(id: string): Promise<z.infer<typeof Run>>;
   list(cursor?: string): Promise<z.infer<typeof ListRunsResponse>>;
   appendEvents(runId: string, events: z.infer<typeof RunEvent>[]): Promise<void>;
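The left-to-right fallback the comment above relies on can be sketched without zod; the hand-rolled parsers below stand in for the schemas, and all names are illustrative:

```typescript
// Stand-ins for the zod schemas: a strict start-event parser and a legacy
// fallback, tried in order like `RunEvent.or(LegacyStartEvent)`.
type StartEvent = { type: "start"; model: string; provider: string };
type LegacyStartEvent = { type: "start"; model?: string; provider?: string };

function tryStrict(raw: unknown): StartEvent | null {
  const r = raw as Record<string, unknown> | null;
  return r?.type === "start" && typeof r.model === "string" && typeof r.provider === "string"
    ? (raw as StartEvent)
    : null;
}

// Union semantics: the strict variant wins whenever it matches, so only a
// pre-migration event (missing model/provider) reaches the legacy branch.
function parseStart(raw: unknown): StartEvent | LegacyStartEvent {
  const strict = tryStrict(raw);
  if (strict) return strict;
  const r = raw as Record<string, unknown> | null;
  if (r?.type === "start") return raw as LegacyStartEvent;
  throw new Error("not a start event");
}

console.log(parseStart({ type: "start", model: "m1", provider: "p1" }));
console.log(parseStart({ type: "start" }));
```

Order matters: putting the lenient variant first would make every strict event parse under the looser shape as well.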
@@ -69,16 +94,19 @@ export class FSRunsRepo implements IRunsRepo {
   /**
    * Read file line-by-line using streams, stopping early once we have
    * the start event and title (or determine there's no title).
+   *
+   * Parses the start event with `LegacyStartEvent` so runs written before
+   * `model`/`provider` were required still surface in the list view.
    */
   private async readRunMetadata(filePath: string): Promise<{
-    start: z.infer<typeof StartEvent>;
+    start: z.infer<typeof LegacyStartEvent>;
     title: string | undefined;
   } | null> {
     return new Promise((resolve) => {
       const stream = fs.createReadStream(filePath, { encoding: 'utf8' });
       const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });
-      let start: z.infer<typeof StartEvent> | null = null;
+      let start: z.infer<typeof LegacyStartEvent> | null = null;
       let title: string | undefined;
       let lineIndex = 0;
@@ -88,11 +116,10 @@ export class FSRunsRepo implements IRunsRepo {
       try {
         if (lineIndex === 0) {
-          // First line should be the start event
-          start = StartEvent.parse(JSON.parse(trimmed));
+          start = LegacyStartEvent.parse(JSON.parse(trimmed));
         } else {
           // Subsequent lines - look for first user message or assistant response
-          const event = RunEvent.parse(JSON.parse(trimmed));
+          const event = ReadRunEvent.parse(JSON.parse(trimmed));
           if (event.type === 'message') {
             const msg = event.message;
             if (msg.role === 'user') {
@@ -157,13 +184,17 @@
     );
   }
-  async create(options: z.infer<typeof CreateRunOptions>): Promise<z.infer<typeof Run>> {
+  async create(options: CreateRunRepoOptions): Promise<z.infer<typeof Run>> {
     const runId = await this.idGenerator.next();
     const ts = new Date().toISOString();
     const start: z.infer<typeof StartEvent> = {
       type: "start",
       runId,
       agentName: options.agentId,
+      model: options.model,
+      provider: options.provider,
+      useCase: options.useCase,
+      ...(options.subUseCase ? { subUseCase: options.subUseCase } : {}),
       subflow: [],
       ts,
     };
@@ -172,24 +203,45 @@
       id: runId,
       createdAt: ts,
       agentId: options.agentId,
+      model: options.model,
+      provider: options.provider,
+      useCase: options.useCase,
+      ...(options.subUseCase ? { subUseCase: options.subUseCase } : {}),
       log: [start],
     };
   }
   async fetch(id: string): Promise<z.infer<typeof Run>> {
     const contents = await fsp.readFile(path.join(WorkDir, 'runs', `${id}.jsonl`), 'utf8');
-    const events = contents.split('\n')
+    // Parse with the lenient schema so legacy start events (no model/provider) load.
+    const rawEvents = contents.split('\n')
       .filter(line => line.trim() !== '')
-      .map(line => RunEvent.parse(JSON.parse(line)));
+      .map(line => ReadRunEvent.parse(JSON.parse(line)));
-    if (events.length === 0 || events[0].type !== 'start') {
+    if (rawEvents.length === 0 || rawEvents[0].type !== 'start') {
       throw new Error('Corrupt run data');
     }
+    // Backfill model/provider on the start event from current defaults if missing,
+    // then promote to the canonical strict types for callers.
+    const rawStart = rawEvents[0];
+    const defaults = (!rawStart.model || !rawStart.provider)
+      ? await getDefaultModelAndProvider()
+      : null;
+    const start: z.infer<typeof StartEvent> = {
+      ...rawStart,
+      model: rawStart.model ?? defaults!.model,
+      provider: rawStart.provider ?? defaults!.provider,
+    };
+    const events: z.infer<typeof RunEvent>[] = [start, ...rawEvents.slice(1) as z.infer<typeof RunEvent>[]];
     const title = this.extractTitle(events);
     return {
       id,
       title,
-      createdAt: events[0].ts!,
-      agentId: events[0].agentName,
+      createdAt: start.ts!,
+      agentId: start.agentName,
+      model: start.model,
+      provider: start.provider,
+      ...(start.useCase ? { useCase: start.useCase } : {}),
+      ...(start.subUseCase ? { subUseCase: start.subUseCase } : {}),
       log: events,
     };
   }


@@ -10,11 +10,28 @@ import { IRunsLock } from "./lock.js";
 import { forceCloseAllMcpClients } from "../mcp/mcp.js";
 import { extractCommandNames } from "../application/lib/command-executor.js";
 import { addToSecurityConfig } from "../config/security.js";
+import { loadAgent } from "../agents/runtime.js";
+import { getDefaultModelAndProvider } from "../models/defaults.js";
 export async function createRun(opts: z.infer<typeof CreateRunOptions>): Promise<z.infer<typeof Run>> {
   const repo = container.resolve<IRunsRepo>('runsRepo');
   const bus = container.resolve<IBus>('bus');
-  const run = await repo.create(opts);
+  // Resolve model+provider once at creation: opts > agent declaration > defaults.
+  // Both fields are plain strings (provider is a name, looked up at runtime).
+  const agent = await loadAgent(opts.agentId);
+  const defaults = await getDefaultModelAndProvider();
+  const model = opts.model ?? agent.model ?? defaults.model;
+  const provider = opts.provider ?? agent.provider ?? defaults.provider;
+  const useCase = opts.useCase ?? "copilot_chat";
+  const run = await repo.create({
+    agentId: opts.agentId,
+    model,
+    provider,
+    useCase,
+    ...(opts.subUseCase ? { subUseCase: opts.subUseCase } : {}),
+  });
   await bus.publish(run.log[0]);
   return run;
 }


@@ -7,6 +7,7 @@ import { RemoveOptions, WriteFileOptions, WriteFileResult } from 'packages/share
 import { WorkDir } from '../config/config.js';
 import { rewriteWikiLinksForRenamedKnowledgeFile } from './wiki-link-rewrite.js';
 import { commitAll } from '../knowledge/version_history.js';
+import { withFileLock } from '../knowledge/file-lock.js';
 // ============================================================================
 // Path Utilities
@@ -249,38 +250,42 @@
     await fs.mkdir(path.dirname(filePath), { recursive: true });
   }
-  // Check expectedEtag if provided (conflict detection)
-  if (opts?.expectedEtag) {
-    const existingStats = await fs.lstat(filePath);
-    const existingEtag = computeEtag(existingStats.size, existingStats.mtimeMs);
-    if (existingEtag !== opts.expectedEtag) {
-      throw new Error('File was modified (ETag mismatch)');
-    }
-  }
-  // Convert data to buffer based on encoding
-  let buffer: Buffer;
-  if (encoding === 'utf8') {
-    buffer = Buffer.from(data, 'utf8');
-  } else if (encoding === 'base64') {
-    buffer = Buffer.from(data, 'base64');
-  } else {
-    // binary: assume data is base64-encoded
-    buffer = Buffer.from(data, 'base64');
-  }
-  if (atomic) {
-    // Atomic write: write to temp file, then rename
-    const tempPath = filePath + '.tmp.' + Date.now() + Math.random().toString(36).slice(2);
-    await fs.writeFile(tempPath, buffer);
-    await fs.rename(tempPath, filePath);
-  } else {
-    await fs.writeFile(filePath, buffer);
-  }
-  const stats = await fs.lstat(filePath);
-  const stat = statToSchema(stats, 'file');
-  const etag = computeEtag(stats.size, stats.mtimeMs);
+  const result = await withFileLock(filePath, async () => {
+    // Check expectedEtag if provided (conflict detection)
+    if (opts?.expectedEtag) {
+      const existingStats = await fs.lstat(filePath);
+      const existingEtag = computeEtag(existingStats.size, existingStats.mtimeMs);
+      if (existingEtag !== opts.expectedEtag) {
+        throw new Error('File was modified (ETag mismatch)');
+      }
+    }
+    // Convert data to buffer based on encoding
+    let buffer: Buffer;
+    if (encoding === 'utf8') {
+      buffer = Buffer.from(data, 'utf8');
+    } else if (encoding === 'base64') {
+      buffer = Buffer.from(data, 'base64');
+    } else {
+      // binary: assume data is base64-encoded
+      buffer = Buffer.from(data, 'base64');
+    }
+    if (atomic) {
+      // Atomic write: write to temp file, then rename
+      const tempPath = filePath + '.tmp.' + Date.now() + Math.random().toString(36).slice(2);
+      await fs.writeFile(tempPath, buffer);
+      await fs.rename(tempPath, filePath);
+    } else {
+      await fs.writeFile(filePath, buffer);
+    }
+    const stats = await fs.lstat(filePath);
+    const stat = statToSchema(stats, 'file');
+    const etag = computeEtag(stats.size, stats.mtimeMs);
+    return { stat, etag };
+  });
 // Schedule a debounced version history commit for knowledge files
if (relPath.startsWith('knowledge/') && relPath.endsWith('.md')) { if (relPath.startsWith('knowledge/') && relPath.endsWith('.md')) {
@ -289,8 +294,8 @@ export async function writeFile(
return { return {
path: relPath, path: relPath,
stat, stat: result.stat,
etag, etag: result.etag,
}; };
} }
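The lock added above serializes the check-then-write sequence; the ETag is the optimistic half of the conflict check. `computeEtag`'s real body is not part of this diff, so the sketch below assumes a minimal size+mtime fingerprint:

```typescript
// Assumed shape of computeEtag: a size+mtime fingerprint (not the app's real implementation).
function computeEtag(size: number, mtimeMs: number): string {
  return `${size}-${Math.trunc(mtimeMs)}`;
}

// Throws only when the caller supplied an expectedEtag and the file's
// stats no longer match what the caller last read.
function assertEtagUnchanged(size: number, mtimeMs: number, expectedEtag?: string): void {
  if (expectedEtag === undefined) return; // conflict detection not requested
  if (computeEtag(size, mtimeMs) !== expectedEtag) {
    throw new Error('File was modified (ETag mismatch)');
  }
}

const etag = computeEtag(1024, 1700000000123.9);
```

Without the lock, two writers could both pass the ETag check before either writes; with it, the second writer sees the first writer's new mtime and fails cleanly.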


@@ -10,6 +10,7 @@ export * as serviceEvents from './service-events.js'
 export * as inlineTask from './inline-task.js';
 export * as blocks from './blocks.js';
 export * as trackBlock from './track-block.js';
+export * as promptBlock from './prompt-block.js';
 export * as frontmatter from './frontmatter.js';
 export * as bases from './bases.js';
 export * as browserControl from './browser-control.js';


@@ -25,6 +25,13 @@ const ipcSchemas = {
       electron: z.string(),
     }),
   },
+  'analytics:bootstrap': {
+    req: z.null(),
+    res: z.object({
+      installationId: z.string(),
+      apiUrl: z.string(),
+    }),
+  },
   'workspace:getRoot': {
     req: z.null(),
     res: z.object({
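A schema map like `ipcSchemas` lets each channel name pin its request/response types at compile time. The sketch below swaps zod for plain interfaces to stay dependency-free; the handler wiring and sample values are assumptions, not the app's actual Electron plumbing:

```typescript
// One entry per channel, mirroring the req/res pairs in ipcSchemas above.
interface IpcContract {
  'analytics:bootstrap': { req: null; res: { installationId: string; apiUrl: string } };
}

type Handlers = {
  [C in keyof IpcContract]: (req: IpcContract[C]['req']) => Promise<IpcContract[C]['res']>;
};

// Illustrative handler; a real app would resolve these values from disk/config.
const handlers: Handlers = {
  'analytics:bootstrap': async () => ({
    installationId: 'inst-123',
    apiUrl: 'https://example.invalid',
  }),
};

// Type-safe invoke: the channel name selects both the request and response type.
async function invoke<C extends keyof IpcContract>(
  channel: C,
  req: IpcContract[C]['req'],
): Promise<IpcContract[C]['res']> {
  return handlers[channel](req); // a real app would route this over Electron IPC
}
```

In the real code, zod additionally validates the payloads at runtime; the type-level contract alone only protects compile-time callers.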


@@ -1,7 +1,7 @@
 import { z } from "zod";

 export const LlmProvider = z.object({
-  flavor: z.enum(["openai", "anthropic", "google", "openrouter", "aigateway", "ollama", "openai-compatible"]),
+  flavor: z.enum(["openai", "anthropic", "google", "openrouter", "aigateway", "ollama", "openai-compatible", "rowboat"]),
   apiKey: z.string().optional(),
   baseURL: z.string().optional(),
   headers: z.record(z.string(), z.string()).optional(),
@@ -11,6 +11,16 @@ export const LlmModelConfig = z.object({
   provider: LlmProvider,
   model: z.string(),
   models: z.array(z.string()).optional(),
+  providers: z.record(z.string(), z.object({
+    apiKey: z.string().optional(),
+    baseURL: z.string().optional(),
+    headers: z.record(z.string(), z.string()).optional(),
+    model: z.string().optional(),
+    models: z.array(z.string()).optional(),
+  })).optional(),
+  // Per-category model overrides (BYOK only — signed-in users always get
+  // the curated gateway defaults). Read by helpers in core/models/defaults.ts.
   knowledgeGraphModel: z.string().optional(),
   meetingNotesModel: z.string().optional(),
+  trackBlockModel: z.string().optional(),
 });
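The new `providers` record holds named partial configs alongside the base provider. How the app merges them is not shown in this diff; a plausible sketch, assuming "defined override fields win, undefined fields fall through to the base":

```typescript
// Field subset from the providers record above; headers/models omitted for brevity.
interface ProviderFields { apiKey?: string; baseURL?: string; model?: string }

function overlayProvider(
  base: ProviderFields,
  overrides: Record<string, ProviderFields>,
  name?: string,
): ProviderFields {
  const override = (name && overrides[name]) || {};
  // Drop undefined values so they don't clobber base fields during spread.
  const defined = Object.fromEntries(
    Object.entries(override).filter(([, v]) => v !== undefined),
  );
  return { ...base, ...defined };
}

const effective = overlayProvider(
  { apiKey: 'base-key', baseURL: 'https://base.example', model: 'base-model' },
  { openai: { model: 'override-model' } }, // entry name and model are illustrative
  'openai',
);
```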


@@ -0,0 +1,8 @@
+import z from 'zod';
+
+export const PromptBlockSchema = z.object({
+  label: z.string().min(1).describe('Short title shown on the card'),
+  instruction: z.string().min(1).describe('Full prompt sent to Copilot when Run is clicked'),
+});
+
+export type PromptBlock = z.infer<typeof PromptBlockSchema>;
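The schema above accepts exactly two fields and rejects empty strings via `.min(1)`. A dependency-free equivalent of those checks, for illustration only:

```typescript
interface PromptBlock { label: string; instruction: string }

// Mirrors PromptBlockSchema: both fields required and non-empty.
function parsePromptBlock(input: Partial<PromptBlock>): PromptBlock {
  if (!input.label) throw new Error('label: non-empty string required');
  if (!input.instruction) throw new Error('instruction: non-empty string required');
  return { label: input.label, instruction: input.instruction };
}

// Sample values are illustrative, not from the app.
const block = parsePromptBlock({
  label: 'Summarize inbox',
  instruction: 'Summarize unread email threads from today.',
});
```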


@@ -19,6 +19,17 @@ export const RunProcessingEndEvent = BaseRunEvent.extend({
 export const StartEvent = BaseRunEvent.extend({
   type: z.literal("start"),
   agentName: z.string(),
+  model: z.string(),
+  provider: z.string(),
+  // useCase/subUseCase tag the run for analytics. Optional on read so legacy
+  // run files written before these fields existed still parse cleanly.
+  useCase: z.enum([
+    "copilot_chat",
+    "track_block",
+    "meeting_note",
+    "knowledge_sync",
+  ]).optional(),
+  subUseCase: z.string().optional(),
 });

 export const SpawnSubFlowEvent = BaseRunEvent.extend({
@@ -116,11 +127,22 @@ export const AskHumanResponsePayload = AskHumanResponseEvent.pick({
   response: true,
 });

+export const UseCase = z.enum([
+  "copilot_chat",
+  "track_block",
+  "meeting_note",
+  "knowledge_sync",
+]);
+
 export const Run = z.object({
   id: z.string(),
   title: z.string().optional(),
   createdAt: z.iso.datetime(),
   agentId: z.string(),
+  model: z.string(),
+  provider: z.string(),
+  useCase: UseCase.optional(),
+  subUseCase: z.string().optional(),
   log: z.array(RunEvent),
 });
@@ -134,6 +156,10 @@ export const ListRunsResponse = z.object({
   nextCursor: z.string().optional(),
 });

-export const CreateRunOptions = Run.pick({
-  agentId: true,
-});
+export const CreateRunOptions = z.object({
+  agentId: z.string(),
+  model: z.string().optional(),
+  provider: z.string().optional(),
+  useCase: UseCase.optional(),
+  subUseCase: z.string().optional(),
+});
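The comment in the hunk above explains why `useCase` is optional: run files written before the field existed must still load, while a present-but-unknown value should be rejected. A plain-TypeScript stand-in for that zod behavior:

```typescript
// Same four values as the UseCase enum above.
const USE_CASES = ['copilot_chat', 'track_block', 'meeting_note', 'knowledge_sync'] as const;
type UseCaseName = (typeof USE_CASES)[number];

function readUseCase(raw: { useCase?: string }): UseCaseName | undefined {
  if (raw.useCase === undefined) return undefined; // legacy run file: absent field is valid
  if ((USE_CASES as readonly string[]).includes(raw.useCase)) {
    return raw.useCase as UseCaseName;
  }
  throw new Error(`unknown useCase: ${raw.useCase}`); // present but invalid: reject
}

const legacy = readUseCase({});                          // pre-analytics run file
const tagged = readUseCase({ useCase: 'track_block' });  // run written after this change
```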


@@ -25,6 +25,8 @@ export const TrackBlockSchema = z.object({
   eventMatchCriteria: z.string().optional().describe('When set, this track participates in event-based triggering. Describe what kinds of events should consider this track for an update (e.g. "Emails about Q3 planning"). Omit to disable event triggers — the track will only run on schedule or manually.'),
   active: z.boolean().default(true).describe('Set false to pause without deleting'),
   schedule: TrackScheduleSchema.optional(),
+  model: z.string().optional().describe('ADVANCED — leave unset. Per-track LLM model override (e.g. "anthropic/claude-sonnet-4.6"). Only set when the user explicitly asked for a specific model for THIS track. The global default already picks a tuned model for tracks; overriding usually makes things worse, not better.'),
+  provider: z.string().optional().describe('ADVANCED — leave unset. Per-track provider name override (e.g. "openai", "anthropic"). Only set when the user explicitly asked for a specific provider for THIS track. Almost always omitted; the global default flows through correctly.'),
   lastRunAt: z.string().optional().describe('Runtime-managed — never write this yourself'),
   lastRunId: z.string().optional().describe('Runtime-managed — never write this yourself'),
   lastRunSummary: z.string().optional().describe('Runtime-managed — never write this yourself'),
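Per the commit message, the track runner passes these optional fields through to `createRun` only when set, so an unset field lets the global default flow through. A sketch of that pass-through; the function and option names here are assumptions:

```typescript
// Just the two new optional fields from TrackBlockSchema above.
interface TrackOverrides { model?: string; provider?: string }

function buildRunOptions(track: TrackOverrides, agentId: string) {
  return {
    agentId,
    useCase: 'track_block' as const,
    // Spread-only-when-present: an omitted field never reaches createRun,
    // so the global default (or agent declaration) applies instead.
    ...(track.model ? { model: track.model } : {}),
    ...(track.provider ? { provider: track.provider } : {}),
  };
}

const withOverride = buildRunOptions({ model: 'anthropic/claude-sonnet-4.6' }, 'agent-1');
const withDefault = buildRunOptions({}, 'agent-1');
```

The conditional spread matters because `createRun` distinguishes "field absent" from "field present"; passing `model: undefined` explicitly could behave differently under stricter schema parsing.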

apps/x/pnpm-lock.yaml (generated)

@@ -184,6 +184,9 @@ importers:
       '@tiptap/extension-placeholder':
         specifier: ^3.15.3
         version: 3.15.3(@tiptap/extensions@3.15.3(@tiptap/core@3.15.3(@tiptap/pm@3.15.3))(@tiptap/pm@3.15.3))
+      '@tiptap/extension-table':
+        specifier: ^3.22.4
+        version: 3.22.4(@tiptap/core@3.15.3(@tiptap/pm@3.15.3))(@tiptap/pm@3.15.3)
       '@tiptap/extension-task-item':
         specifier: ^3.15.3
         version: 3.15.3(@tiptap/extension-list@3.15.3(@tiptap/core@3.15.3(@tiptap/pm@3.15.3))(@tiptap/pm@3.15.3))
@@ -244,6 +247,9 @@ importers:
       recharts:
         specifier: ^3.8.0
         version: 3.8.1(@types/react@19.2.7)(react-dom@19.2.3(react@19.2.3))(react-is@16.13.1)(react@19.2.3)(redux@5.0.1)
+      remark-breaks:
+        specifier: ^4.0.0
+        version: 4.0.0
       sonner:
         specifier: ^2.0.7
         version: 2.0.7(react-dom@19.2.3(react@19.2.3))(react@19.2.3)
@@ -398,6 +404,9 @@ importers:
       pdf-parse:
         specifier: ^2.4.5
         version: 2.4.5
+      posthog-node:
+        specifier: ^4.18.0
+        version: 4.18.0
       react:
         specifier: ^19.2.3
         version: 19.2.3
@@ -3166,6 +3175,12 @@ packages:
     peerDependencies:
       '@tiptap/core': ^3.15.3

+  '@tiptap/extension-table@3.22.4':
+    resolution: {integrity: sha512-kjvLv3Z4JI+1tLDqZKa+bKU8VcxY+ZOyMCKWQA7wYmy8nKWkLJ60W+xy8AcXXpHB2goCIgSFLhsTyswx0GXH4w==}
+    peerDependencies:
+      '@tiptap/core': 3.22.4
+      '@tiptap/pm': 3.22.4
+
   '@tiptap/extension-task-item@3.15.3':
     resolution: {integrity: sha512-bkrmouc1rE5n9ONw2G7+zCGfBRoF2HJWq8REThPMzg/6+L5GJJ5YTN4UmncaP48U9jHX8xeihjgg9Ypenjl4lw==}
     peerDependencies:
@@ -5799,6 +5814,9 @@ packages:
   mdast-util-mdxjs-esm@2.0.1:
     resolution: {integrity: sha512-EcmOpxsZ96CvlP03NghtH1EsLtr0n9Tm4lPUJUBccV9RwUOneqSycg19n5HGzCf+10LozMRSObtVr3ee1WoHtg==}

+  mdast-util-newline-to-break@2.0.0:
+    resolution: {integrity: sha512-MbgeFca0hLYIEx/2zGsszCSEJJ1JSCdiY5xQxRcLDDGa8EPvlLPupJ4DSajbMPAnC0je8jfb9TiUATnxxrHUog==}
+
   mdast-util-phrasing@4.1.0:
     resolution: {integrity: sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w==}
@@ -6456,6 +6474,10 @@ packages:
   posthog-js@1.332.0:
     resolution: {integrity: sha512-w3+sL+IFK4mpfFmgTW7On8cR+z34pre+SOewx+eHZQSYF9RYqXsLIhrxagWbQKkowPd4tCwUHrkS1+VHsjnPqA==}

+  posthog-node@4.18.0:
+    resolution: {integrity: sha512-XROs1h+DNatgKh/AlIlCtDxWzwrKdYDb2mOs58n4yN8BkGN9ewqeQwG5ApS4/IzwCb7HPttUkOVulkYatd2PIw==}
+    engines: {node: '>=15.0.0'}
+
   postject@1.0.0-alpha.6:
     resolution: {integrity: sha512-b9Eb8h2eVqNE8edvKdwqkrY6O7kAwmI8kcnBv1NScolYJbo59XUF0noFq+lxbC1yN20bmC0WBEbDC5H/7ASb0A==}
     engines: {node: '>=14.0.0'}
@@ -6759,6 +6781,9 @@ packages:
   rehype-raw@7.0.0:
     resolution: {integrity: sha512-/aE8hCfKlQeA8LmyeyQvQF3eBiLRGNlfBJEvWH7ivp9sBqs7TNqBL5X3v157rM4IFETqDnIOO+z5M/biZbo9Ww==}

+  remark-breaks@4.0.0:
+    resolution: {integrity: sha512-IjEjJOkH4FuJvHZVIW0QCDWxcG96kCq7An/KVH2NfJe6rKZU2AsHeB3OEjPNRxi4QC34Xdx7I2KGYn6IpT7gxQ==}
+
   remark-cjk-friendly-gfm-strikethrough@1.2.3:
     resolution: {integrity: sha512-bXfMZtsaomK6ysNN/UGRIcasQAYkC10NtPmP0oOHOV8YOhA2TXmwRXCku4qOzjIFxAPfish5+XS0eIug2PzNZA==}
     engines: {node: '>=16'}
@@ -11151,6 +11176,11 @@ snapshots:
     dependencies:
       '@tiptap/core': 3.15.3(@tiptap/pm@3.15.3)

+  '@tiptap/extension-table@3.22.4(@tiptap/core@3.15.3(@tiptap/pm@3.15.3))(@tiptap/pm@3.15.3)':
+    dependencies:
+      '@tiptap/core': 3.15.3(@tiptap/pm@3.15.3)
+      '@tiptap/pm': 3.15.3
+
   '@tiptap/extension-task-item@3.15.3(@tiptap/extension-list@3.15.3(@tiptap/core@3.15.3(@tiptap/pm@3.15.3))(@tiptap/pm@3.15.3))':
     dependencies:
       '@tiptap/extension-list': 3.15.3(@tiptap/core@3.15.3(@tiptap/pm@3.15.3))(@tiptap/pm@3.15.3)
@@ -14400,6 +14430,11 @@ snapshots:
     transitivePeerDependencies:
       - supports-color

+  mdast-util-newline-to-break@2.0.0:
+    dependencies:
+      '@types/mdast': 4.0.4
+      mdast-util-find-and-replace: 3.0.2
+
   mdast-util-phrasing@4.1.0:
     dependencies:
       '@types/mdast': 4.0.4
@@ -15175,6 +15210,12 @@ snapshots:
       query-selector-shadow-dom: 1.0.1
       web-vitals: 4.2.4

+  posthog-node@4.18.0:
+    dependencies:
+      axios: 1.13.2
+    transitivePeerDependencies:
+      - debug
+
   postject@1.0.0-alpha.6:
     dependencies:
       commander: 9.5.0
@@ -15594,6 +15635,12 @@ snapshots:
       hast-util-raw: 9.1.0
       vfile: 6.0.3

+  remark-breaks@4.0.0:
+    dependencies:
+      '@types/mdast': 4.0.4
+      mdast-util-newline-to-break: 2.0.0
+      unified: 11.0.5
+
   remark-cjk-friendly-gfm-strikethrough@1.2.3(@types/mdast@4.0.4)(micromark-util-types@2.0.2)(micromark@4.0.2)(unified@11.0.5):
     dependencies:
       micromark-extension-cjk-friendly-gfm-strikethrough: 1.2.3(micromark-util-types@2.0.2)(micromark@4.0.2)