Commit c148a13

docs(ai-chat): surface chat.toStreamTextOptions() in entry-point pages
The spread-into-streamText pattern was only mentioned on the compaction / pending-messages / background-injection feature pages — buried far enough that customers writing a fresh chat.agent miss it and silently lose prepareStep wiring (compaction, steering, background context injection). The deeper pages are the ones that already include it; the entry points (quick-start, backend, overview) were the gap.

quick-start: spread the call in the very first `streamText` example, plus a Warning explaining what it wires up.

backend: top-of-page Warning before the first example block, with the canonical "spread first, override after" snippet. Existing examples in this file keep the spread implicit for brevity now that the warning establishes the pattern.

overview: short Warning in "What the backend accumulates" that points back to backend.mdx.
1 parent 3da7717 commit c148a13

3 files changed

Lines changed: 32 additions & 2 deletions
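The "spread first, override after" rule the commit message relies on is plain JavaScript/TypeScript object-spread precedence: in an object literal, later properties win over earlier ones. A minimal sketch of that semantics — the option names and the stand-in `toStreamTextOptions` below are illustrative only, not the real SDK signatures:

```typescript
// Illustrative stand-in for the shared options factory.
// These option names are made up for the demo; they are not
// the actual streamText / chat.toStreamTextOptions shapes.
const toStreamTextOptions = () => ({
  system: "shared system prompt",
  maxRetries: 2,
});

const options = {
  ...toStreamTextOptions(), // shared wiring first
  system: "call-site override", // later key wins over the spread
};

console.log(options); // { system: "call-site override", maxRetries: 2 }
```

Spreading last would invert this and clobber any call-site overrides, which is why the docs pin the ordering.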

docs/ai-chat/backend.mdx

Lines changed: 18 additions & 0 deletions
@@ -16,6 +16,23 @@ The highest-level approach. Handles message accumulation, stop signals, turn lif
 Every `chat.agent` conversation is backed by a durable Session — `externalId` is your `chatId`, `type` is `"chat.agent"`, `taskIdentifier` is the agent's task ID. The session is the run manager: it owns the chat's runs, persists across run lifecycles, and orchestrates handoffs (idle continuation, `chat.requestUpgrade`). You rarely need to touch the session directly (`chat.stream`, `chat.messages`, `chat.stopSignal` wrap everything), but `payload.sessionId` is available if you want to reach in — e.g. `sessions.open(payload.sessionId)` to write from a sub-agent or from outside the turn loop.
 </Info>
 
+<Warning>
+**Always spread `chat.toStreamTextOptions()` into every `streamText` call.** It wires up the `prepareStep` callback that drives [compaction](/ai-chat/compaction), [steering](/ai-chat/pending-messages), and [background injection](/ai-chat/background-injection) — features that silently no-op if the spread is missing. It also injects the system prompt set via `chat.prompt()`, the resolved model (when a registry is provided), and telemetry metadata.
+
+Spread it **first** in the options object so any explicit overrides win:
+
+```ts
+streamText({
+  ...chat.toStreamTextOptions({ registry, tools }),
+  messages,
+  abortSignal: signal,
+  // any explicit overrides go here
+});
+```
+
+Examples in this doc keep the spread implicit for brevity, but you should include it in real code.
+</Warning>
+
 ### Simple: return a StreamTextResult
 
 Return the `streamText` result from `run` and it's automatically piped to the frontend:
@@ -29,6 +46,7 @@ export const simpleChat = chat.agent({
   id: "simple-chat",
   run: async ({ messages, signal }) => {
     return streamText({
+      ...chat.toStreamTextOptions(), // prepareStep, system, telemetry — see callout above
       model: openai("gpt-4o"),
       system: "You are a helpful assistant.",
       messages,

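The warning above says the features "silently no-op" when the spread is missing. That kind of silence typically comes from optional-chained callback invocation: an absent `prepareStep` is simply skipped rather than reported. A minimal sketch under that assumption, with hypothetical names rather than the real SDK internals:

```typescript
// Hypothetical sketch: why forgetting the spread fails silently.
// A turn loop that invokes an optional callback via ?. skips it
// without error when the callback was never wired in.
type StepOptions = { prepareStep?: (step: number) => void };

function runTurn(options: StepOptions): string[] {
  const events: string[] = [];
  for (let step = 0; step < 2; step++) {
    // With the spread, prepareStep would compact/steer here;
    // without it, this line is a no-op and the turn proceeds.
    options.prepareStep?.(step);
    events.push(`model step ${step}`);
  }
  return events;
}

const wired = runTurn({ prepareStep: (s) => console.log(`prepareStep for step ${s}`) });
const forgotten = runTurn({}); // no crash, no prepareStep: a silent no-op
```

Nothing throws in the second call, which is exactly why the missing spread is hard to notice in testing.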
docs/ai-chat/overview.mdx

Lines changed: 4 additions & 0 deletions
@@ -160,6 +160,10 @@ The accumulated messages are available in:
 - `onTurnStart()` as `uiMessages` (`UIMessage[]`) — for persisting before streaming
 - `onTurnComplete()` as `uiMessages` (`UIMessage[]`) — for persisting after the response
 
+<Warning>
+**Always spread `chat.toStreamTextOptions()` into every `streamText` call.** It wires up the `prepareStep` callback that drives compaction, steering, and background injection. Skipping the spread silently disables those features. See [Backend → chat.agent()](/ai-chat/backend#chatagent).
+</Warning>
+
 Agents appear in the **Agents** section of the dashboard (not Tasks) and can be tested via the **Playground**.
 
 ## Three approaches

docs/ai-chat/quick-start.mdx

Lines changed: 10 additions & 2 deletions
@@ -18,9 +18,13 @@ description: "Get a working AI agent in 3 steps — define an agent, generate a
 export const myChat = chat.agent({
   id: "my-chat",
   run: async ({ messages, signal }) => {
-    // messages is ModelMessage[] — pass directly to streamText
-    // signal fires on stop or run cancel
     return streamText({
+      // Spread chat.toStreamTextOptions() FIRST — it wires up
+      // prepareStep (compaction, steering, background injection),
+      // the system prompt set via chat.prompt(), and telemetry.
+      // Skipping this is the single most common cause of subtle
+      // bugs (silent broken compaction, missing steering, etc.).
+      ...chat.toStreamTextOptions(),
       model: openai("gpt-4o"),
       messages,
       abortSignal: signal,
@@ -29,6 +33,10 @@ description: "Get a working AI agent in 3 steps — define an agent, generate a
 });
 ```
 
+<Warning>
+**Always spread `chat.toStreamTextOptions()` into your `streamText` call.** It wires up the `prepareStep` callback that drives compaction, mid-turn steering, and background injection — features that silently no-op if the spread is missing. Spread it **first** so any explicit overrides (e.g. a custom `prepareStep`) win.
+</Warning>
+
 <Tip>
 For a **custom** [`UIMessage`](https://sdk.vercel.ai/docs/reference/ai-sdk-core/ui-message) subtype (typed `data-*` parts, tool map, etc.), define the agent with [`chat.withUIMessage<...>().agent({...})`](/ai-chat/types) instead of `chat.agent`.
 </Tip>
