Conversation

@jherr (Contributor) commented Jan 23, 2026

🎯 Changes

βœ… Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

πŸš€ Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features
    • Unified AG‑UI streaming protocol: RUN/TEXT/STEP/TOOL events, run‑scoped IDs, richer RUN_ERROR/RUN_FINISHED and TOOL_CALL lifecycle for clearer streaming and tool interactions.
  • UI
    • New mock chat API and mock‑driven E2E flows; added Grok and OpenRouter options and Gemini 2.5 Pro; DEFAULT_MODEL_OPTION exposed.
  • Documentation
    • Streaming guides and protocol docs rewritten to AG‑UI event terminology and examples.
  • Tests
    • Tests and E2E suites updated/added to validate AG‑UI event sequences, ordering, payloads, and mock scenarios.

✏️ Tip: You can customize this high-level summary in your review settings.


coderabbitai bot commented Jan 23, 2026

πŸ“ Walkthrough


Streaming and adapters were migrated to the AG‑UI Protocol: streams now emit RUN_STARTED, RUN_FINISHED, RUN_ERROR, TEXT_MESSAGE_*, TOOL_CALL_*, STEP_*, and state/custom events carrying run/message/step IDs. Types, converters, SSE formatters, adapters, processors, tests, examples, and tooling were updated accordingly, and the legacy chunk formats were removed.
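As a rough sketch of the event model described above (event names come from this summary; the field shapes here are illustrative assumptions, not the package's actual definitions in types.ts), the AG‑UI events form a discriminated union that consumers can narrow on:

```typescript
// Illustrative subset of the AG-UI event union; the real interfaces in
// packages/typescript/ai/src/types.ts carry more fields (model, timestamp, ...).
type AGUIEvent =
  | { type: 'RUN_STARTED'; runId: string; model: string; timestamp: number }
  | { type: 'TEXT_MESSAGE_START'; messageId: string; runId: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string; content?: string }
  | { type: 'TEXT_MESSAGE_END'; messageId: string; runId: string }
  | { type: 'TOOL_CALL_START'; toolCallId: string; toolName: string; runId: string }
  | { type: 'TOOL_CALL_ARGS'; toolCallId: string; delta: string }
  | { type: 'TOOL_CALL_END'; toolCallId: string; input?: unknown }
  | { type: 'RUN_FINISHED'; runId: string; finishReason: string }
  | { type: 'RUN_ERROR'; runId?: string; error: { code?: string; message: string } }

// A run is expected to end with exactly one terminal event.
function isTerminal(e: AGUIEvent): boolean {
  return e.type === 'RUN_FINISHED' || e.type === 'RUN_ERROR'
}
```

Narrowing on the `type` literal lets a processor handle each lifecycle stage without casts.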

Changes

Cohort / File(s) / Summary

  • Docs & Protocol
    .changeset/ag-ui-events.md, docs/guides/streaming.md, docs/protocol/chunk-definitions.md
    Rename "chunks" to AG‑UI events; document the AGUIEventType set and update examples to RUN_*, TEXT_MESSAGE_*, TOOL_CALL_*, STEP_*, STATE_*, and CUSTOM.
  • Core Types (TS & Python)
    packages/typescript/ai/src/types.ts, packages/python/tanstack-ai/src/tanstack_ai/types.py, packages/python/tanstack-ai/src/tanstack_ai/__init__.py
    Introduce BaseAGUIEvent/AGUIEventType and per-event interfaces/TypedDicts; set StreamChunk = AGUIEvent and re-export the new types.
  • Provider Adapters (TS/Python/PHP)
    packages/typescript/ai-openai/..., packages/typescript/ai-anthropic/..., packages/typescript/ai-gemini/..., packages/typescript/ai-ollama/..., packages/typescript/ai-grok/..., packages/typescript/ai-openrouter/..., packages/python/.../anthropic_adapter.py, packages/php/tanstack-ai/src/StreamChunkConverter.php
    Rework streaming paths to emit AG‑UI event sequences with run/message/step IDs and lifecycle helpers; replace legacy chunk outputs and adjust error/finish mapping.
  • Stream Conversion / Converters
    packages/php/.../StreamChunkConverter.php, packages/python/.../converter.py
    Convert provider deltas to AG‑UI event lists (may emit multiple events per input); convert_error now emits RUN_ERROR and ensures RUN_STARTED precedes it.
  • SSE / Formatting Utilities
    packages/python/.../sse.py, packages/php/tanstack-ai/src/SSEFormatter.php
    format_sse_error/formatError produce RUN_ERROR events that include runId (generated if missing), model, timestamp, and a structured error.code/message.
  • Stream Processing & Tooling (TS & Python)
    packages/typescript/ai/src/activities/chat/..., packages/typescript/ai/src/activities/chat/tools/tool-calls.ts, packages/typescript/ai/src/activities/chat/stream/*, packages/python/tanstack-ai/src/tanstack_ai/chat.py, packages/python/tanstack-ai/src/tanstack_ai/tool_manager.py
    Replace legacy chunk handlers with AG‑UI event handlers; add addToolCallStart/Args/Complete APIs; executeTools accepts RunFinishedEvent; tool-call lifecycle adapted to TOOL_CALL_* events.
  • Stream-to-response & Summarize Paths
    packages/typescript/ai/src/stream-to-response.ts, packages/typescript/ai-*/src/adapters/summarize.ts, packages/typescript/ai-openrouter/src/adapters/summarize.ts
    Aggregate text from TEXT_MESSAGE_CONTENT events; summarize adapters updated to use TEXT_MESSAGE_CONTENT / RUN_FINISHED / RUN_ERROR and map usage.
  • Tests, Smoke & E2E Harnesses
    packages/typescript/*/tests/*, packages/typescript/smoke-tests/*, packages/typescript/smoke-tests/e2e/src/*
    Update tests to assert AG‑UI event sequences and payloads; add a mock SSE endpoint and mock-driven e2e flows; many legacy tests removed and replaced with AG‑UI-focused suites.
  • Examples / UI Changes
    examples/ts-react-chat/src/lib/model-selection.ts, examples/*/model-selection.ts, examples/ts-react-chat/src/routes/*
    Remove localStorage model preference helpers, add DEFAULT_MODEL_OPTION, update model lists (Gemini/Grok), and update usages.
  • Public API / Exports
    packages/typescript/ai-client/src/index.ts, packages/typescript/ai/src/index.ts, packages/typescript/ai/src/activities/chat/stream/index.ts, packages/typescript/ai/src/activities/chat/stream/processor.ts
    Remove the StreamProcessorHandlers export; refactor to event-driven StreamProcessorEvents/StreamProcessorOptions and adjust event signatures.
  • Utilities & Scripts
    scripts/distribute-keys.ts, knip.json, testing/panel/package.json
    Add a distribute-keys script, add scripts/** to the knip ignore list, and add the @tanstack/ai-openrouter dependency to the testing panel.
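The stream-to-response change in the table above amounts to a fold over TEXT_MESSAGE_CONTENT events. A minimal sketch (simplified event shape; the review comments later in this thread suggest the same delta-to-content fallback for adapters that emit cumulative content without deltas):

```typescript
// Hypothetical aggregator: concatenates assistant text from
// TEXT_MESSAGE_CONTENT events, preferring the incremental `delta`
// and falling back to the `content` field when no delta is present.
interface ContentLikeEvent {
  type: string
  delta?: string
  content?: string
}

function aggregateText(events: Array<ContentLikeEvent>): string {
  let text = ''
  for (const e of events) {
    if (e.type === 'TEXT_MESSAGE_CONTENT') {
      text += e.delta ?? e.content ?? ''
    }
  }
  return text
}
```

All non-text lifecycle events (RUN_STARTED, TOOL_CALL_*, RUN_FINISHED) simply pass through untouched.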

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Adapter as Text Adapter
    participant Stream as Event Stream
    participant Processor as Chat Processor
    participant ToolExec as Tool Executor

    Client->>Adapter: start streaming request
    Adapter->>Stream: RUN_STARTED(runId, model, timestamp)
    Stream->>Processor: RUN_STARTED
    Processor->>Processor: init run state

    Adapter->>Stream: TEXT_MESSAGE_START(messageId, runId)
    Stream->>Processor: TEXT_MESSAGE_START

    loop text chunks
        Adapter->>Stream: TEXT_MESSAGE_CONTENT(delta, content?, messageId)
        Stream->>Processor: TEXT_MESSAGE_CONTENT
        Processor->>Processor: accumulate content
    end

    alt tool call
        Adapter->>Stream: TOOL_CALL_START(toolCallId, toolName, runId)
        Stream->>Processor: TOOL_CALL_START
        loop arg chunks
            Adapter->>Stream: TOOL_CALL_ARGS(delta, args?, toolCallId)
            Stream->>Processor: TOOL_CALL_ARGS
            Processor->>Processor: accumulate args
        end
        Adapter->>Stream: TOOL_CALL_END(toolCallId, inputParsed)
        Stream->>Processor: TOOL_CALL_END
        Processor->>ToolExec: execute tool(inputParsed)
        ToolExec-->>Processor: tool result
    end

    Adapter->>Stream: TEXT_MESSAGE_END(messageId, runId)
    Stream->>Processor: TEXT_MESSAGE_END

    alt success
        Adapter->>Stream: RUN_FINISHED(runId, finishReason, usage)
        Stream->>Processor: RUN_FINISHED
        Processor->>Client: finalize response
    else error
        Adapter->>Stream: RUN_ERROR(runId, error{code,message}, timestamp)
        Stream->>Processor: RUN_ERROR
        Processor->>Client: emit error
    end
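The ordering shown in the diagram can be checked mechanically. A hedged sketch (simplified event shape, not the repo's actual processor): a well-formed run opens with RUN_STARTED and closes with exactly one terminal event, which is the invariant several review comments in this thread enforce.

```typescript
// Sketch of a lifecycle check over a finished event stream: RUN_STARTED
// must come first, and exactly one RUN_FINISHED or RUN_ERROR must come last.
type LifecycleEvent = { type: string }

function hasValidLifecycle(events: Array<LifecycleEvent>): boolean {
  if (events.length < 2 || events[0].type !== 'RUN_STARTED') return false
  const terminalCount = events.filter(
    (e) => e.type === 'RUN_FINISHED' || e.type === 'RUN_ERROR',
  ).length
  const last = events[events.length - 1].type
  return terminalCount === 1 && (last === 'RUN_FINISHED' || last === 'RUN_ERROR')
}
```

A check like this makes duplicate-terminal and missing-RUN_STARTED bugs (both flagged in the review below) fail fast in tests.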

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • jherr

Poem

πŸ‡ I nibble bytes and stamp my paw,
Events hop out that you all saw.
RUN_STARTED, TEXT, and TOOLS convene,
STEP then RUN_FINISHED β€” neat and clean.
Legacy trails tucked in my burrow, hurrah!

πŸš₯ Pre-merge checks | ❌ 3
❌ Failed checks (2 warnings, 1 inconclusive)
Check name Status Explanation Resolution
Description check ⚠️ Warning The PR description is incomplete; it contains only a template with checkboxes but no explanation of the changes, motivation, or impact in the 'Changes' section. Fill in the 'Changes' section with a clear description of what AG-UI support entails, why it was added, and how it affects the streaming event system.
Docstring Coverage ⚠️ Warning Docstring coverage is 76.92% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Title check ❓ Inconclusive The title 'AG-UI Support' is vague and generic; it does not specify what AG-UI support means or describe the actual changes being made. Use a more specific and descriptive title that clarifies the primary change, such as 'Migrate streaming events to AG-UI protocol' or 'Implement AG-UI protocol event types for streaming'.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@jherr jherr marked this pull request as ready for review January 23, 2026 02:27

nx-cloud bot commented Jan 23, 2026

View your CI Pipeline Execution β†— for commit 2a7de26

  • nx run-many --targets=build --exclude=examples/**: ✅ Succeeded (1s)

☁️ Nx Cloud last updated this comment at 2026-01-29 19:43:04 UTC


pkg-pr-new bot commented Jan 23, 2026


@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@244

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@244

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@244

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@244

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@244

@tanstack/ai-grok

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-grok@244

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@244

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@244

@tanstack/ai-openrouter

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openrouter@244

@tanstack/ai-preact

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-preact@244

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@244

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@244

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@244

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@244

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@244

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@244

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@244

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/preact-ai-devtools@244

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@244

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@244

commit: 2a7de26


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (8)
packages/typescript/smoke-tests/adapters/src/harness.ts (1)

199-236: Fallback to content when delta is missing.

TEXT_MESSAGE_CONTENT is aggregated using delta only. If an adapter emits content-only chunks, fullResponse and the draft will miss text.

πŸ”§ Proposed fix
-      const delta = chunk.delta || ''
+      const delta = chunk.delta ?? chunk.content ?? ''
       fullResponse += delta

       if (!assistantDraft) {
         assistantDraft = {
           role: 'assistant',
-          content: chunk.content || '',
+          content: chunk.content ?? delta ?? '',
           toolCalls: [],
         }
       } else {
         assistantDraft.content = (assistantDraft.content || '') + delta
       }
packages/typescript/ai-openai/src/adapters/text.ts (1)

626-645: Ensure RUN_STARTED precedes RUN_ERROR on early stream failure.

If the iterator throws before the first chunk, the catch block emits RUN_ERROR without a prior RUN_STARTED, breaking lifecycle ordering for consumers.

πŸ”§ Proposed fix
     } catch (error: unknown) {
       const err = error as Error & { code?: string }
       console.log(
         '[OpenAI Adapter] Stream ended with error. Event type summary:',
         {
           totalChunks: chunkCount,
           error: err.message,
         },
       )
+      if (!hasEmittedRunStarted) {
+        hasEmittedRunStarted = true
+        yield {
+          type: 'RUN_STARTED',
+          runId,
+          model: options.model,
+          timestamp,
+        }
+      }
       yield {
         type: 'RUN_ERROR',
         runId,
         model: options.model,
         timestamp,
         error: {
           message: err.message || 'Unknown error occurred',
           code: err.code,
         },
       }
     }
packages/typescript/ai-gemini/src/adapters/text.ts (2)

117-128: RUN_ERROR event missing runId field.

The error handler in chatStream emits a RUN_ERROR event but doesn't include runId. Since this catch block is reached before any streaming occurs, runId hasn't been generated yet. However, for consistency with the RunErrorEvent interface (which has optional runId), consider generating a runId even for pre-stream errors:

πŸ”§ Suggested fix: Generate runId for error events
   async *chatStream(
     options: TextOptions<GeminiTextProviderOptions>,
   ): AsyncIterable<StreamChunk> {
     const mappedOptions = this.mapCommonOptionsToGemini(options)
+    const runId = generateId(this.name)

     try {
       const result =
         await this.client.models.generateContentStream(mappedOptions)

-      yield* this.processStreamChunks(result, options.model)
+      yield* this.processStreamChunks(result, options.model, runId)
     } catch (error) {
       const timestamp = Date.now()
       yield {
         type: 'RUN_ERROR',
+        runId,
         model: options.model,
         timestamp,

368-421: Duplicate TOOL_CALL_END events may be emitted for UNEXPECTED_TOOL_CALL.

When finishReason === FinishReason.UNEXPECTED_TOOL_CALL, tool calls are added to toolCallMap with started: true (line 387), then TOOL_CALL_START and TOOL_CALL_END are emitted (lines 391-418). However, the loop at lines 424-441 iterates over all entries in toolCallMap and emits TOOL_CALL_END again, causing duplicate events for these tool calls.

πŸ› Proposed fix: Track which tool calls have already emitted TOOL_CALL_END
+        const endedToolCalls = new Set<string>()
+
         if (finishReason === FinishReason.UNEXPECTED_TOOL_CALL) {
           if (chunk.candidates[0].content?.parts) {
             for (const part of chunk.candidates[0].content.parts) {
               const functionCall = part.functionCall
               if (functionCall) {
                 const toolCallId =
                   functionCall.id ||
                   `${functionCall.name}_${Date.now()}_${nextToolIndex}`
                 // ... existing code ...

                 yield {
                   type: 'TOOL_CALL_END',
                   toolCallId,
                   toolName: functionCall.name || '',
                   model,
                   timestamp,
                   input: parsedInput,
                 }
+                endedToolCalls.add(toolCallId)
               }
             }
           }
         }

         // Emit TOOL_CALL_END for all tracked tool calls
         for (const [toolCallId, toolCallData] of toolCallMap.entries()) {
+          if (endedToolCalls.has(toolCallId)) {
+            continue
+          }
           let parsedInput: unknown = {}
packages/typescript/ai-anthropic/src/adapters/text.ts (1)

613-671: Prevent duplicate terminal events (RUN_FINISHED/RUN_ERROR).

The Anthropic Messages API always emits message_delta with stop_reason before message_stop. Both handlers currently emit terminal events, causing duplicates that break downstream state machines expecting a single terminal event.

Implement the suggested tracking flag to guard terminal event emissions:

Suggested fix
-    let hasEmittedRunStarted = false
-    let hasEmittedTextMessageStart = false
+    let hasEmittedRunStarted = false
+    let hasEmittedTextMessageStart = false
+    let hasEmittedRunTerminal = false
@@
-        } else if (event.type === 'message_stop') {
-          yield {
-            type: 'RUN_FINISHED',
-            runId,
-            model,
-            timestamp,
-            finishReason: 'stop',
-          }
+        } else if (event.type === 'message_stop') {
+          if (!hasEmittedRunTerminal) {
+            hasEmittedRunTerminal = true
+            yield {
+              type: 'RUN_FINISHED',
+              runId,
+              model,
+              timestamp,
+              finishReason: 'stop',
+            }
+          }
         } else if (event.type === 'message_delta') {
           if (event.delta.stop_reason) {
             switch (event.delta.stop_reason) {
               case 'tool_use': {
-                yield {
-                  type: 'RUN_FINISHED',
-                  runId,
-                  model,
-                  timestamp,
-                  finishReason: 'tool_calls',
-                  usage: {
-                    promptTokens: event.usage.input_tokens || 0,
-                    completionTokens: event.usage.output_tokens || 0,
-                    totalTokens:
-                      (event.usage.input_tokens || 0) +
-                      (event.usage.output_tokens || 0),
-                  },
-                }
+                if (!hasEmittedRunTerminal) {
+                  hasEmittedRunTerminal = true
+                  yield {
+                    type: 'RUN_FINISHED',
+                    runId,
+                    model,
+                    timestamp,
+                    finishReason: 'tool_calls',
+                    usage: {
+                      promptTokens: event.usage.input_tokens || 0,
+                      completionTokens: event.usage.output_tokens || 0,
+                      totalTokens:
+                        (event.usage.input_tokens || 0) +
+                        (event.usage.output_tokens || 0),
+                    },
+                  }
+                }
                 break
               }
               case 'max_tokens': {
-                yield {
-                  type: 'RUN_ERROR',
-                  runId,
-                  model,
-                  timestamp,
-                  error: {
-                    message:
-                      'The response was cut off because the maximum token limit was reached.',
-                    code: 'max_tokens',
-                  },
-                }
+                if (!hasEmittedRunTerminal) {
+                  hasEmittedRunTerminal = true
+                  yield {
+                    type: 'RUN_ERROR',
+                    runId,
+                    model,
+                    timestamp,
+                    error: {
+                      message:
+                        'The response was cut off because the maximum token limit was reached.',
+                      code: 'max_tokens',
+                    },
+                  }
+                }
                 break
               }
               default: {
-                yield {
-                  type: 'RUN_FINISHED',
-                  runId,
-                  model,
-                  timestamp,
-                  finishReason: 'stop',
-                  usage: {
-                    promptTokens: event.usage.input_tokens || 0,
-                    completionTokens: event.usage.output_tokens || 0,
-                    totalTokens:
-                      (event.usage.input_tokens || 0) +
-                      (event.usage.output_tokens || 0),
-                  },
-                }
+                if (!hasEmittedRunTerminal) {
+                  hasEmittedRunTerminal = true
+                  yield {
+                    type: 'RUN_FINISHED',
+                    runId,
+                    model,
+                    timestamp,
+                    finishReason: 'stop',
+                    usage: {
+                      promptTokens: event.usage.input_tokens || 0,
+                      completionTokens: event.usage.output_tokens || 0,
+                      totalTokens:
+                        (event.usage.input_tokens || 0) +
+                        (event.usage.output_tokens || 0),
+                    },
+                  }
+                }
               }
             }
           }
         }
packages/python/tanstack-ai/src/tanstack_ai/anthropic_adapter.py (2)

320-388: Prevent duplicate RUN_FINISHED / RUN_ERROR emissions.

message_delta emits terminal events and message_stop always emits RUN_FINISHED, which can duplicate finishes and even emit RUN_FINISHED after RUN_ERROR. Track completion to avoid double-terminating a run.

πŸ› Suggested fix
-                        if hasattr(delta, "stop_reason") and delta.stop_reason:
+                        if hasattr(delta, "stop_reason") and delta.stop_reason and not run_finished:
                             usage = None
                             if hasattr(event, "usage") and event.usage:
                                 usage = {
                                     "promptTokens": event.usage.input_tokens,
                                     "completionTokens": event.usage.output_tokens,
                                     "totalTokens": event.usage.input_tokens
                                     + event.usage.output_tokens,
                                 }
 
                             # Map Anthropic stop_reason to TanStack format
                             if delta.stop_reason == "max_tokens":
+                                run_finished = True
                                 yield RunErrorEvent(
                                     type="RUN_ERROR",
                                     runId=run_id,
                                     model=options.model,
                                     timestamp=timestamp,
                                     error={
                                         "message": "The response was cut off because the maximum token limit was reached.",
                                         "code": "max_tokens",
                                     },
                                 )
                             else:
                                 finish_reason = {
                                     "end_turn": "stop",
                                     "tool_use": "tool_calls",
                                 }.get(delta.stop_reason, "stop")
 
+                                run_finished = True
                                 yield RunFinishedEvent(
                                     type="RUN_FINISHED",
                                     runId=run_id,
                                     model=options.model,
                                     timestamp=timestamp,
                                     finishReason=finish_reason,
                                     usage=usage,
                                 )
@@
-                        yield RunFinishedEvent(
-                            type="RUN_FINISHED",
-                            runId=run_id,
-                            model=options.model,
-                            timestamp=int(time.time() * 1000),
-                            finishReason=finish_reason,
-                            usage=usage,
-                        )
+                        if not run_finished:
+                            run_finished = True
+                            yield RunFinishedEvent(
+                                type="RUN_FINISHED",
+                                runId=run_id,
+                                model=options.model,
+                                timestamp=int(time.time() * 1000),
+                                finishReason=finish_reason,
+                                usage=usage,
+                            )

Add the flag alongside the other lifecycle tracking variables:

run_finished = False

390-401: Emit RUN_STARTED before RUN_ERROR on early failures.

If an exception occurs before the first stream event, the current path emits RUN_ERROR without a preceding RUN_STARTED.

πŸ› Suggested fix
-        except Exception as e:
-            # Emit RUN_ERROR
-            yield RunErrorEvent(
-                type="RUN_ERROR",
-                runId=run_id,
-                model=options.model,
-                timestamp=int(time.time() * 1000),
-                error={
-                    "message": str(e),
-                    "code": getattr(e, "code", None),
-                },
-            )
+        except Exception as e:
+            ts = int(time.time() * 1000)
+            if not has_emitted_run_started:
+                has_emitted_run_started = True
+                yield RunStartedEvent(
+                    type="RUN_STARTED",
+                    runId=run_id,
+                    model=options.model,
+                    timestamp=ts,
+                    threadId=None,
+                )
+            yield RunErrorEvent(
+                type="RUN_ERROR",
+                runId=run_id,
+                model=options.model,
+                timestamp=ts,
+                error={
+                    "message": str(e),
+                    "code": getattr(e, "code", None),
+                },
+            )
packages/python/tanstack-ai/src/tanstack_ai/types.py (1)

326-333: Limit BaseStreamChunk.type to legacy values.

BaseStreamChunk is the base for legacy chunks, but type: StreamChunkType allows AG-UI values on legacy shapes. Tighten it to LegacyStreamChunkType to avoid mixed typing.

βœ… Suggested fix
-    type: StreamChunkType
+    type: LegacyStreamChunkType
πŸ€– Fix all issues with AI agents
In `@docs/protocol/chunk-definitions.md`:
- Around line 2-41: The AG-UI event list mentions STATE_SNAPSHOT, STATE_DELTA,
and CUSTOM but no shapes are defined; update the docs by either adding explicit
interface/type definitions for these events (e.g., StateSnapshotEvent,
StateDeltaEvent, CustomAGUIEvent that extend BaseAGUIEvent and include fields
like state: unknown, delta: unknown, source?: string, and payload?: unknown) or
clearly mark them as "reserved/future" with example usage and minimal required
fields (type, timestamp, model, rawEvent) so readers know expected structure;
reference BaseAGUIEvent and AGUIEventType when adding the new sections to keep
the schema consistent.

In `@packages/php/tanstack-ai/src/StreamChunkConverter.php`:
- Around line 226-263: The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.

In `@packages/typescript/ai-gemini/src/adapters/text.ts`:
- Around line 248-255: The STEP_FINISHED yield uses a fallback expression
"stepId || generateId(this.name)" even though stepId must have been set by
STEP_STARTED; replace the fallback with a non-null assertion on stepId (e.g.,
use stepId! in the STEP_FINISHED object) so the code expresses the invariant and
avoids silently generating a new id, and ensure the change is made in the yield
that produces type: 'STEP_FINISHED' (referencing the stepId and generateId
symbols and the surrounding STEP_STARTED/STEP_FINISHED logic).

In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 336-345: The STEP_FINISHED emission currently falls back to
generateId('step') when stepId is null which can create inconsistent IDs; update
the emission to rely on the fact STEP_STARTED sets stepId and remove the
fallback by using a non-null assertion (stepId!) or otherwise assert/throw if
stepId is missing so STEP_FINISHED always uses the same stepId set by
STEP_STARTED (refer to STEP_FINISHED, STEP_STARTED, stepId, generateId, and
chunk.message.thinking in the surrounding code).

In `@packages/typescript/ai-openai/src/adapters/summarize.ts`:
- Around line 65-87: The SummarizationResult.id stays empty for AG-UI streams
because only legacy 'content' sets id; update the logic in summarize.ts so that
when handling chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId
(or chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set
id = chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.

In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 584-589: In handleTextMessageContentEvent, guard explicitly
against undefined instead of using if (chunk.content) so empty-string content
("") is not treated as absent; change the condition to check chunk.content !==
undefined (or typeof chunk.content !== "undefined") and assign
this.accumulatedContent = chunk.content when present, otherwise append
chunk.delta; also ensure this.accumulatedContent is initialized to an empty
string before appending to avoid NaN/undefined concatenation (references:
handleTextMessageContentEvent, TextMessageContentEvent, this.accumulatedContent,
chunk.content, chunk.delta).

In `@packages/typescript/ai/src/stream-to-response.ts`:
- Around line 29-33: The streamToText handler currently only appends chunk.delta
for TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content
without delta; update the logic in streamToText (the branch handling chunk.type
=== 'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.

In `@packages/typescript/smoke-tests/adapters/src/harness.ts`:
- Around line 268-326: The TOOL_CALL_END branch is leaving entries in
toolCallsInProgress which can leak stale args; inside the TOOL_CALL_END handling
(the else if block checking chunk.type === 'TOOL_CALL_END') remove the completed
entry from toolCallsInProgress (call toolCallsInProgress.delete(id) using the id
local variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
🧹 Nitpick comments (5)
packages/typescript/ai-ollama/src/adapters/text.ts (1)

212-257: Tool call handling emits TOOL_CALL_END immediately after TOOL_CALL_START without TOOL_CALL_ARGS.

The handleToolCall function emits TOOL_CALL_START followed immediately by TOOL_CALL_END. This differs from the Gemini adapter which emits TOOL_CALL_ARGS events between start and end.

If Ollama provides tool arguments in a single chunk (non-streaming), this is acceptable. However, for consistency with the AG-UI protocol and other adapters, consider emitting a TOOL_CALL_ARGS event with the full arguments before TOOL_CALL_END:

♻️ Suggested addition of TOOL_CALL_ARGS event
         // Emit TOOL_CALL_START if not already emitted for this tool call
         if (!toolCallsEmitted.has(toolCallId)) {
           toolCallsEmitted.add(toolCallId)
           events.push({
             type: 'TOOL_CALL_START',
             toolCallId,
             toolName: actualToolCall.function.name || '',
             model: chunk.model,
             timestamp,
             index: actualToolCall.function.index,
           })
         }

         // Parse input
         let parsedInput: unknown = {}
         const argsStr =
           typeof actualToolCall.function.arguments === 'string'
             ? actualToolCall.function.arguments
             : JSON.stringify(actualToolCall.function.arguments)
         try {
           parsedInput = JSON.parse(argsStr)
         } catch {
           parsedInput = actualToolCall.function.arguments
         }

+        // Emit TOOL_CALL_ARGS with full arguments
+        events.push({
+          type: 'TOOL_CALL_ARGS',
+          toolCallId,
+          model: chunk.model,
+          timestamp,
+          delta: argsStr,
+          args: argsStr,
+        })
+
         // Emit TOOL_CALL_END
         events.push({
           type: 'TOOL_CALL_END',
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)

300-331: Test assertions correctly validate AG-UI event sequence.

The updated assertions properly verify the new event lifecycle: RUN_STARTED → TEXT_MESSAGE_START → TEXT_MESSAGE_CONTENT (×N) → TEXT_MESSAGE_END → RUN_FINISHED. Using toMatchObject allows for flexible matching while validating essential fields.

Consider adding test cases for:

  1. Tool call event sequence (TOOL_CALL_START → TOOL_CALL_ARGS → TOOL_CALL_END)
  2. Error scenarios (RUN_ERROR events)
  3. Thinking/reasoning flow (STEP_STARTED → STEP_FINISHED)
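One way to sketch the first suggested test case, under assumed simplified event shapes (the field names follow the AG-UI events discussed in this PR, but `isValidToolCallLifecycle` is a hypothetical helper, not part of the test suite):

```typescript
// Minimal lifecycle checker: every tool call must follow
// TOOL_CALL_START -> (TOOL_CALL_ARGS)* -> TOOL_CALL_END, per toolCallId.
type ToolEvent =
  | { type: 'TOOL_CALL_START'; toolCallId: string; toolName: string }
  | { type: 'TOOL_CALL_ARGS'; toolCallId: string; delta: string }
  | { type: 'TOOL_CALL_END'; toolCallId: string; input?: unknown }

function isValidToolCallLifecycle(events: Array<ToolEvent>): boolean {
  const phase = new Map<string, 'open' | 'ended'>()
  for (const ev of events) {
    const current = phase.get(ev.toolCallId)
    if (ev.type === 'TOOL_CALL_START') {
      if (current !== undefined) return false // duplicate start for the same id
      phase.set(ev.toolCallId, 'open')
    } else if (current !== 'open') {
      return false // ARGS or END before START, or after END
    } else if (ev.type === 'TOOL_CALL_END') {
      phase.set(ev.toolCallId, 'ended')
    }
  }
  // Every started call must be closed
  return [...phase.values()].every((p) => p === 'ended')
}
```

A test could then collect chunks from the adapter stream, filter to the TOOL_CALL_* types, and assert the filtered sequence passes this check.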
packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (2)

72-80: Silent no-op if tool call not found in addToolCallArgsEvent.

If addToolCallArgsEvent is called before addToolCallStartEvent (out-of-order events), the arguments are silently dropped. Consider logging a warning or throwing for debugging purposes.

♻️ Optional: Add warning for missing tool call
   addToolCallArgsEvent(event: ToolCallArgsEvent): void {
     // Find the tool call by ID
     for (const [, toolCall] of this.toolCallsMap.entries()) {
       if (toolCall.id === event.toolCallId) {
         toolCall.function.arguments += event.delta
-        break
+        return
       }
     }
+    // Tool call not found - this shouldn't happen in normal flow
+    console.warn(`TOOL_CALL_ARGS received for unknown toolCallId: ${event.toolCallId}`)
   }

240-258: TOOL_CALL_END event missing input field when emitted from executeTools.

The TOOL_CALL_END event emitted at line 241-248 includes result but not input. According to the ToolCallEndEvent interface in types.ts, input is an optional field that should contain the final parsed input arguments. For consistency with adapter emissions, consider including it:

♻️ Add input field to TOOL_CALL_END event
       if (finishEvent.type === 'RUN_FINISHED') {
+        let parsedInput: unknown
+        try {
+          parsedInput = JSON.parse(toolCall.function.arguments)
+        } catch {
+          parsedInput = undefined
+        }
         yield {
           type: 'TOOL_CALL_END',
           toolCallId: toolCall.id,
           toolName: toolCall.function.name,
           model: finishEvent.model,
           timestamp: Date.now(),
+          input: parsedInput,
           result: toolResultContent,
         }
packages/python/tanstack-ai/src/tanstack_ai/converter.py (1)

391-407: Drop the unused loop index to satisfy lint.

Ruff flags the loop index as unused. You can iterate over values directly.

♻️ Suggested tweak
-            for tool_index, tool_call in self.tool_calls_map.items():
+            for tool_call in self.tool_calls_map.values():

Comment on lines +226 to +263
} elseif ($eventType === 'content_block_stop') {
// Content block completed
$toolCall = $this->toolCallsMap[$this->currentToolIndex] ?? null;
if ($toolCall) {
// If tool call wasn't started yet (no args), start it now
if (!$toolCall['started']) {
$toolCall['started'] = true;
$this->toolCallsMap[$this->currentToolIndex] = $toolCall;

$chunks[] = [
'type' => 'TOOL_CALL_START',
'toolCallId' => $toolCall['id'],
'toolName' => $toolCall['name'],
'model' => $this->model,
'timestamp' => $this->timestamp,
'toolCall' => [
'id' => $toolCall['id'],
'type' => 'function',
'function' => [
'name' => $toolCall['name'],
'arguments' => $partialJson // Incremental JSON
]
],
'index' => $this->currentToolIndex
];
}

// Parse input and emit TOOL_CALL_END
$parsedInput = [];
if (!empty($toolCall['input'])) {
try {
$parsedInput = json_decode($toolCall['input'], true) ?? [];
} catch (\Exception $e) {
$parsedInput = [];
}
}

$chunks[] = [
'type' => 'TOOL_CALL_END',
'toolCallId' => $toolCall['id'],
'toolName' => $toolCall['name'],
'model' => $this->model,
'timestamp' => $this->timestamp,
'input' => $parsedInput
];
}

⚠️ Potential issue | 🟠 Major

Prevent duplicate TOOL_CALL_END emissions on later block stops.

content_block_stop reuses $currentToolIndex without clearing the tool call entry. If additional blocks occur after a tool_use block, TOOL_CALL_END can be emitted multiple times for the same call.

🔧 Proposed fix
                 $chunks[] = [
                     'type' => 'TOOL_CALL_END',
                     'toolCallId' => $toolCall['id'],
                     'toolName' => $toolCall['name'],
                     'model' => $this->model,
                     'timestamp' => $this->timestamp,
                     'input' => $parsedInput
                 ];
+
+                unset($this->toolCallsMap[$this->currentToolIndex]);
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
} elseif ($eventType === 'content_block_stop') {
// Content block completed
$toolCall = $this->toolCallsMap[$this->currentToolIndex] ?? null;
if ($toolCall) {
// If tool call wasn't started yet (no args), start it now
if (!$toolCall['started']) {
$toolCall['started'] = true;
$this->toolCallsMap[$this->currentToolIndex] = $toolCall;
$chunks[] = [
'type' => 'TOOL_CALL_START',
'toolCallId' => $toolCall['id'],
'toolName' => $toolCall['name'],
'model' => $this->model,
'timestamp' => $this->timestamp,
'index' => $this->currentToolIndex
];
}
// Parse input and emit TOOL_CALL_END
$parsedInput = [];
if (!empty($toolCall['input'])) {
try {
$parsedInput = json_decode($toolCall['input'], true) ?? [];
} catch (\Exception $e) {
$parsedInput = [];
}
}
$chunks[] = [
'type' => 'TOOL_CALL_END',
'toolCallId' => $toolCall['id'],
'toolName' => $toolCall['name'],
'model' => $this->model,
'timestamp' => $this->timestamp,
'input' => $parsedInput
];
unset($this->toolCallsMap[$this->currentToolIndex]);
}
🤖 Prompt for AI Agents
In `@packages/php/tanstack-ai/src/StreamChunkConverter.php` around lines 226 -
263, The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.

Comment on lines 248 to 255
 yield {
-  type: 'thinking',
-  content: part.text,
-  delta: part.text,
-  id: generateId(this.name),
+  type: 'STEP_FINISHED',
+  stepId: stepId || generateId(this.name),
+  model,
+  timestamp,
+  delta: part.text,
+  content: part.text,
 }

⚠️ Potential issue | 🟡 Minor

Same stepId fallback issue as Ollama adapter.

Line 250 uses stepId || generateId(this.name) as a fallback, but stepId should always be set by STEP_STARTED before STEP_FINISHED is yielded. Consider using a non-null assertion for consistency.

🤖 Prompt for AI Agents
In `@packages/typescript/ai-gemini/src/adapters/text.ts` around lines 248 - 255,
The STEP_FINISHED yield uses a fallback expression "stepId ||
generateId(this.name)" even though stepId must have been set by STEP_STARTED;
replace the fallback with a non-null assertion on stepId (e.g., use stepId! in
the STEP_FINISHED object) so the code expresses the invariant and avoids
silently generating a new id, and ensure the change is made in the yield that
produces type: 'STEP_FINISHED' (referencing the stepId and generateId symbols
and the surrounding STEP_STARTED/STEP_FINISHED logic).

Comment on lines 336 to 345
 accumulatedReasoning += chunk.message.thinking
 yield {
-  type: 'thinking',
-  id: responseId,
+  type: 'STEP_FINISHED',
+  stepId: stepId || generateId('step'),
   model: chunk.model,
   timestamp,
-  content: accumulatedReasoning,
   delta: chunk.message.thinking,
+  content: accumulatedReasoning,
 }
}

⚠️ Potential issue | 🟡 Minor

Potential issue: stepId fallback generates new ID on each STEP_FINISHED.

On line 339, if stepId is null, a new ID is generated via generateId('step'). However, stepId should always be set by the STEP_STARTED emission on line 326. The fallback || generateId('step') suggests defensive coding, but if reached, it would create inconsistent step IDs across events.

🔧 Suggested fix: Assert stepId is set or remove fallback
         yield {
           type: 'STEP_FINISHED',
-          stepId: stepId || generateId('step'),
+          stepId: stepId!,
           model: chunk.model,
           timestamp,
           delta: chunk.message.thinking,
           content: accumulatedReasoning,
         }

The ! assertion is safe here because STEP_FINISHED is only yielded inside the if (chunk.message.thinking) block which always sets stepId via STEP_STARTED first.

πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
accumulatedReasoning += chunk.message.thinking
yield {
type: 'STEP_FINISHED',
stepId: stepId!,
model: chunk.model,
timestamp,
delta: chunk.message.thinking,
content: accumulatedReasoning,
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 336 - 345,
The STEP_FINISHED emission currently falls back to generateId('step') when
stepId is null which can create inconsistent IDs; update the emission to rely on
the fact STEP_STARTED sets stepId and remove the fallback by using a non-null
assertion (stepId!) or otherwise assert/throw if stepId is missing so
STEP_FINISHED always uses the same stepId set by STEP_STARTED (refer to
STEP_FINISHED, STEP_STARTED, stepId, generateId, and chunk.message.thinking in
the surrounding code).

Comment on lines 65 to 87
// Legacy content event
if (chunk.type === 'content') {
summary = chunk.content
id = chunk.id
model = chunk.model
}
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}
// Legacy done event
if (chunk.type === 'done' && chunk.usage) {
usage = chunk.usage
}
// AG-UI RUN_FINISHED event
else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
usage = chunk.usage
}

⚠️ Potential issue | 🟡 Minor

Populate id for AG-UI chunks.

For AG-UI streams, id remains '' because only legacy content sets it. Consider mapping from messageId (or runId on RUN_FINISHED) to keep SummarizationResult.id meaningful.

🔧 Proposed fix
       if (chunk.type === 'content') {
         summary = chunk.content
         id = chunk.id
         model = chunk.model
       }
       // AG-UI TEXT_MESSAGE_CONTENT event
       else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+        if (!id && 'messageId' in chunk) {
+          id = chunk.messageId
+        }
         if (chunk.content) {
           summary = chunk.content
         } else {
           summary += chunk.delta
         }
         model = chunk.model || model
       }
       // Legacy done event
       if (chunk.type === 'done' && chunk.usage) {
         usage = chunk.usage
       }
       // AG-UI RUN_FINISHED event
       else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
+        if (!id && 'runId' in chunk) {
+          id = chunk.runId
+        }
         usage = chunk.usage
       }
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Legacy content event
if (chunk.type === 'content') {
summary = chunk.content
id = chunk.id
model = chunk.model
}
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
if (!id && 'messageId' in chunk) {
id = chunk.messageId
}
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}
// Legacy done event
if (chunk.type === 'done' && chunk.usage) {
usage = chunk.usage
}
// AG-UI RUN_FINISHED event
else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
if (!id && 'runId' in chunk) {
id = chunk.runId
}
usage = chunk.usage
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai-openai/src/adapters/summarize.ts` around lines 65 -
87, The SummarizationResult.id stays empty for AG-UI streams because only legacy
'content' sets id; update the logic in summarize.ts so that when handling
chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId (or
chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set id =
chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.

Comment on lines +584 to +589
private handleTextMessageContentEvent(chunk: TextMessageContentEvent): void {
if (chunk.content) {
this.accumulatedContent = chunk.content
} else {
this.accumulatedContent += chunk.delta
}

⚠️ Potential issue | 🟡 Minor

Guard against empty-string content values when accumulating.

if (chunk.content) treats "" as absent and can mis-accumulate content. Prefer an explicit undefined check.

🩹 Suggested fix
-    if (chunk.content) {
+    if (chunk.content !== undefined) {
       this.accumulatedContent = chunk.content
     } else {
       this.accumulatedContent += chunk.delta
     }
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
private handleTextMessageContentEvent(chunk: TextMessageContentEvent): void {
if (chunk.content !== undefined) {
this.accumulatedContent = chunk.content
} else {
this.accumulatedContent += chunk.delta
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 584 - 589,
In handleTextMessageContentEvent, guard explicitly against undefined instead of
using if (chunk.content) so empty-string content ("") is not treated as absent;
change the condition to check chunk.content !== undefined (or typeof
chunk.content !== "undefined") and assign this.accumulatedContent =
chunk.content when present, otherwise append chunk.delta; also ensure
this.accumulatedContent is initialized to an empty string before appending to
avoid NaN/undefined concatenation (references: handleTextMessageContentEvent,
TextMessageContentEvent, this.accumulatedContent, chunk.content, chunk.delta).

Comment on lines 29 to 33
// Handle both AG-UI TEXT_MESSAGE_CONTENT and legacy 'content' chunks
if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
accumulatedContent += chunk.delta
} else if (chunk.type === 'content' && chunk.delta) {
accumulatedContent += chunk.delta

⚠️ Potential issue | 🟡 Minor

Handle TEXT_MESSAGE_CONTENT when delta is absent.

If an adapter emits TEXT_MESSAGE_CONTENT with only content (no delta), streamToText returns an empty string. Consider falling back to content to preserve non-delta events.

🔧 Proposed fix
-    if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
-      accumulatedContent += chunk.delta
-    } else if (chunk.type === 'content' && chunk.delta) {
-      accumulatedContent += chunk.delta
-    }
+    if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+      const piece = chunk.delta ?? chunk.content
+      if (piece) accumulatedContent += piece
+    } else if (chunk.type === 'content') {
+      const piece = chunk.delta ?? chunk.content
+      if (piece) accumulatedContent += piece
+    }
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Handle both AG-UI TEXT_MESSAGE_CONTENT and legacy 'content' chunks
if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
const piece = chunk.delta ?? chunk.content
if (piece) accumulatedContent += piece
} else if (chunk.type === 'content') {
const piece = chunk.delta ?? chunk.content
if (piece) accumulatedContent += piece
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai/src/stream-to-response.ts` around lines 29 - 33, The
streamToText handler currently only appends chunk.delta for
TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content without
delta; update the logic in streamToText (the branch handling chunk.type ===
'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.

Comment on lines 268 to 326
// AG-UI TOOL_CALL_START event
else if (chunk.type === 'TOOL_CALL_START') {
const id = chunk.toolCallId
toolCallsInProgress.set(id, {
name: chunk.toolName,
args: '',
})

if (!assistantDraft) {
assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
}

chunkData.toolCallId = chunk.toolCallId
chunkData.toolName = chunk.toolName
}
// AG-UI TOOL_CALL_ARGS event
else if (chunk.type === 'TOOL_CALL_ARGS') {
const id = chunk.toolCallId
const existing = toolCallsInProgress.get(id)
if (existing) {
existing.args = chunk.args || existing.args + (chunk.delta || '')
}

chunkData.toolCallId = chunk.toolCallId
chunkData.delta = chunk.delta
chunkData.args = chunk.args
}
// AG-UI TOOL_CALL_END event
else if (chunk.type === 'TOOL_CALL_END') {
const id = chunk.toolCallId
const inProgress = toolCallsInProgress.get(id)
const name = chunk.toolName || inProgress?.name || ''
const args =
inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')

// Add to legacy toolCallMap for compatibility
toolCallMap.set(id, {
id,
name,
arguments: args,
})

// Add to assistant draft
if (!assistantDraft) {
assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
}
assistantDraft.toolCalls?.push({
id,
type: 'function',
function: {
name,
arguments: args,
},
})

chunkData.toolCallId = chunk.toolCallId
chunkData.toolName = chunk.toolName
chunkData.input = chunk.input
}

⚠️ Potential issue | 🟡 Minor

Clear in-progress tool call state after TOOL_CALL_END.

toolCallsInProgress entries persist after completion. If a toolCallId is reused or multiple tool calls occur, stale args can leak.

🔧 Proposed fix
     else if (chunk.type === 'TOOL_CALL_END') {
       const id = chunk.toolCallId
       const inProgress = toolCallsInProgress.get(id)
       const name = chunk.toolName || inProgress?.name || ''
       const args =
         inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')

+      toolCallsInProgress.delete(id)
🤖 Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 268 -
326, The TOOL_CALL_END branch is leaving entries in toolCallsInProgress which
can leak stale args; inside the TOOL_CALL_END handling (the else if block
checking chunk.type === 'TOOL_CALL_END') remove the completed entry from
toolCallsInProgress (call toolCallsInProgress.delete(id) using the id local
variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.

@AlemTuzlak

So a few questions on my end:

  1. Why are we keeping the legacy stuff in? Why not just throw it out?
  2. Does the client side work without any code changes on the ai-client part? I'd expect that you'd need changes there as well.
  3. It seems that not all adapters were changed; openrouter and grok are missing the changes, if I'm seeing this correctly.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)

379-394: Inconsistent logging and incomplete event closure on error path.

Two concerns:

  1. Line 381: Uses console.log instead of console.error for an error condition, inconsistent with the console.error usage in the chatStream catch block (lines 115-118).

  2. Event lifecycle gap: When an error occurs mid-stream, TEXT_MESSAGE_END and TOOL_CALL_END events are not emitted for in-progress messages/tool calls before RUN_ERROR. This leaves consumers with unclosed events, whereas the normal completion path properly closes all events before RUN_FINISHED.

🔧 Suggested fix for logging consistency and event closure
    } catch (error: unknown) {
      const err = error as Error & { code?: string }
-     console.log('[Grok Adapter] Stream ended with error:', err.message)
+     console.error('[Grok Adapter] Stream ended with error:', err.message)
+
+     // Close any in-progress tool calls
+     for (const [, toolCall] of toolCallsInProgress) {
+       if (toolCall.started) {
+         yield {
+           type: 'TOOL_CALL_END',
+           toolCallId: toolCall.id,
+           toolName: toolCall.name,
+           model: options.model,
+           timestamp,
+           input: {},
+           error: true,
+         }
+       }
+     }
+
+     // Close text message if started
+     if (hasEmittedTextMessageStart) {
+       yield {
+         type: 'TEXT_MESSAGE_END',
+         messageId: aguiState.messageId,
+         model: options.model,
+         timestamp,
+         error: true,
+       }
+     }

      // Emit AG-UI RUN_ERROR
      yield {
        type: 'RUN_ERROR',
🤖 Fix all issues with AI agents
In `@packages/typescript/ai-grok/src/adapters/summarize.ts`:
- Around line 71-79: The RUN_FINISHED event branch currently doesn't set the
SummarizationResult.id, so when only AG-UI events are emitted the id stays
empty; update the RUN_FINISHED handler (the branch checking chunk.type ===
'RUN_FINISHED') to assign id = chunk.runId (or id = chunk.runId || id) similar
to how the legacy 'done' handler sets id = chunk.id, ensuring the final result
object produced by this adapter includes the runId. Also verify the variable
used to hold the result id (the same id referenced later when returning the
SummarizationResult) is updated in that branch so TEXT_MESSAGE_CONTENT handling
remains unchanged.
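A minimal sketch of that fix, using simplified chunk shapes (the real adapter's loop has more branches; `resolveResultId` is an illustrative helper, not adapter code):

```typescript
// Mirror the legacy 'done' handler: RUN_FINISHED should also populate the id.
type SummarizeChunk =
  | { type: 'done'; id: string }
  | { type: 'RUN_FINISHED'; runId?: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }

function resolveResultId(chunks: Array<SummarizeChunk>): string {
  let id = ''
  for (const chunk of chunks) {
    if (chunk.type === 'done') {
      id = chunk.id // legacy path
    } else if (chunk.type === 'RUN_FINISHED') {
      id = chunk.runId || id // AG-UI path; keep an earlier id if runId is absent
    }
  }
  return id
}
```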

In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts`:
- Around line 79-87: The TEXT_MESSAGE_CONTENT branch fails to set the message
identifier so SummarizationResult.id remains empty when only AG-UI events
arrive; update the handler for chunk.type === 'TEXT_MESSAGE_CONTENT' to extract
the AG-UI message id (chunk.messageId) and assign it to the same id variable
used by the legacy content path (the one set from chunk.id), e.g., set id =
chunk.messageId (or id ||= chunk.messageId) alongside assigning summary and
model, ensuring the SummarizationResult (and types like TextMessageContentEvent)
carries the messageId.

In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 453-458: The code currently swallows JSON.parse errors when
parsing tc.arguments into parsedInput; change the catch so it preserves/exports
the raw malformed input and logs or includes it in the TOOL_CALL_END payload for
debugging: keep parsedInput = {} on parse failure but set a rawArguments (or
rawArgumentsOnError) variable to tc.arguments and call the existing logger/event
emitter to record a warning including tc.arguments and the parse error, and add
that rawArguments field to the TOOL_CALL_END event payload so consumers can
inspect the original malformed string; refer to parsedInput, tc.arguments and
TOOL_CALL_END to locate where to add the log/event and the new field.

In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts`:
- Around line 389-439: The test currently assumes the first yielded StreamChunk
is an AG-UI RUN_STARTED which breaks if legacy chunks interleave; update the
test that consumes adapter.chatStream (the loop producing chunks:
Array<StreamChunk>) to first filter or map the collected chunks to only AG-UI
event types (look for chunk.type matching RUN_STARTED, RUN_FINISHED, etc.) and then
assert that the first AG-UI event's type is 'RUN_STARTED' and validate its runId
and model (refer to adapter.chatStream, StreamChunk, and the 'RUN_STARTED'
symbol when locating where to change assertions).
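The filtering step could be sketched like this (assumed helper names; the event-type list follows the AG-UI types discussed in this PR):

```typescript
// Filter collected chunks down to AG-UI event types before asserting order,
// so interleaved legacy chunks cannot break the RUN_STARTED assertion.
const AGUI_EVENT_TYPES = new Set([
  'RUN_STARTED', 'RUN_FINISHED', 'RUN_ERROR',
  'TEXT_MESSAGE_START', 'TEXT_MESSAGE_CONTENT', 'TEXT_MESSAGE_END',
  'TOOL_CALL_START', 'TOOL_CALL_ARGS', 'TOOL_CALL_END',
  'STEP_STARTED', 'STEP_FINISHED',
])

function firstAguiEvent<T extends { type: string }>(
  chunks: Array<T>,
): T | undefined {
  return chunks.find((c) => AGUI_EVENT_TYPES.has(c.type))
}
```

The test would then assert that `firstAguiEvent(chunks)` has type 'RUN_STARTED' and validate its runId and model, even if legacy chunks arrive first.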
🧹 Nitpick comments (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)

324-332: Consider logging JSON parse failures for tool call arguments.

When JSON.parse(toolCall.arguments) fails, the error is silently caught and parsedInput defaults to {}. While this prevents stream interruption, it could mask malformed responses from the API. Consider adding a debug-level log to aid troubleshooting.

🔧 Suggested improvement
              try {
                parsedInput = toolCall.arguments
                  ? JSON.parse(toolCall.arguments)
                  : {}
              } catch {
+               console.warn(
+                 `[Grok Adapter] Failed to parse tool call arguments for ${toolCall.name}:`,
+                 toolCall.arguments?.slice(0, 100),
+               )
                parsedInput = {}
              }

Comment on lines 71 to 79
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect StreamChunk type and AG-UI chunk fields
echo "=== StreamChunk Type Definition ==="
rg -n --type=ts 'type StreamChunk|interface StreamChunk' -C3

echo -e "\n=== AG-UI Chunk Types ==="
rg -n --type=ts 'TEXT_MESSAGE_CONTENT|RUN_STARTED|RUN_FINISHED' -C3

echo -e "\n=== Summarize Adapters ID Handling ==="
fd -e ts 'summarize.ts' packages/typescript/

Repository: TanStack/ai

Length of output: 50367


🏁 Script executed:

#!/bin/bash
# Inspect StreamChunk type and AG-UI chunk fields
echo "=== StreamChunk Type Definition ==="
rg -n --type=ts 'type StreamChunk|interface StreamChunk' -C5

echo -e "\n=== AG-UI Chunk Types ==="
rg -n --type=ts 'TEXT_MESSAGE_CONTENT|RUN_STARTED|RUN_FINISHED' -C3

echo -e "\n=== Summarize Adapter Files ==="
find packages/typescript -name 'summarize.ts' -type f

Repository: TanStack/ai

Length of output: 50367


🏁 Script executed:

#!/bin/bash
echo "=== Full ai-grok summarize.ts file ==="
cat -n packages/typescript/ai-grok/src/adapters/summarize.ts | head -100

echo -e "\n=== ai-openai summarize ID handling ==="
sed -n '65,95p' packages/typescript/ai-openai/src/adapters/summarize.ts | cat -n

echo -e "\n=== SummarizationResult type definition ==="
rg -n 'interface SummarizationResult|type SummarizationResult' -A5 packages/typescript/ai/src/types.ts

Repository: TanStack/ai

Length of output: 5087


Populate SummarizationResult.id from AG-UI RUN_FINISHED.runId.

When only AG-UI events are emitted, id remains empty. Extract chunk.runId from the RUN_FINISHED event branch to populate the result id field, matching the pattern in the legacy 'done' event handler that sets id = chunk.id.

πŸ€– Prompt for AI Agents
In `@packages/typescript/ai-grok/src/adapters/summarize.ts` around lines 71 - 79,
The RUN_FINISHED event branch currently doesn't set the SummarizationResult.id,
so when only AG-UI events are emitted the id stays empty; update the
RUN_FINISHED handler (the branch checking chunk.type === 'RUN_FINISHED') to
assign id = chunk.runId (or id = chunk.runId || id) similar to how the legacy
'done' handler sets id = chunk.id, ensuring the final result object produced by
this adapter includes the runId. Also verify the variable used to hold the
result id (the same id referenced later when returning the SummarizationResult)
is updated in that branch so TEXT_MESSAGE_CONTENT handling remains unchanged.
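A minimal sketch of the suggested fix, reduced to a standalone reducer; the event shapes and variable names are illustrative assumptions, not the adapter's actual types:

```typescript
// Illustrative reducer showing how RUN_FINISHED carries the run id into the
// result, mirroring the legacy 'done' handler. Event shapes are assumptions.
type Event =
  | { type: 'RUN_FINISHED'; runId: string; finishReason: string | null }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string }

function collectSummary(events: Array<Event>): { id: string; summary: string } {
  let id = ''
  let summary = ''
  for (const chunk of events) {
    if (chunk.type === 'RUN_FINISHED') {
      // Populate the result id from the run, keeping any previously set id.
      id = chunk.runId || id
    } else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
      summary += chunk.delta
    }
  }
  return { id, summary }
}
```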

Comment on lines 79 to 87
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, locate the summarize.ts file
fd -t f "summarize.ts" packages/typescript/ai-openrouter/src/adapters/

Repository: TanStack/ai

Length of output: 114


🏁 Script executed:

# Once located, examine the code around lines 58-107
cat -n packages/typescript/ai-openrouter/src/adapters/summarize.ts | sed -n '55,110p'

Repository: TanStack/ai

Length of output: 2157


🏁 Script executed:

# Search for TEXT_MESSAGE_CONTENT type definition
rg -n "TEXT_MESSAGE_CONTENT" packages/typescript --type=ts -B 3 -A 10 | head -100

Repository: TanStack/ai

Length of output: 7724


🏁 Script executed:

# Search for type definitions of chunk or event structures in AI-related files
rg -n "type.*Content.*=" packages/typescript/ai-openrouter/src --type=ts -A 5 | head -80

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Search for RUN_STARTED and other AG-UI events that might provide ID
rg -n "RUN_STARTED\|TextMessageStartEvent\|BaseAGUIEvent" packages/typescript/ai/src/types.ts -A 8 | head -60

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Also check if messageId is optional or required in TextMessageContentEvent
rg -n "interface.*Event\|type.*Event" packages/typescript/ai/src/types.ts -A 6 | grep -A 6 "TextMessageContent\|RUN_START"

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Let's look at the types.ts file directly around the event definitions
cat -n packages/typescript/ai/src/types.ts | sed -n '750,850p'

Repository: TanStack/ai

Length of output: 3105


🏁 Script executed:

# Search for BaseAGUIEvent and related event definitions
rg -n "BaseAGUIEvent\|interface.*Start\|interface.*Finished" packages/typescript/ai/src/types.ts -B 2 -A 5

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Let's check if there are any other files that handle summarizeStream to see how they handle ID
rg -n "summarizeStream" packages/typescript/ai-openrouter/src/adapters/summarize.ts -A 20 | head -40

Repository: TanStack/ai

Length of output: 832


🏁 Script executed:

# Also check what fields the legacy 'content' chunk type has
rg -n "interface.*ContentEvent\|interface.*ChunkType\|type.*Chunk" packages/typescript/ai/src/types.ts -B 2 -A 6 | head -80

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Search for ContentChunk or legacy chunk definitions
rg -n "type.*ContentChunk\|interface.*ContentChunk\|type.*LegacyChunk" packages/typescript/ai/src/types.ts -A 8

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Look at the StreamChunk type union to see all possible chunk types
rg -n "type StreamChunk\|export type.*Chunk" packages/typescript/ai/src/types.ts -A 20 | head -80

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Let's find where StreamChunk is defined
rg -n "StreamChunk" packages/typescript/ai/src/types.ts | head -20

Repository: TanStack/ai

Length of output: 1027


🏁 Script executed:

# Get the actual definition area
cat -n packages/typescript/ai/src/types.ts | sed -n '690,750p'

Repository: TanStack/ai

Length of output: 2114


🏁 Script executed:

# Let's see the ContentStreamChunk definition
cat -n packages/typescript/ai/src/types.ts | sed -n '918,926p'

Repository: TanStack/ai

Length of output: 406


🏁 Script executed:

# And let's check what the StreamChunk union type includes
cat -n packages/typescript/ai/src/types.ts | sed -n '990,1010p'

Repository: TanStack/ai

Length of output: 841


🏁 Script executed:

# Verify if messageId is a required field (not optional) in TextMessageContentEvent
cat -n packages/typescript/ai/src/types.ts | sed -n '782,790p'

Repository: TanStack/ai

Length of output: 378


🏁 Script executed:

# Check if RUN_STARTED has a runId that could be used as fallback
cat -n packages/typescript/ai/src/types.ts | sed -n '729,735p'

Repository: TanStack/ai

Length of output: 301


Missing messageId assignment for AG-UI TEXT_MESSAGE_CONTENT events.

The legacy content event handler assigns id = chunk.id (line 76), but the TEXT_MESSAGE_CONTENT handler doesn't extract the message identifier. If only AG-UI events are received, the returned SummarizationResult will have an empty id. The TextMessageContentEvent interface defines messageId: string as a required field that should be captured.

Update the handler to extract messageId:

Proposed fix
       // AG-UI TEXT_MESSAGE_CONTENT event
       else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+        id = chunk.messageId
         if (chunk.content) {
           summary = chunk.content
         } else {
           summary += chunk.delta
         }
         model = chunk.model || model
       }
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
id = chunk.messageId
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}
πŸ€– Prompt for AI Agents
In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts` around lines 79
- 87, The TEXT_MESSAGE_CONTENT branch fails to set the message identifier so
SummarizationResult.id remains empty when only AG-UI events arrive; update the
handler for chunk.type === 'TEXT_MESSAGE_CONTENT' to extract the AG-UI message
id (chunk.messageId) and assign it to the same id variable used by the legacy
content path (the one set from chunk.id), e.g., set id = chunk.messageId (or id
||= chunk.messageId) alongside assigning summary and model, ensuring the
SummarizationResult (and types like TextMessageContentEvent) carries the
messageId.

Comment on lines +453 to +458
let parsedInput: unknown = {}
try {
parsedInput = tc.arguments ? JSON.parse(tc.arguments) : {}
} catch {
parsedInput = {}
}

⚠️ Potential issue | 🟑 Minor

Silent JSON.parse failure hides malformed tool arguments.

When tc.arguments contains invalid JSON, the error is silently swallowed and parsedInput defaults to {}. This could hide issues with malformed arguments from the LLM and make debugging difficult.

Consider including the raw arguments in the event or logging a warning:

Proposed fix
           let parsedInput: unknown = {}
           try {
             parsedInput = tc.arguments ? JSON.parse(tc.arguments) : {}
           } catch {
-            parsedInput = {}
+            // Keep raw string if parsing fails
+            parsedInput = tc.arguments || {}
           }

Alternatively, add a rawArguments field to TOOL_CALL_END for debugging:

           yield {
             type: 'TOOL_CALL_END',
             toolCallId: tc.id,
             toolName: tc.name,
             model: meta.model,
             timestamp: meta.timestamp,
             input: parsedInput,
+            rawArguments: tc.arguments,
           }
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let parsedInput: unknown = {}
try {
parsedInput = tc.arguments ? JSON.parse(tc.arguments) : {}
} catch {
parsedInput = {}
}
let parsedInput: unknown = {}
try {
parsedInput = tc.arguments ? JSON.parse(tc.arguments) : {}
} catch {
// Keep raw string if parsing fails
parsedInput = tc.arguments || {}
}
πŸ€– Prompt for AI Agents
In `@packages/typescript/ai-openrouter/src/adapters/text.ts` around lines 453 -
458, The code currently swallows JSON.parse errors when parsing tc.arguments
into parsedInput; change the catch so it preserves/exports the raw malformed
input and logs or includes it in the TOOL_CALL_END payload for debugging: keep
parsedInput = {} on parse failure but set a rawArguments (or
rawArgumentsOnError) variable to tc.arguments and call the existing logger/event
emitter to record a warning including tc.arguments and the parse error, and add
that rawArguments field to the TOOL_CALL_END event payload so consumers can
inspect the original malformed string; refer to parsedInput, tc.arguments and
TOOL_CALL_END to locate where to add the log/event and the new field.
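A small sketch of the tolerant-parse pattern the comment suggests; the rawArguments field name comes from the proposed fix and should be treated as an assumption, not the package's actual API:

```typescript
// Parse tool-call arguments, keeping the raw string when JSON.parse fails so
// malformed LLM output stays inspectable instead of silently becoming {}.
function parseToolArgs(raw: string | undefined): {
  input: unknown
  rawArguments?: string
} {
  if (!raw) return { input: {} }
  try {
    return { input: JSON.parse(raw) }
  } catch {
    // Preserve the malformed payload for debugging/logging downstream.
    return { input: {}, rawArguments: raw }
  }
}
```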

Comment on lines +389 to +439
describe('OpenRouter AG-UI event emission', () => {
beforeEach(() => {
vi.clearAllMocks()
})

it('emits RUN_STARTED as the first event', async () => {
const streamChunks = [
{
id: 'chatcmpl-123',
model: 'openai/gpt-4o-mini',
choices: [
{
delta: { content: 'Hello' },
finishReason: null,
},
],
},
{
id: 'chatcmpl-123',
model: 'openai/gpt-4o-mini',
choices: [
{
delta: {},
finishReason: 'stop',
},
],
usage: {
promptTokens: 5,
completionTokens: 1,
totalTokens: 6,
},
},
]

setupMockSdkClient(streamChunks)
const adapter = createAdapter()
const chunks: Array<StreamChunk> = []

for await (const chunk of adapter.chatStream({
model: 'openai/gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello' }],
})) {
chunks.push(chunk)
}

expect(chunks[0]?.type).toBe('RUN_STARTED')
if (chunks[0]?.type === 'RUN_STARTED') {
expect(chunks[0].runId).toBeDefined()
expect(chunks[0].model).toBe('openai/gpt-4o-mini')
}
})

⚠️ Potential issue | 🟑 Minor

Avoid hard-coding RUN_STARTED at index 0 if legacy chunks can interleave.
If legacy StreamChunk types are still emitted, positional asserts can be brittle; consider filtering to AG‑UI events before ordering checks.

♻️ Suggested adjustment to avoid legacy chunk ordering brittleness
-    expect(chunks[0]?.type).toBe('RUN_STARTED')
-    if (chunks[0]?.type === 'RUN_STARTED') {
-      expect(chunks[0].runId).toBeDefined()
-      expect(chunks[0].model).toBe('openai/gpt-4o-mini')
-    }
+    const aguiChunks = chunks.filter((c) =>
+      [
+        'RUN_STARTED',
+        'TEXT_MESSAGE_START',
+        'TEXT_MESSAGE_CONTENT',
+        'TEXT_MESSAGE_END',
+        'RUN_FINISHED',
+        'RUN_ERROR',
+        'TOOL_CALL_START',
+        'TOOL_CALL_ARGS',
+        'TOOL_CALL_END',
+        'STEP_STARTED',
+        'STEP_FINISHED',
+      ].includes(c.type),
+    )
+    expect(aguiChunks[0]?.type).toBe('RUN_STARTED')
+    if (aguiChunks[0]?.type === 'RUN_STARTED') {
+      expect(aguiChunks[0].runId).toBeDefined()
+      expect(aguiChunks[0].model).toBe('openai/gpt-4o-mini')
+    }
πŸ€– Prompt for AI Agents
In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts` around
lines 389 - 439, The test currently assumes the first yielded StreamChunk is an
AG-UI RUN_STARTED which breaks if legacy chunks interleave; update the test that
consumes adapter.chatStream (the loop producing chunks: Array<StreamChunk>) to
first filter or map the collected chunks to only AG-UI event types (look for
chunk.type matching RUN_STARTED, RUN_FINISHED, etc.) and then assert that the first
AG-UI event's type is 'RUN_STARTED' and validate its runId and model (refer to
adapter.chatStream, StreamChunk, and the 'RUN_STARTED' symbol when locating
where to change assertions).
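A type-guard-style filter like the one suggested can be sketched as follows; the event-type list is taken from the diff above and may not be exhaustive:

```typescript
// Filter a mixed chunk stream down to AG-UI events before positional asserts,
// so interleaved legacy chunks don't break ordering checks.
const AGUI_EVENT_TYPES = new Set([
  'RUN_STARTED',
  'RUN_FINISHED',
  'RUN_ERROR',
  'TEXT_MESSAGE_START',
  'TEXT_MESSAGE_CONTENT',
  'TEXT_MESSAGE_END',
  'TOOL_CALL_START',
  'TOOL_CALL_ARGS',
  'TOOL_CALL_END',
  'STEP_STARTED',
  'STEP_FINISHED',
])

function filterAGUIEvents<T extends { type: string }>(
  chunks: Array<T>,
): Array<T> {
  return chunks.filter((c) => AGUI_EVENT_TYPES.has(c.type))
}
```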

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
packages/typescript/ai-ollama/src/adapters/summarize.ts (1)

107-155: Missing RUN_STARTED eventβ€”AG-UI protocol compliance issue.

The summarizeStream method starts emitting events with TEXT_MESSAGE_CONTENT but never emits the required RUN_STARTED event at the beginning of the stream. Other adapters (Grok, OpenRouter) delegate to their text adapters, which correctly emit RUN_STARTED first. The Ollama summarize adapter implements streaming directly and omits this required protocol event.

Add a RUN_STARTED event as the first yield before processing the stream:

    const stream = await this.client.generate({
      model,
      prompt,
      options: {
        temperature: 0.3,
        num_predict: options.maxLength ?? 500,
      },
      stream: true,
    })

+   yield {
+     type: 'RUN_STARTED',
+     runId: id,
+     model,
+     timestamp: Date.now(),
+   }
+
    for await (const chunk of stream) {
packages/typescript/ai/src/stream-to-response.ts (1)

202-214: Same runId concern applies to toHttpStream error handler.

This error handler has the same pattern as toServerSentEventsStreamβ€”emitting RUN_ERROR without runId. Ensure consistency once the interface requirements are clarified.

packages/typescript/ai/src/activities/summarize/index.ts (1)

140-143: Update docstring example to use AG-UI event type.

The streaming example still references chunk.type === 'content', but the implementation now emits TEXT_MESSAGE_CONTENT events. Users following this example would not receive any output.

πŸ“ Suggested fix
  * `@example` Streaming summarization
  * ```ts
  * for await (const chunk of summarize({
  *   adapter: openaiSummarize('gpt-4o-mini'),
  *   text: 'Long article text here...',
  *   stream: true
  * })) {
-  *   if (chunk.type === 'content') {
+  *   if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
  *     process.stdout.write(chunk.delta)
  *   }
  * }
  * ```
packages/python/tanstack-ai/src/tanstack_ai/chat.py (1)

538-546: Docstring example uses outdated chunk type.

The example in the docstring still references chunk["type"] == "content", which should be updated to "TEXT_MESSAGE_CONTENT" to align with the AG-UI protocol changes in this PR.

πŸ“ Proposed fix
     Example:
         >>> async for chunk in chat(
         ...     adapter=anthropic_adapter,
         ...     model="claude-3-5-sonnet-20241022",
         ...     messages=[{"role": "user", "content": "Hello!"}],
         ...     tools=[weather_tool],
         ... ):
-        ...     if chunk["type"] == "content":
+        ...     if chunk["type"] == "TEXT_MESSAGE_CONTENT":
         ...         print(chunk["delta"], end="", flush=True)
πŸ€– Fix all issues with AI agents
In `@docs/protocol/chunk-definitions.md`:
- Around line 293-302: Add a language specifier to the fenced code block that
contains the protocol example starting with RUN_STARTED / TOOL_CALL_START (name:
"get_weather") so the block is marked (e.g., ```text) instead of a plain fence;
locate the fenced block that shows RUN_STARTED, TOOL_CALL_START, TOOL_CALL_ARGS,
TEXT_MESSAGE_START/CONTENT/END and RUN_FINISHED and modify its opening backticks
to include the language specifier.
- Around line 321-332: The fenced code block showing the protocol example
(containing tokens like RUN_STARTED, TOOL_CALL_START, TOOL_CALL_END,
TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT, RUN_FINISHED) needs a language
specifier for correct rendering; update the opening fence from ``` to ```text so
the block becomes ```text and preserves formatting/monospace rendering for those
entries.
- Around line 281-291: The fenced code block showing the event sequence
(starting with RUN_STARTED and STEP_STARTED (stepType: "thinking")) is missing a
language specifier; update that triple-backtick fence to include a language
token such as text or plaintext (e.g., ```text) so the linter stops flagging it
and the sequence/flow diagram renders as plain text.
- Around line 304-316: The fenced code block showing the client tool approval
sequence (containing tokens like RUN_STARTED, TOOL_CALL_START (name:
"send_email"), CUSTOM (name: "approval-requested"), and RUN_FINISHED) lacks a
language specifier; update the opening fence from ``` to ```text (or another
appropriate language) in docs/protocol/chunk-definitions.md so the block becomes
a labeled plain-text code fence to enable correct formatting and highlighting.

In `@packages/python/tanstack-ai/src/tanstack_ai/types.py`:
- Around line 177-185: TextMessageStartEvent currently uses a TypedDict that
makes all fields required; change its definition to use total=False so optional
keys like model are truly optional, and mark mandatory fields explicitly with
typing_extensions.Required (or Required from typing if available) for keys such
as type, messageId, timestamp, and roleβ€”mirroring the pattern used in
RunStartedEventβ€”to ensure required vs optional fields are correct.
- Around line 262-278: StateSnapshotEvent and StateDeltaEvent currently mark
`model` as a required field due to total=True; update both classes so `model`
can be omitted by declaring it as NotRequired[Optional[str]] instead of
Optional[str], and add the appropriate NotRequired import (from
typing_extensions or typing depending on supported Python version); modify the
`model` annotations in the StateSnapshotEvent and StateDeltaEvent TypedDicts
accordingly so other fields remain required while `model` may be absent or None.
- Around line 198-205: TextMessageEndEvent currently declares model as
Optional[str] but still counts it as a required key on the TypedDict; change the
TypedDict so model is non-required by using NotRequired (e.g., model:
NotRequired[Optional[str]]) on the TextMessageEndEvent class and add the
corresponding import (from typing_extensions import NotRequired or from typing
import NotRequired if supported), leaving other fields required.
- Around line 146-154: The RunStartedEvent TypedDict currently makes all fields
required; update the RunStartedEvent definition so model and threadId are
optional like the TypeScript version β€” either convert the TypedDict to
total=False (e.g., class RunStartedEvent(TypedDict, total=False)) and then mark
required fields explicitly with Required[...] for type, runId, and timestamp, or
keep total=True but wrap only the required keys with Required[]; if using
Required, add the appropriate import (from typing import Required for Python
3.11+ or from typing_extensions import Required).

In `@packages/typescript/smoke-tests/adapters/src/harness.ts`:
- Around line 232-243: The TOOL_CALL_ARGS branch uses "chunk.args || ..." which
treats an empty string as missing and preserves stale args; update the
assignment in the block that handles chunk.type === 'TOOL_CALL_ARGS'
(referencing toolCallsInProgress, existing.args, chunk.args, chunk.delta, and
chunkData.*) to check for undefined explicitlyβ€”set existing.args to chunk.args
if chunk.args !== undefined, otherwise append chunk.delta (using a nullish check
for chunk.delta to allow empty strings) so empty args are preserved and deltas
still apply.
- Around line 199-216: The code drops the first text delta when chunk.content is
undefined because assistantDraft is initialized with chunk.content while delta
is appended only on later chunks; to fix, compute a unified fallback delta
(e.g., let delta = chunk.delta || chunk.content || '') and use that for both
fullResponse and assistantDraft initialization/update: add delta to
fullResponse, set assistantDraft.content = delta when creating the draft, and
append delta to assistantDraft.content for subsequent chunks; update references
to chunk.delta/chunk.content accordingly in the TEXT_MESSAGE_CONTENT handling
block (assistantDraft, fullResponse, chunkData).
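The unified fallback described above can be sketched as a tiny helper; field names mirror the prompt and this is an illustration, not the harness code:

```typescript
// Unified delta fallback: prefer the incremental delta, fall back to content.
// Nullish coalescing (??) preserves empty-string deltas, unlike ||.
function nextDelta(chunk: { delta?: string; content?: string }): string {
  return chunk.delta ?? chunk.content ?? ''
}
```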

In `@packages/typescript/smoke-tests/e2e/src/routes/index.tsx`:
- Around line 1-4: Reorder the named imports to satisfy the linter: alphabetize
members within each import so useMemo follows useState in the React import and
fetchServerSentEvents follows useChat in the `@tanstack/ai-react` import; keep the
import groups intact (React, `@tanstack/`* packages, types) and ensure the import
lines reference the same symbols (useState, useMemo, createFileRoute, useChat,
fetchServerSentEvents, UIMessage) so the sort-imports rule is satisfied.

In `@packages/typescript/smoke-tests/e2e/src/routes/mock.tsx`:
- Around line 1-19: Change the array type annotation from the bracket form to
the generic form for VALID_SCENARIOS to satisfy the lint rule: update the
declaration that references MockScenario so it uses Array<MockScenario> (e.g.,
change the type on the VALID_SCENARIOS constant from MockScenario[] to
Array<MockScenario>), leaving the values unchanged and keeping the MockScenario
type intact.

In `@packages/typescript/smoke-tests/e2e/tests/chat.spec.ts`:
- Line 1: The import groups are wrong: move the type-only Page into a separate
type import and alphabetize the value imports; change "import { test, expect,
Page } from '@playwright/test'" to value imports in alphabetical order (import {
expect, test } from '@playwright/test') and add a separate type import (import
type { Page } from '@playwright/test') so Page is a type-only import and value
imports are sorted.
🧹 Nitpick comments (10)
packages/typescript/ai-ollama/src/adapters/summarize.ts (1)

141-153: Add TEXT_MESSAGE_END event before RUN_FINISHED in summarize adapter.

The AG-UI protocol requires a message lifecycle: TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT, TEXT_MESSAGE_END, and RUN_FINISHED. The summarize adapter emits TEXT_MESSAGE_CONTENT directly followed by RUN_FINISHED, skipping the TEXT_MESSAGE_END event. This pattern is inconsistent with the text adapter and other provider implementations, which properly close messages before finishing runs.
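The required ordering can be sketched with a tiny generator; the shapes and field names are illustrative assumptions, not the package's actual event types:

```typescript
// Minimal sketch of the AG-UI message lifecycle ordering described above:
// START -> CONTENT -> END, then RUN_FINISHED closes the run.
type LifecycleEvent =
  | { type: 'TEXT_MESSAGE_START'; messageId: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string }
  | { type: 'TEXT_MESSAGE_END'; messageId: string }
  | { type: 'RUN_FINISHED'; runId: string; finishReason: 'stop' }

function* messageLifecycle(
  runId: string,
  messageId: string,
  text: string,
): Generator<LifecycleEvent> {
  yield { type: 'TEXT_MESSAGE_START', messageId }
  yield { type: 'TEXT_MESSAGE_CONTENT', messageId, delta: text }
  // Close the message before finishing the run.
  yield { type: 'TEXT_MESSAGE_END', messageId }
  yield { type: 'RUN_FINISHED', runId, finishReason: 'stop' }
}
```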

packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (1)

72-95: Consider a secondary index for O(1) lookup by toolCallId.

Both addToolCallArgsEvent and completeToolCall iterate over all entries to find a tool call by toolCallId. While this is acceptable for typical usage (few concurrent tool calls), a secondary Map<string, ToolCall> keyed by toolCallId would provide O(1) lookups.

♻️ Suggested optimization
 export class ToolCallManager {
   private toolCallsMap = new Map<number, ToolCall>()
+  private toolCallsById = new Map<string, ToolCall>()
   private tools: ReadonlyArray<Tool>

   // In addToolCallStartEvent:
+  this.toolCallsById.set(event.toolCallId, toolCall)

   // In addToolCallArgsEvent/completeToolCall:
-  for (const [, toolCall] of this.toolCallsMap.entries()) {
-    if (toolCall.id === event.toolCallId) {
-      // ...
-      break
-    }
-  }
+  const toolCall = this.toolCallsById.get(event.toolCallId)
+  if (toolCall) {
+    // ...
+  }

   // In clear():
+  this.toolCallsById.clear()
docs/guides/streaming.md (1)

79-92: Verify the STEP_FINISHED event structure in the code example.

The code example shows accessing chunk.content and chunk.delta on a STEP_FINISHED event. Based on the test file (stream-to-response.test.ts lines 636-643), STEP_FINISHED does include delta and content fields for thinking content. However, the documentation should clarify that these fields represent the accumulated thinking content, not incremental updates like TEXT_MESSAGE_CONTENT.

Consider adding a brief note to distinguish the semantics of delta/content in thinking steps vs. text messages, as developers may expect incremental deltas here.

packages/typescript/ai/tests/stream-to-response.test.ts (1)

458-492: Consider adding proper typing to the parseSSEStream helper.

The helper function returns Array<StreamChunk> but the parsed JSON is cast without validation. While acceptable for test code, consider using type guards or Zod validation to ensure the parsed chunks conform to expected shapes, reducing the need for as any casts in assertions.

πŸ’‘ Example with type narrowing
function isTextMessageContent(chunk: StreamChunk): chunk is TextMessageContentEvent {
  return chunk.type === 'TEXT_MESSAGE_CONTENT'
}

// Then in tests:
if (isTextMessageContent(parsedChunks[0])) {
  expect(parsedChunks[0].messageId).toBe('msg-1')
}
packages/typescript/ai-client/tests/test-utils.ts (1)

149-212: Consider emitting TOOL_CALL_END events for complete tool call sequences.

The helper emits TOOL_CALL_START and TOOL_CALL_ARGS but not TOOL_CALL_END. According to the AG-UI protocol, TOOL_CALL_END should mark the completion of a tool call. If tests expect complete tool call lifecycles, this may cause test failures or incomplete coverage.

♻️ Proposed addition after TOOL_CALL_ARGS
     // TOOL_CALL_ARGS event
     chunks.push({
       type: 'TOOL_CALL_ARGS',
       toolCallId: toolCall.id,
       model,
       timestamp: Date.now(),
       delta: toolCall.arguments,
     })
+
+    // TOOL_CALL_END event
+    chunks.push({
+      type: 'TOOL_CALL_END',
+      toolCallId: toolCall.id,
+      toolName: toolCall.name,
+      model,
+      timestamp: Date.now(),
+      input: parsedInput,
+    })
docs/protocol/chunk-definitions.md (1)

40-43: Document STATE_SNAPSHOT, STATE_DELTA, and CUSTOM event shapes.

STATE_SNAPSHOT, STATE_DELTA, and CUSTOM are listed in AGUIEventType but lack interface definitions in this document. Adding their shapes would complete the protocol documentation.

πŸ“ Suggested additions after STEP_FINISHED section
---

### STATE_SNAPSHOT

Emitted to provide a full state synchronization.

```typescript
interface StateSnapshotEvent extends BaseAGUIEvent {
  type: 'STATE_SNAPSHOT';
  state: Record<string, unknown>;
}
```

### STATE_DELTA

Emitted to provide an incremental state update.

```typescript
interface StateDeltaEvent extends BaseAGUIEvent {
  type: 'STATE_DELTA';
  delta: Record<string, unknown>;
}
```

### CUSTOM

Custom event for extensibility. Used for application-specific events like approval requests.

```typescript
interface CustomEvent extends BaseAGUIEvent {
  type: 'CUSTOM';
  name: string;
  data?: unknown;
}
```

Example (approval request):

```json
{
  "type": "CUSTOM",
  "name": "approval-requested",
  "timestamp": 1701234567890,
  "data": {
    "toolCallId": "call_abc123",
    "toolName": "send_email",
    "input": { "to": "[email protected]" },
    "approval": { "id": "approval_xyz", "needsApproval": true }
  }
}
```

packages/typescript/ai/src/types.ts (1)

`714-725`: **Consider a shared `TokenUsage` type to reduce duplication.**  
This keeps usage fields in sync across events/chunks.


♻️ Proposed refactor

```diff
+export interface TokenUsage {
+  promptTokens: number
+  completionTokens: number
+  totalTokens: number
+}
+
 export interface RunFinishedEvent extends BaseAGUIEvent {
   type: 'RUN_FINISHED'
   /** Run identifier */
   runId: string
   /** Why the generation stopped */
   finishReason: 'stop' | 'length' | 'content_filter' | 'tool_calls' | null
   /** Token usage statistics */
-  usage?: {
-    promptTokens: number
-    completionTokens: number
-    totalTokens: number
-  }
+  usage?: TokenUsage
 }
 
 export interface TextCompletionChunk {
   id: string
   model: string
   content: string
   role?: 'assistant'
   finishReason?: 'stop' | 'length' | 'content_filter' | null
-  usage?: {
-    promptTokens: number
-    completionTokens: number
-    totalTokens: number
-  }
+  usage?: TokenUsage
 }
```

Also applies to: 902-906

packages/typescript/smoke-tests/e2e/src/routes/index.tsx (2)

14-63: Prefer StreamProcessor-based SSE parsing over manual JSON parsing.
The custom reader drops invalid JSON rather than buffering/repairing partial chunks; using the built-in streaming utilities avoids subtle edge cases and keeps parsing consistent with the rest of the codebase.

♻️ Possible refactor (reuse built-in SSE connection)
-function createMockConnection(scenario: MockScenario) {
-  return {
-    async *connect(
-      messages: Array<any>,
-      body: Record<string, any>,
-      abortSignal?: AbortSignal,
-    ) {
-      const response = await fetch('/api/mock-chat', {
-        method: 'POST',
-        headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify({ ...body, messages, scenario }),
-        signal: abortSignal,
-      })
-      ...
-    },
-  }
-}
+// Reuse fetchServerSentEvents (StreamProcessor-backed)
+const getMockConnection = () => fetchServerSentEvents('/api/mock-chat')
-  const connection = useMemo(() => {
-    if (apiMode === 'mock') {
-      return createMockConnection(mockScenario)
-    }
-    return fetchServerSentEvents('/api/tanchat')
-  }, [apiMode, mockScenario])
+  const connection = useMemo(() => {
+    return apiMode === 'mock'
+      ? getMockConnection()
+      : fetchServerSentEvents('/api/tanchat')
+  }, [apiMode])
-  const { messages, sendMessage, isLoading, stop, error } = useChat({
-    connection,
-  })
+  const { messages, sendMessage, isLoading, stop, error } = useChat({
+    connection,
+    body: apiMode === 'mock' ? { scenario: mockScenario } : undefined,
+  })
As per coding guidelines, Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses.

65-97: Deduplicate getMessageStats with /mock route.
This helper is identical in mock.tsx; consider extracting to a shared test utility to avoid divergence.

packages/typescript/smoke-tests/e2e/src/routes/mock.tsx (1)

21-53: Deduplicate getMessageStats with index route.
Consider exporting this helper from a shared test util to keep stats logic consistent.

Comment on lines 281 to 291
2. **With Thinking:**
```
ThinkingStreamChunk (delta: "I need to...")
ThinkingStreamChunk (delta: " check the weather")
ContentStreamChunk (delta: "Let me check")
DoneStreamChunk (finishReason: "stop")
RUN_STARTED
STEP_STARTED (stepType: "thinking")
STEP_FINISHED (delta: "I need to...")
STEP_FINISHED (delta: " check the weather")
TEXT_MESSAGE_START
TEXT_MESSAGE_CONTENT (delta: "Let me check")
TEXT_MESSAGE_END
RUN_FINISHED (finishReason: "stop")
```

⚠️ Potential issue | 🟑 Minor

Add language specifier to fenced code block.

The linter flagged this code block as missing a language specifier. Since this is a sequence/flow diagram, consider using a text or plaintext identifier.

🩹 Suggested fix
-```
+```text
    RUN_STARTED
    STEP_STARTED (stepType: "thinking")
    ...
🧰 Tools
πŸͺ› markdownlint-cli2 (0.18.1)

282-282: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

πŸ€– Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 281 - 291, The fenced code
block showing the event sequence (starting with RUN_STARTED and STEP_STARTED
(stepType: "thinking")) is missing a language specifier; update that
triple-backtick fence to include a language token such as text or plaintext
(e.g., ```text) so the linter stops flagging it and the sequence/flow diagram
renders as plain text.

Comment on lines 293 to 302
3. **Tool Usage:**
```
-ToolCallStreamChunk (name: "get_weather")
-ToolResultStreamChunk (content: "{...}")
-ContentStreamChunk (delta: "The weather is...")
-DoneStreamChunk (finishReason: "stop")
+RUN_STARTED
+TOOL_CALL_START (name: "get_weather")
+TOOL_CALL_ARGS / TOOL_CALL_END (result: "{...}")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "The weather is...")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```

⚠️ Potential issue | 🟑 Minor

Add language specifier to fenced code block.

🩹 Suggested fix
-```
+```text
    RUN_STARTED
    TOOL_CALL_START (name: "get_weather")
    ...
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
3. **Tool Usage:**
```
ToolCallStreamChunk (name: "get_weather")
ToolResultStreamChunk (content: "{...}")
ContentStreamChunk (delta: "The weather is...")
DoneStreamChunk (finishReason: "stop")
RUN_STARTED
TOOL_CALL_START (name: "get_weather")
TOOL_CALL_ARGS / TOOL_CALL_END (result: "{...}")
TEXT_MESSAGE_START
TEXT_MESSAGE_CONTENT (delta: "The weather is...")
TEXT_MESSAGE_END
RUN_FINISHED (finishReason: "stop")
```
3. **Tool Usage:**
🧰 Tools
πŸͺ› markdownlint-cli2 (0.18.1)

294-294: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

πŸ€– Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 293 - 302, Add a language
specifier to the fenced code block that contains the protocol example starting
with RUN_STARTED / TOOL_CALL_START (name: "get_weather") so the block is marked
(e.g., ```text) instead of a plain fence; locate the fenced block that shows
RUN_STARTED, TOOL_CALL_START, TOOL_CALL_ARGS, TEXT_MESSAGE_START/CONTENT/END and
RUN_FINISHED and modify its opening backticks to include the language specifier.

Comment on lines 304 to 316
4. **Client Tool with Approval:**
```
-ToolCallStreamChunk (name: "send_email")
-ApprovalRequestedStreamChunk (toolName: "send_email")
+RUN_STARTED
+TOOL_CALL_START (name: "send_email")
+TOOL_CALL_ARGS / TOOL_CALL_END
+CUSTOM (name: "approval-requested")
 [User approves]
-ToolInputAvailableStreamChunk (toolName: "send_email")
 [Client executes]
-ToolResultStreamChunk (content: "{\"sent\":true}")
-ContentStreamChunk (delta: "Email sent successfully")
-DoneStreamChunk (finishReason: "stop")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "Email sent successfully")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```

⚠️ Potential issue | 🟑 Minor

Add language specifier to fenced code block.

🩹 Suggested fix
-```
+```text
    RUN_STARTED
    TOOL_CALL_START (name: "send_email")
    ...
🧰 Tools
πŸͺ› markdownlint-cli2 (0.18.1)

305-305: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

πŸ€– Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 304 - 316, The fenced code
block showing the client tool approval sequence (containing tokens like
RUN_STARTED, TOOL_CALL_START (name: "send_email"), CUSTOM (name:
"approval-requested"), and RUN_FINISHED) lacks a language specifier; update the
opening fence from ``` to ```text (or another appropriate language) in
docs/protocol/chunk-definitions.md so the block becomes a labeled plain-text
code fence to enable correct formatting and highlighting.

Comment on lines 321 to 332

```
-ToolCallStreamChunk (index: 0, name: "get_weather")
-ToolCallStreamChunk (index: 1, name: "get_time")
-ToolResultStreamChunk (toolCallId: "call_1")
-ToolResultStreamChunk (toolCallId: "call_2")
-ContentStreamChunk (delta: "Based on the data...")
-DoneStreamChunk (finishReason: "stop")
+RUN_STARTED
+TOOL_CALL_START (index: 0, name: "get_weather")
+TOOL_CALL_START (index: 1, name: "get_time")
+TOOL_CALL_END (toolCallId: "call_1", result: "...")
+TOOL_CALL_END (toolCallId: "call_2", result: "...")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "Based on the data...")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```

⚠️ Potential issue | 🟑 Minor

Add language specifier to fenced code block.

🩹 Suggested fix
-```
+```text
 RUN_STARTED
 TOOL_CALL_START (index: 0, name: "get_weather")
 ...
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```
ToolCallStreamChunk (index: 0, name: "get_weather")
ToolCallStreamChunk (index: 1, name: "get_time")
ToolResultStreamChunk (toolCallId: "call_1")
ToolResultStreamChunk (toolCallId: "call_2")
ContentStreamChunk (delta: "Based on the data...")
DoneStreamChunk (finishReason: "stop")
RUN_STARTED
TOOL_CALL_START (index: 0, name: "get_weather")
TOOL_CALL_START (index: 1, name: "get_time")
TOOL_CALL_END (toolCallId: "call_1", result: "...")
TOOL_CALL_END (toolCallId: "call_2", result: "...")
TEXT_MESSAGE_START
TEXT_MESSAGE_CONTENT (delta: "Based on the data...")
TEXT_MESSAGE_END
RUN_FINISHED (finishReason: "stop")
```
🧰 Tools
πŸͺ› markdownlint-cli2 (0.18.1)

322-322: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

πŸ€– Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 321 - 332, The fenced code
block showing the protocol example (containing tokens like RUN_STARTED,
TOOL_CALL_START, TOOL_CALL_END, TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT,
RUN_FINISHED) needs a language specifier for correct rendering; update the
opening fence from ``` to ```text so the block becomes ```text and preserves
formatting/monospace rendering for those entries.
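The AG-UI sequences quoted in these hunks can be consumed on the client with a simple switch over `event.type`. A minimal TypeScript sketch, where the event shapes are assumptions inferred from the names in the docs rather than the actual `@tanstack/ai` types:

```typescript
// Minimal AG-UI event union; field names here are illustrative assumptions.
type AGUIEvent =
  | { type: 'RUN_STARTED'; runId: string }
  | { type: 'TEXT_MESSAGE_START' }
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | { type: 'TEXT_MESSAGE_END' }
  | { type: 'RUN_FINISHED'; finishReason: string }

// Accumulate assistant text from a stream of events.
function reduceText(events: Array<AGUIEvent>): string {
  let text = ''
  for (const event of events) {
    switch (event.type) {
      case 'TEXT_MESSAGE_CONTENT':
        text += event.delta
        break
      default:
        break // lifecycle events carry no text
    }
  }
  return text
}

const events: Array<AGUIEvent> = [
  { type: 'RUN_STARTED', runId: 'run_1' },
  { type: 'TEXT_MESSAGE_START' },
  { type: 'TEXT_MESSAGE_CONTENT', delta: 'The weather ' },
  { type: 'TEXT_MESSAGE_CONTENT', delta: 'is sunny' },
  { type: 'TEXT_MESSAGE_END' },
  { type: 'RUN_FINISHED', finishReason: 'stop' },
]
console.log(reduceText(events)) // "The weather is sunny"
```

The discriminated union also makes exhaustive handling of new event types a compile-time concern rather than a runtime one.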

Comment on lines +146 to +154
class RunStartedEvent(TypedDict):
"""Emitted when a run starts. This is the first event in any streaming response."""

type: Literal["RUN_STARTED"]
runId: str
timestamp: int
model: Optional[str]
threadId: Optional[str]


⚠️ Potential issue | 🟑 Minor

RunStartedEvent has incorrect field requirements.

model and threadId should be optional (as per the TypeScript definition), but this TypedDict uses total=True which makes all fields required. Consider using total=False or explicitly marking required fields with Required[].

🩹 Suggested fix
-class RunStartedEvent(TypedDict):
+class RunStartedEvent(TypedDict, total=False):
     """Emitted when a run starts. This is the first event in any streaming response."""

-    type: Literal["RUN_STARTED"]
-    runId: str
-    timestamp: int
-    model: Optional[str]
-    threadId: Optional[str]
+    type: Required[Literal["RUN_STARTED"]]
+    runId: Required[str]
+    timestamp: Required[int]
+    model: Optional[str]
+    threadId: Optional[str]

Note: Requires from typing import Required (Python 3.11+) or from typing_extensions import Required.

πŸ€– Prompt for AI Agents
In `@packages/python/tanstack-ai/src/tanstack_ai/types.py` around lines 146 - 154,
The RunStartedEvent TypedDict currently makes all fields required; update the
RunStartedEvent definition so model and threadId are optional like the
TypeScript version β€” either convert the TypedDict to total=False (e.g., class
RunStartedEvent(TypedDict, total=False)) and then mark required fields
explicitly with Required[...] for type, runId, and timestamp, or keep total=True
but wrap only the required keys with Required[]; if using Required, add the
appropriate import (from typing import Required for Python 3.11+ or from
typing_extensions import Required).

Comment on lines +199 to +216
// AG-UI TEXT_MESSAGE_CONTENT event
if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
chunkData.delta = chunk.delta
chunkData.content = chunk.content
chunkData.role = chunk.role
const delta = chunk.delta || chunk.content || ''
chunkData.role = 'assistant'
const delta = chunk.delta || ''
fullResponse += delta

if (chunk.role === 'assistant') {
if (!assistantDraft) {
assistantDraft = {
role: 'assistant',
content: chunk.content || '',
toolCalls: [],
}
} else {
assistantDraft.content = (assistantDraft.content || '') + delta
if (!assistantDraft) {
assistantDraft = {
role: 'assistant',
content: chunk.content || '',
toolCalls: [],
}
} else {
assistantDraft.content = (assistantDraft.content || '') + delta
}
} else if (chunk.type === 'tool_call') {
const id = chunk.toolCall.id
const existing = toolCallMap.get(id) || {
id,
name: chunk.toolCall.function.name,
arguments: '',
}
existing.arguments += chunk.toolCall.function.arguments || ''
toolCallMap.set(id, existing)

chunkData.toolCall = chunk.toolCall
}

⚠️ Potential issue | 🟠 Major

Avoid dropping the first text delta when content is absent.
If chunk.content is undefined, the first chunk.delta never reaches assistantDraft.content because the draft is initialized with chunk.content and the delta is only appended on later chunks; fullResponse also ignores content when delta is missing. Initialize from a unified fallback.

πŸ› Proposed fix
-      const delta = chunk.delta || ''
+      const delta =
+        chunk.delta !== undefined
+          ? chunk.delta
+          : chunk.content !== undefined
+            ? chunk.content
+            : ''
       fullResponse += delta

       if (!assistantDraft) {
         assistantDraft = {
           role: 'assistant',
-          content: chunk.content || '',
+          content: delta,
           toolCalls: [],
         }
       } else {
         assistantDraft.content = (assistantDraft.content || '') + delta
       }
πŸ€– Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 199 -
216, The code drops the first text delta when chunk.content is undefined because
assistantDraft is initialized with chunk.content while delta is appended only on
later chunks; to fix, compute a unified fallback delta (e.g., let delta =
chunk.delta || chunk.content || '') and use that for both fullResponse and
assistantDraft initialization/update: add delta to fullResponse, set
assistantDraft.content = delta when creating the draft, and append delta to
assistantDraft.content for subsequent chunks; update references to
chunk.delta/chunk.content accordingly in the TEXT_MESSAGE_CONTENT handling block
(assistantDraft, fullResponse, chunkData).
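The failure mode is easy to reproduce in isolation. A hypothetical reduction of the draft logic (names mirror the review comment, not the real harness code):

```typescript
interface Chunk {
  delta?: string
  content?: string
}

// Buggy shape from the review: the draft is seeded from `content`,
// and `delta` is only appended on later chunks.
function buggy(chunks: Array<Chunk>): string {
  let draft: string | undefined
  for (const c of chunks) {
    const delta = c.delta || ''
    if (draft === undefined) draft = c.content || ''
    else draft += delta
  }
  return draft ?? ''
}

// Fixed: one unified fallback used for both seeding and appending.
function fixed(chunks: Array<Chunk>): string {
  let draft: string | undefined
  for (const c of chunks) {
    const delta = c.delta ?? c.content ?? ''
    if (draft === undefined) draft = delta
    else draft += delta
  }
  return draft ?? ''
}

const stream: Array<Chunk> = [{ delta: 'Hel' }, { delta: 'lo' }]
console.log(buggy(stream)) // "lo" — the first delta is dropped
console.log(fixed(stream)) // "Hello"
```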

Comment on lines +232 to +243
// AG-UI TOOL_CALL_ARGS event
else if (chunk.type === 'TOOL_CALL_ARGS') {
const id = chunk.toolCallId
const existing = toolCallsInProgress.get(id)
if (existing) {
existing.args = chunk.args || existing.args + (chunk.delta || '')
}
} else if (chunk.type === 'tool_result') {

chunkData.toolCallId = chunk.toolCallId
chunkData.content = chunk.content
toolResults.push({
toolCallId: chunk.toolCallId,
content: chunk.content,
})
reconstructedMessages.push({
role: 'tool',
toolCallId: chunk.toolCallId,
content: chunk.content,
chunkData.delta = chunk.delta
chunkData.args = chunk.args
}

⚠️ Potential issue | 🟑 Minor

Preserve explicit empty args in TOOL_CALL_ARGS.
Using || treats '' as missing and can keep stale args. Prefer an undefined check.

πŸ› Proposed fix
-      if (existing) {
-        existing.args = chunk.args || existing.args + (chunk.delta || '')
-      }
+      if (existing) {
+        if (chunk.args !== undefined) {
+          existing.args = chunk.args
+        } else if (chunk.delta !== undefined) {
+          existing.args = existing.args + chunk.delta
+        }
+      }
πŸ€– Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 232 -
243, The TOOL_CALL_ARGS branch uses "chunk.args || ..." which treats an empty
string as missing and preserves stale args; update the assignment in the block
that handles chunk.type === 'TOOL_CALL_ARGS' (referencing toolCallsInProgress,
existing.args, chunk.args, chunk.delta, and chunkData.*) to check for undefined
explicitlyβ€”set existing.args to chunk.args if chunk.args !== undefined,
otherwise append chunk.delta (using a nullish check for chunk.delta to allow
empty strings) so empty args are preserved and deltas still apply.
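The `||` pitfall with explicit empty strings, shown with toy types (not the harness's actual shapes):

```typescript
interface ArgsChunk {
  args?: string
  delta?: string
}

// `||` treats '' as missing, so an explicit reset to '' keeps stale args.
function applyWithOr(current: string, chunk: ArgsChunk): string {
  return chunk.args || current + (chunk.delta || '')
}

// Explicit undefined checks preserve '' as a real value.
function applyChecked(current: string, chunk: ArgsChunk): string {
  if (chunk.args !== undefined) return chunk.args
  if (chunk.delta !== undefined) return current + chunk.delta
  return current
}

console.log(applyWithOr('{"city":"SF"}', { args: '' }))  // '{"city":"SF"}' — stale
console.log(applyChecked('{"city":"SF"}', { args: '' })) // '' — reset honored
```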

Comment on lines +1 to +4
import { useState, useMemo } from 'react'
import { createFileRoute } from '@tanstack/react-router'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'

⚠️ Potential issue | 🟑 Minor

Fix linted import ordering.
Static analysis reports sort-imports errors for useMemo and fetchServerSentEvents.

πŸ”§ Suggested fix
-import { useState, useMemo } from 'react'
+import { useMemo, useState } from 'react'
 import { createFileRoute } from '@tanstack/react-router'
-import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
+import { fetchServerSentEvents, useChat } from '@tanstack/ai-react'
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import { useState, useMemo } from 'react'
import { createFileRoute } from '@tanstack/react-router'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'
import { useMemo, useState } from 'react'
import { createFileRoute } from '@tanstack/react-router'
import { fetchServerSentEvents, useChat } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'
🧰 Tools
πŸͺ› ESLint

[error] 1-1: Member 'useMemo' of the import declaration should be sorted alphabetically.

(sort-imports)


[error] 3-3: Member 'fetchServerSentEvents' of the import declaration should be sorted alphabetically.

(sort-imports)

πŸ€– Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/src/routes/index.tsx` around lines 1 - 4,
Reorder the named imports to satisfy the linter: alphabetize members within each
import so useMemo follows useState in the React import and fetchServerSentEvents
follows useChat in the `@tanstack/ai-react` import; keep the import groups intact
(React, `@tanstack/`* packages, types) and ensure the import lines reference the
same symbols (useState, useMemo, createFileRoute, useChat,
fetchServerSentEvents, UIMessage) so the sort-imports rule is satisfied.

Comment on lines +1 to +19
import { useMemo, useState } from 'react'
import { createFileRoute, useSearch } from '@tanstack/react-router'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'

type MockScenario =
| 'simple-text'
| 'tool-call'
| 'multi-tool'
| 'text-tool-text'
| 'error'

const VALID_SCENARIOS: MockScenario[] = [
'simple-text',
'tool-call',
'multi-tool',
'text-tool-text',
'error',
]

⚠️ Potential issue | 🟑 Minor

🧩 Analysis chain

🏁 Script executed:

# Find ESLint configuration files
find . -type f -name ".eslintrc*" -o -name "eslint.config.*" | head -20

Repository: TanStack/ai

Length of output: 676


🏁 Script executed:

# Check the project structure
ls -la packages/typescript/smoke-tests/

Repository: TanStack/ai

Length of output: 338


🏁 Script executed:

# Look for ESLint config in the workspace root
ls -la | grep -i eslint

Repository: TanStack/ai

Length of output: 123


🏁 Script executed:

# Read the root ESLint config
cat eslint.config.js

Repository: TanStack/ai

Length of output: 603


🏁 Script executed:

# Check if there's an ESLint config in smoke-tests
find packages/typescript/smoke-tests -name "eslint.config.*" -o -name ".eslintrc*"

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check the actual target file to confirm import order
cat packages/typescript/smoke-tests/e2e/src/routes/mock.tsx | head -20

Repository: TanStack/ai

Length of output: 515


🏁 Script executed:

# Check the `@tanstack/eslint-config` package
find node_modules -path "*/@tanstack/eslint-config*" -type f -name "*.js" -o -name "*.json" | head -20

Repository: TanStack/ai

Length of output: 2632


🏁 Script executed:

# Search for rules related to import/order or array types in the codebase
rg "import/order|array-type|prefer-array.*type" --type js --type json -i

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check if there's a package.json in smoke-tests for any special config
cat packages/typescript/smoke-tests/package.json

Repository: TanStack/ai

Length of output: 101


🏁 Script executed:

# Look for `@tanstack/eslint-config` in node_modules
ls -la node_modules/.pnpm | grep eslint-config | head -10

Repository: TanStack/ai

Length of output: 228


🏁 Script executed:

# Read the import rules configuration
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/import.js"

Repository: TanStack/ai

Length of output: 921


🏁 Script executed:

# Read the TypeScript rules configuration
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/typescript.js"

Repository: TanStack/ai

Length of output: 2906


🏁 Script executed:

# Also check the main index to see all included configs
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/index.js"

Repository: TanStack/ai

Length of output: 1945


🏁 Script executed:

# Check if alphabetize is mentioned anywhere in eslint configs
rg "alphabetize" node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/ 2>/dev/null || echo "No alphabetize found"

Repository: TanStack/ai

Length of output: 75


🏁 Script executed:

# Check if there are any other ESLint config overrides in the project root
grep -r "alphabetize\|import/order" eslint.config.js packages/*/eslint.config.js 2>/dev/null | head -20

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Verify the stylistic.js file in case it has import ordering rules
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/stylistic.js"

Repository: TanStack/ai

Length of output: 285


Change array type syntax to generic format.
The @typescript-eslint/array-type rule requires Array<MockScenario> instead of MockScenario[].

πŸ”§ Suggested fix
-const VALID_SCENARIOS: MockScenario[] = [
+const VALID_SCENARIOS: Array<MockScenario> = [
   'simple-text',
   'tool-call',
   'multi-tool',
   'text-tool-text',
   'error',
 ]
🧰 Tools
πŸͺ› ESLint

[error] 3-3: Member 'fetchServerSentEvents' of the import declaration should be sorted alphabetically.

(sort-imports)


[error] 13-13: Array type using 'MockScenario[]' is forbidden. Use 'Array' instead.

(@typescript-eslint/array-type)

πŸ€– Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/src/routes/mock.tsx` around lines 1 - 19,
Change the array type annotation from the bracket form to the generic form for
VALID_SCENARIOS to satisfy the lint rule: update the declaration that references
MockScenario so it uses Array<MockScenario> (e.g., change the type on the
VALID_SCENARIOS constant from MockScenario[] to Array<MockScenario>), leaving
the values unchanged and keeping the MockScenario type intact.

@@ -1,11 +1,95 @@
import { test, expect } from '@playwright/test'
import { test, expect, Page } from '@playwright/test'

⚠️ Potential issue | 🟑 Minor

🧩 Analysis chain

🏁 Script executed:

# Check if the file exists and its current state
cat -n packages/typescript/smoke-tests/e2e/tests/chat.spec.ts | head -20

Repository: TanStack/ai

Length of output: 764


🏁 Script executed:

# Look for ESLint configuration files
fd -e "eslintrc*" -e ".eslintignore" -o -name "eslint.config.*" -o -name "package.json" | head -20

Repository: TanStack/ai

Length of output: 285


🏁 Script executed:

# Check the root package.json for ESLint rules
head -100 package.json

Repository: TanStack/ai

Length of output: 3448


🏁 Script executed:

# Look for TypeScript and ESLint configuration patterns
find . -maxdepth 3 -type f \( -name ".eslintrc*" -o -name "eslint.config.*" -o -name "tsconfig.json" \) 2>/dev/null | head -20

Repository: TanStack/ai

Length of output: 313


🏁 Script executed:

cat eslint.config.js

Repository: TanStack/ai

Length of output: 603


🏁 Script executed:

# Check if 'test' is used in the file
grep -n "test\(" packages/typescript/smoke-tests/e2e/tests/chat.spec.ts | head -5

Repository: TanStack/ai

Length of output: 78


🏁 Script executed:

# Search for how Page is exported from `@playwright/test`
# First, let's check if there's a node_modules or type definitions we can inspect
find packages/typescript/smoke-tests -name "*.d.ts" -o -name "package.json" | grep playwright

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check if 'test' is used in the file (using rg instead of grep for better escaping)
rg "test\." packages/typescript/smoke-tests/e2e/tests/chat.spec.ts | head -10

Repository: TanStack/ai

Length of output: 354


🏁 Script executed:

# Look at the full file to see imports and test usage
cat packages/typescript/smoke-tests/e2e/tests/chat.spec.ts

Repository: TanStack/ai

Length of output: 10630


🏁 Script executed:

# Check `@tanstack/eslint-config` to understand the import ordering rules
# First find where it's located
find node_modules -name "eslint-config" -type d 2>/dev/null | grep tanstack | head -1

Repository: TanStack/ai

Length of output: 231


🏁 Script executed:

# Check the eslint-config to understand import ordering rules
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/dist/index.js" | head -100

Repository: TanStack/ai

Length of output: 277


🏁 Script executed:

# Look for import sorting rules in the eslint config
rg "import" "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config" -A 5 | head -50

Repository: TanStack/ai

Length of output: 9760


🏁 Script executed:

# Check if Page from `@playwright/test` is actually a type-only export
# We need to check the `@playwright/test` module definition
rg "export.*Page" node_modules/@playwright/test/index.d.ts 2>/dev/null | head -20

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Look for the `@playwright/test` type definitions
find node_modules/@playwright -name "index.d.ts" 2>/dev/null | head -5

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check if Page is exported as type-only
rg "type Page\|export.*Page" node_modules/@playwright/test 2>/dev/null | head -20

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Let's find the playwright package and check its structure
find node_modules -path "*/@playwright/test" -type d 2>/dev/null | head -3

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# List the files in the `@playwright/test` directory
ls -la node_modules/@playwright/test/ 2>/dev/null | head -20

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Try searching for Page in a different way
grep -r "export.*Page" node_modules/@playwright/test/ 2>/dev/null | head -10

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check the eslint-plugin-import-x configuration in the tanstack config
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/index.js" 2>/dev/null | head -150

Repository: TanStack/ai

Length of output: 1945


🏁 Script executed:

# Let's check the import-x plugin rules
rg "import-x" "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src" -A 10 | head -100

Repository: TanStack/ai

Length of output: 5117


🏁 Script executed:

# Look at all the source files in the eslint config
ls -la "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/"

Repository: TanStack/ai

Length of output: 588


🏁 Script executed:

# Get the full import.js file
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/import.js"

Repository: TanStack/ai

Length of output: 921


🏁 Script executed:

# Also check the typescript.js for consistent-type-imports rule
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/typescript.js" | grep -A 5 "consistent-type-imports"

Repository: TanStack/ai

Length of output: 271


🏁 Script executed:

# Now let's verify if the suggested fix is correct by checking the Playwright test documentation
web_search "playwright test Page type export"

Repository: TanStack/ai

Length of output: 105


🏁 Script executed:

# Let me verify the import ordering more carefully by checking if there's any custom eslint config override for this file
find packages/typescript/smoke-tests -name ".eslintrc*" -o -name "eslint.config.*"

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check if there's a .eslintignore
find packages/typescript/smoke-tests -name ".eslintignore"

Repository: TanStack/ai

Length of output: 37


Separate type-only import and order value imports alphabetically.
Page is used only as a type annotation and must be imported with import type. The import/order rule requires type imports to appear after value imports, and value imports must be sorted alphabetically.

πŸ”§ Suggested fix
-import { test, expect, Page } from '@playwright/test'
+import { expect, test } from '@playwright/test'
+import type { Page } from '@playwright/test'
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import { test, expect, Page } from '@playwright/test'
import { expect, test } from '@playwright/test'
import type { Page } from '@playwright/test'
🧰 Tools
πŸͺ› ESLint

[error] 1-1: Imports "Page" are only used as type.

(@typescript-eslint/consistent-type-imports)


[error] 1-1: Member 'expect' of the import declaration should be sorted alphabetically.

(sort-imports)

πŸ€– Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/tests/chat.spec.ts` at line 1, The import
groups are wrong: move the type-only Page into a separate type import and
alphabetize the value imports; change "import { test, expect, Page } from
'@playwright/test'" to value imports in alphabetical order (import { expect,
test } from '@playwright/test') and add a separate type import (import type {
Page } from '@playwright/test') so Page is a type-only import and value imports
are sorted.

@jherr jherr requested a review from a team January 29, 2026 01:05

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
testing/panel/src/routes/summarize.tsx (1)

72-105: Handle partial SSE chunks with buffering between reads.

The per-line JSON parse ignores incomplete SSE events; if a data: line is split across reads, that event is lost and the summary misses deltas. Buffer incomplete lines between reads:

πŸ”§ Buffering fix
         let accumulatedSummary = ''
+        let buffer = ''

-        const text = decoder.decode(value)
-        const lines = text.split('\n')
+        buffer += decoder.decode(value, { stream: true })
+        const lines = buffer.split('\n')
+        buffer = lines.pop() ?? ''

Note: StreamProcessor is designed for structured event objects from the AI SDK, not SSE-formatted text streams. Since this endpoint wraps events in SSE format on the server, buffering incomplete lines is the appropriate fix.
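The buffering pattern in isolation: carry the trailing partial line across reads and only parse complete `data:` lines. A sketch with a synchronous feed function standing in for `reader.read()` (it assumes `data:` lines carry JSON with a `delta` field, as in this endpoint):

```typescript
// Returns a function that accepts raw text fragments as they arrive.
function makeSSEParser(onDelta: (d: string) => void) {
  let buffer = ''
  return (fragment: string) => {
    buffer += fragment
    const lines = buffer.split('\n')
    buffer = lines.pop() ?? '' // keep the incomplete tail for the next read
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue
      const payload = JSON.parse(line.slice('data: '.length))
      if (typeof payload.delta === 'string') onDelta(payload.delta)
    }
  }
}

let summary = ''
const feed = makeSSEParser((d) => {
  summary += d
})
// An event split across two reads is not lost:
feed('data: {"delta":"Hel')
feed('lo"}\ndata: {"delta":" world"}\n')
console.log(summary) // "Hello world"
```

Without the `buffer = lines.pop() ?? ''` line, the first fragment above would be parsed as a complete line and the `JSON.parse` would fail or the delta would be dropped.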

πŸ€– Fix all issues with AI agents
In @.changeset/ag-ui-events.md:
- Around line 14-19: Update the changeset text to document all AG-UI protocol
events introduced in the PR by adding the missing event types and short
descriptions: include RUN_ERROR (error lifecycle event), STEP_STARTED and
STEP_FINISHED (step lifecycle events), STATE_SNAPSHOT and STATE_DELTA (state
events), and CUSTOM (custom events), alongside the already listed
RUN_STARTED/RUN_FINISHED, TEXT_MESSAGE_*, and TOOL_CALL_* entries; ensure each
event name (e.g., RUN_ERROR, STEP_STARTED, STATE_SNAPSHOT, CUSTOM) appears in
the list with a one-line description so the changelog fully reflects the
protocol surface.
- Around line 2-9: The changeset currently marks several packages
('@tanstack/ai', '@tanstack/ai-client', '@tanstack/ai-openai',
'@tanstack/ai-anthropic', '@tanstack/ai-gemini', '@tanstack/ai-grok',
'@tanstack/ai-ollama', '@tanstack/ai-openrouter') as "minor" but the diff and
content state that legacy chunk formats are no longer accepted (breaking API),
so update the changeset to mark each of those package entries as "major" instead
of "minor" and ensure the changeset message reflects the breaking change so the
release tooling will perform a major bump for these packages.
- Around line 2-9: The changeset erroneously includes '@tanstack/ai-client'
which lacks AG-UI event implementations; either remove '@tanstack/ai-client'
from the changeset or implement the missing AG-UI event handlers (RUN_STARTED,
TEXT_MESSAGE_START, TOOL_CALL_START, RUN_FINISHED, RUN_ERROR) inside the
'@tanstack/ai-client' adapter (add the corresponding event-emission or handling
code and tests to match the other adapters such as '@tanstack/ai-grok' and
'@tanstack/ai-openrouter'); ensure the adapter exports or wires the same event
constants/handlers so the release contains consistent AG-UI support.

In `@scripts/distribute-keys.ts`:
- Around line 30-34: Change explicit square-bracket array type annotations to
the project's preferred generic form: in function findExampleEnvFiles replace
the return type and local declaration annotations from string[] to Array<string>
(update the function signature and the envFiles variable). Do the same for the
other occurrence around lines 78-80 (replace any string[] annotations with
Array<string>) so the code conforms to `@typescript-eslint/array-type`.

In `@testing/panel/src/routes/api.structured.ts`:
- Around line 61-104: The code currently uses unsafe "as any" casts when passing
actualModel to adapter factories (see defaultModels, actualModel, and adapter
factory functions like anthropicText, openaiText, geminiText, grokText,
ollamaText, openRouterText and createChatOptions); remove those casts and
introduce a discriminated union mapping type (e.g., ModelByProvider) that maps
each Provider literal to its allowed model union, then narrow actualModel to
ModelByProvider[typeof provider] (or use a typed switch on provider) so each
branch passes a properly typed model to anthropicText/openaiText/etc. without
casting, preserving per-provider compile-time validation.

In `@testing/panel/src/routes/api.summarize.ts`:
- Around line 31-74: The adapters are currently passed actualModel using `as
any` which bypasses compile-time model validation; remove all `as any` casts in
the adapterConfig and instead narrow actualModel to a provider-specific model
type before calling each adapter factory (e.g., introduce a conditional type
like ValidSummarizeModel<P extends Provider> and compute const validModel =
actualModel as ValidSummarizeModel<typeof provider> or implement a
discriminated-union mapping of provider→model), then pass that typed
`validModel` into createSummarizeOptions calls (for functions such as
openaiSummarize, anthropicSummarize, geminiSummarize, grokSummarize,
ollamaSummarize, openRouterSummarize) so TypeScript enforces per-provider model
correctness at compile time.
🧹 Nitpick comments (4)
scripts/distribute-keys.ts (1)

17-25: Rename STATIC_ENV_LOCAL_PATHS to camelCase.

The constant name violates the camelCase naming rule for variables.

♻️ Proposed change
-const STATIC_ENV_LOCAL_PATHS = [
+const staticEnvLocalPaths = [
   'testing/panel/.env.local',
   'packages/typescript/smoke-tests/e2e/.env.local',
   'packages/typescript/smoke-tests/adapters/.env.local',
   'packages/typescript/ai-code-mode/.env.local',
   'packages/typescript/ai-anthropic/live-tests/.env.local',
   'packages/typescript/ai-openai/live-tests/.env.local',
 ]
@@
-  const allEnvPaths = [...STATIC_ENV_LOCAL_PATHS, ...exampleEnvFiles]
+  const allEnvPaths = [...staticEnvLocalPaths, ...exampleEnvFiles]

As per coding guidelines, **/*.{ts,tsx,js,jsx}: Use camelCase for function and variable names throughout the codebase.

Also applies to: 117-117

testing/panel/src/routes/api.chat.ts (2)

56-62: Consider reusing the shared Provider type to avoid drift.
Importing Provider from testing/panel/src/lib/model-selection.ts keeps UI options and API routing in sync.


195-198: Avoid as any for OpenRouter model selection.
Use a provider-specific model union (or model-meta mapping) so openRouterText remains type-safe.

As per coding guidelines: Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety.

testing/panel/src/routes/api.summarize.ts (1)

5-8: This import pattern aligns with the current package exportsβ€”while the tree-shakeable /adapters subpath pattern is specified in architectural guidelines, it has not yet been implemented in the provider packages' package.json exports. The root-level imports (from '@tanstack/ai-grok', etc.) are currently the only available approach. Consider adding /adapters subpath exports to each provider package's package.json (similar to the core @tanstack/ai package) as a separate refactoring effort to enable tree-shaking benefits.

Comment on lines +2 to +9
'@tanstack/ai': minor
'@tanstack/ai-client': minor
'@tanstack/ai-openai': minor
'@tanstack/ai-anthropic': minor
'@tanstack/ai-gemini': minor
'@tanstack/ai-grok': minor
'@tanstack/ai-ollama': minor
'@tanstack/ai-openrouter': minor

⚠️ Potential issue | πŸ”΄ Critical

BREAKING: This must be a major version bump, not minor.

The change explicitly states "previous legacy chunk formats are no longer accepted" (line 20), which is a breaking API change. Per semantic versioning, breaking changes require a major version bump, not minor.

πŸ“‹ Proposed fix
 ---
-'@tanstack/ai': minor
-'@tanstack/ai-client': minor
-'@tanstack/ai-openai': minor
-'@tanstack/ai-anthropic': minor
-'@tanstack/ai-gemini': minor
-'@tanstack/ai-grok': minor
-'@tanstack/ai-ollama': minor
-'@tanstack/ai-openrouter': minor
+'@tanstack/ai': major
+'@tanstack/ai-client': major
+'@tanstack/ai-openai': major
+'@tanstack/ai-anthropic': major
+'@tanstack/ai-gemini': major
+'@tanstack/ai-grok': major
+'@tanstack/ai-ollama': major
+'@tanstack/ai-openrouter': major
 ---
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
'@tanstack/ai': minor
'@tanstack/ai-client': minor
'@tanstack/ai-openai': minor
'@tanstack/ai-anthropic': minor
'@tanstack/ai-gemini': minor
'@tanstack/ai-grok': minor
'@tanstack/ai-ollama': minor
'@tanstack/ai-openrouter': minor
---
'@tanstack/ai': major
'@tanstack/ai-client': major
'@tanstack/ai-openai': major
'@tanstack/ai-anthropic': major
'@tanstack/ai-gemini': major
'@tanstack/ai-grok': major
'@tanstack/ai-ollama': major
'@tanstack/ai-openrouter': major
πŸ€– Prompt for AI Agents
In @.changeset/ag-ui-events.md around lines 2 - 9, The changeset currently marks
several packages ('@tanstack/ai', '@tanstack/ai-client', '@tanstack/ai-openai',
'@tanstack/ai-anthropic', '@tanstack/ai-gemini', '@tanstack/ai-grok',
'@tanstack/ai-ollama', '@tanstack/ai-openrouter') as "minor" but the diff and
content state that legacy chunk formats are no longer accepted (breaking API),
so update the changeset to mark each of those package entries as "major" instead
of "minor" and ensure the changeset message reflects the breaking change so the
release tooling will perform a major bump for these packages.

⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's understand the repository structure
find . -maxdepth 3 -type d -name "packages" | head -5
ls -la packages/typescript/ 2>/dev/null | grep "ai" || echo "Checking alternate structure..."

Repository: TanStack/ai

Length of output: 1433


🏁 Script executed:

#!/bin/bash
# Description: Verify AG-UI event implementation in all listed adapters.

# Search for AG-UI event type usage in each adapter package
for pkg in ai ai-client ai-openai ai-anthropic ai-gemini ai-grok ai-ollama ai-openrouter; do
  echo "=== Checking @tanstack/$pkg ==="
  
  # Look for AG-UI event emissions (RUN_STARTED, TEXT_MESSAGE_START, etc.)
  rg -n --type=ts -C2 'RUN_STARTED|TEXT_MESSAGE_START|TOOL_CALL_START|RUN_FINISHED|RUN_ERROR' \
    "packages/typescript/$pkg/src/" 2>/dev/null || echo "  ⚠️  No AG-UI events found"
  
  echo ""
done

Repository: TanStack/ai

Length of output: 47126


Remove @tanstack/ai-client from the changeset or add AG-UI event implementations.

Verification found that @tanstack/ai-client does not contain AG-UI event implementations (RUN_STARTED, TEXT_MESSAGE_START, TOOL_CALL_START, RUN_FINISHED, RUN_ERROR), unlike the other 7 packages in the changeset. Either this package should be removed from the changeset, or the missing AG-UI event handling should be implemented before release.

The initially questioned packages (@tanstack/ai-grok and @tanstack/ai-openrouter) do have AG-UI event implementations and can remain in the changeset.

πŸ€– Prompt for AI Agents
In @.changeset/ag-ui-events.md around lines 2 - 9, The changeset erroneously
includes '@tanstack/ai-client' which lacks AG-UI event implementations; either
remove '@tanstack/ai-client' from the changeset or implement the missing AG-UI
event handlers (RUN_STARTED, TEXT_MESSAGE_START, TOOL_CALL_START, RUN_FINISHED,
RUN_ERROR) inside the '@tanstack/ai-client' adapter (add the corresponding
event-emission or handling code and tests to match the other adapters such as
'@tanstack/ai-grok' and '@tanstack/ai-openrouter'); ensure the adapter exports
or wires the same event constants/handlers so the release contains consistent
AG-UI support.

Comment on lines +14 to +19
All text adapters now emit AG-UI protocol events only:

- `RUN_STARTED` / `RUN_FINISHED` - Run lifecycle events
- `TEXT_MESSAGE_START` / `TEXT_MESSAGE_CONTENT` / `TEXT_MESSAGE_END` - Text message streaming
- `TOOL_CALL_START` / `TOOL_CALL_ARGS` / `TOOL_CALL_END` - Tool call streaming


⚠️ Potential issue | 🟠 Major

Document all AG-UI event types introduced in this PR.

The changeset omits several event types mentioned in the broader PR changes. Based on the AI summary, the following event types are also part of the AG-UI protocol but are not documented here:

  • RUN_ERROR β€” error lifecycle event
  • STEP_STARTED / STEP_FINISHED β€” step lifecycle events
  • STATE_SNAPSHOT / STATE_DELTA β€” state events
  • CUSTOM β€” custom events

Include all introduced event types in the changeset description to provide a complete changelog for users.
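As an illustration, the event names listed above could be modeled as a discriminated union. This is a hypothetical sketch for the changelog discussion; the field names are assumptions, and the real type definitions live in `@tanstack/ai`:

```typescript
// Hypothetical sketch of the AG-UI event surface as a discriminated union.
// Event names mirror the changeset; payload fields are illustrative.
type AGUIEvent =
  | { type: 'RUN_STARTED'; runId: string }
  | { type: 'RUN_FINISHED'; runId: string }
  | { type: 'RUN_ERROR'; runId: string; message: string }
  | { type: 'TEXT_MESSAGE_START'; messageId: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string }
  | { type: 'TEXT_MESSAGE_END'; messageId: string }
  | { type: 'TOOL_CALL_START'; toolCallId: string; toolName: string }
  | { type: 'TOOL_CALL_ARGS'; toolCallId: string; argsDelta: string }
  | { type: 'TOOL_CALL_END'; toolCallId: string }

// Switching on the `type` discriminant narrows the payload in each branch.
function describe(event: AGUIEvent): string {
  switch (event.type) {
    case 'TEXT_MESSAGE_CONTENT':
      return `text delta: ${event.delta}`
    case 'TOOL_CALL_START':
      return `tool call: ${event.toolName}`
    default:
      return event.type
  }
}
```

A consumer handling the stream only needs to branch on `type`, which is why an exhaustive changelog of event names matters for downstream users.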

πŸ€– Prompt for AI Agents
In @.changeset/ag-ui-events.md around lines 14 - 19, Update the changeset text
to document all AG-UI protocol events introduced in the PR by adding the missing
event types and short descriptions: include RUN_ERROR (error lifecycle event),
STEP_STARTED and STEP_FINISHED (step lifecycle events), STATE_SNAPSHOT and
STATE_DELTA (state events), and CUSTOM (custom events), alongside the already
listed RUN_STARTED/RUN_FINISHED, TEXT_MESSAGE_*, and TOOL_CALL_* entries; ensure
each event name (e.g., RUN_ERROR, STEP_STARTED, STATE_SNAPSHOT, CUSTOM) appears
in the list with a one-line description so the changelog fully reflects the
protocol surface.

Comment on lines +30 to +34
function findExampleEnvFiles(projectRoot: string): string[] {
  const examplesDir = path.join(projectRoot, 'examples')
  if (!fs.existsSync(examplesDir)) return []

  const envFiles: string[] = []

⚠️ Potential issue | 🟠 Major

Fix ESLint array-type violations.

The explicit string[] annotations violate @typescript-eslint/array-type and will fail lint.

🧹 Proposed fix
-function findExampleEnvFiles(projectRoot: string): string[] {
+function findExampleEnvFiles(projectRoot: string): Array<string> {
@@
-  const envFiles: string[] = []
+  const envFiles: Array<string> = []
@@
-function serializeEnvFile(entries: Map<string, string>): string {
-  const lines: string[] = []
+function serializeEnvFile(entries: Map<string, string>): string {
+  const lines: Array<string> = []

Also applies to: 78-80

🧰 Tools
πŸͺ› ESLint

[error] 30-30: Array type using 'string[]' is forbidden. Use 'Array' instead.

(@typescript-eslint/array-type)


[error] 34-34: Array type using 'string[]' is forbidden. Use 'Array' instead.

(@typescript-eslint/array-type)

πŸ€– Prompt for AI Agents
In `@scripts/distribute-keys.ts` around lines 30 - 34, Change explicit
square-bracket array type annotations to the project's preferred generic form:
in function findExampleEnvFiles replace the return type and local declaration
annotations from string[] to Array<string> (update the function signature and
the envFiles variable). Do the same for the other occurrence around lines 78-80
(replace any string[] annotations with Array<string>) so the code conforms to
`@typescript-eslint/array-type`.

Comment on lines +61 to 104
  // Don't set a global default - let each adapter use its own default model
  const model: string | undefined = data.model || body.model

  try {
    // Default models per provider
    const defaultModels: Record<Provider, string> = {
      anthropic: 'claude-sonnet-4-5',
      gemini: 'gemini-2.0-flash',
      grok: 'grok-3-mini',
      ollama: 'mistral:7b',
      openai: 'gpt-4o',
      openrouter: 'openai/gpt-4o',
    }

    // Determine the actual model being used
    const actualModel = model || defaultModels[provider]

    // Pre-define typed adapter configurations with full type inference
    // Model is passed to the adapter factory function for type-safe autocomplete
    const adapterConfig = {
      anthropic: () =>
        createChatOptions({
-         adapter: anthropicText(
-           (model || 'claude-sonnet-4-5-20250929') as any,
-         ),
+         adapter: anthropicText(actualModel as any),
        }),
      gemini: () =>
        createChatOptions({
-         adapter: geminiText((model || 'gemini-2.0-flash-exp') as any),
+         adapter: geminiText(actualModel as any),
        }),
      grok: () =>
        createChatOptions({
          adapter: grokText(actualModel as any),
        }),
      ollama: () =>
        createChatOptions({
-         adapter: ollamaText((model || 'mistral:7b') as any),
+         adapter: ollamaText(actualModel as any),
        }),
      openai: () =>
        createChatOptions({
-         adapter: openaiText((model || 'gpt-4o') as any),
+         adapter: openaiText(actualModel as any),
        }),
      openrouter: () =>
        createChatOptions({
          adapter: openRouterText(actualModel as any),
        }),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# First, run the provided search for adapter exports
echo "=== Searching for adapter exports ==="
rg -nP "export (const|function) (anthropicText|geminiText|grokText|openaiText|openRouterText|ollamaText)" -g '*.ts'

Repository: TanStack/ai

Length of output: 859


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Search for files that might contain adapter implementations
echo "=== Finding adapter-related files ==="
fd -e ts -e tsx | grep -E "(adapter|anthropic|gemini|grok|openai|ollama|openrouter)" | head -20

Repository: TanStack/ai

Length of output: 1263


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look for the specific file mentioned in the review
echo "=== Checking the reviewed file ==="
cat -n testing/panel/src/routes/api.structured.ts | sed -n '61,104p'

Repository: TanStack/ai

Length of output: 2033


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Search for model-meta files
echo "=== Searching for model metadata files ==="
fd -e ts "model-meta" || rg "model-meta" -g '*.ts' | head -5

Repository: TanStack/ai

Length of output: 4665


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the anthropic adapter implementation
echo "=== Anthropic adapter (first 50 lines) ==="
head -50 packages/typescript/ai-anthropic/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 1381


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check what ANTHROPIC_MODELS constant is
echo "=== Finding ANTHROPIC_MODELS ==="
rg "ANTHROPIC_MODELS" -g '*.ts' -B 2 -A 2 | head -30

Repository: TanStack/ai

Length of output: 2117


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the openai adapter for comparison
echo "=== Finding OPENAI_CHAT_MODELS ==="
rg "OPENAI_CHAT_MODELS" -g '*.ts' -B 2 -A 2 | head -30

Repository: TanStack/ai

Length of output: 2069


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look at the anthropic model-meta file
echo "=== Anthropic model metadata ==="
cat packages/typescript/ai-anthropic/src/model-meta.ts | head -80

Repository: TanStack/ai

Length of output: 2003


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the full anthropic text adapter export signature
echo "=== Anthropic text adapter export (around line 714) ==="
sed -n '710,730p' packages/typescript/ai-anthropic/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 590


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check how defaultModels are constrained
echo "=== Looking for type patterns in testing panel ==="
head -100 testing/panel/src/routes/api.structured.ts | tail -50

Repository: TanStack/ai

Length of output: 1986


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if ANTHROPIC_MODELS, OPENAI_CHAT_MODELS are exported
echo "=== Checking what's exported from anthropic model-meta ==="
rg "^export" packages/typescript/ai-anthropic/src/model-meta.ts

Repository: TanStack/ai

Length of output: 390


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look for any type narrowing patterns in the codebase
echo "=== Searching for provider-based type narrowing examples ==="
rg "provider.*as.*const\|match.*provider\|switch.*provider" -g '*.ts' -A 3 | head -40

Repository: TanStack/ai

Length of output: 115


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look for patterns showing how to properly type models per provider
echo "=== Checking other adapter usage patterns ==="
rg "anthropicText|openaiText|geminiText" -g '*.ts' -B 2 -A 2 | grep -v "^--$" | head -60

Repository: TanStack/ai

Length of output: 4871


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if there's a Provider type definition and how it's used
echo "=== Looking for Provider type definition ==="
rg "type Provider|enum Provider" -g '*.ts' -A 10 | head -40

Repository: TanStack/ai

Length of output: 2534


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# See if Parameters helper pattern is used elsewhere
echo "=== Checking if Parameters type extraction is used elsewhere ==="
rg "Parameters<typeof" -g '*.ts' | head -20

Repository: TanStack/ai

Length of output: 213


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the full context of the api.structured.ts to see how adapterConfig is used
echo "=== Full api.structured.ts file ==="
wc -l testing/panel/src/routes/api.structured.ts

Repository: TanStack/ai

Length of output: 137


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the full api.structured.ts file to see context around the problematic code
echo "=== Full context of api.structured.ts ==="
cat -n testing/panel/src/routes/api.structured.ts

Repository: TanStack/ai

Length of output: 7936


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check what ANTHROPIC_MODELS, OPENAI_CHAT_MODELS etc. contain
echo "=== ANTHROPIC_MODELS content ==="
sed -n '/^export const ANTHROPIC_MODELS/,/^]/p' packages/typescript/ai-anthropic/src/model-meta.ts | head -20

Repository: TanStack/ai

Length of output: 333


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check a similar file that might have better type handling
echo "=== api.chat.ts for comparison ==="
cat testing/panel/src/routes/api.chat.ts

Repository: TanStack/ai

Length of output: 9180


Remove as any casts to preserve per-model type safety.

The casts bypass compile-time model validation and allow passing models intended for another provider. Extract per-provider model types using a discriminated union pattern: create a type that narrows actualModel based on the provider literal type. For example:

type ModelByProvider = {
  anthropic: typeof ANTHROPIC_MODELS[number]
  openai: typeof OPENAI_CHAT_MODELS[number]
  gemini: typeof GEMINI_MODELS[number]
  grok: typeof GROK_CHAT_MODELS[number]
  ollama: string
  openrouter: OpenRouterTextModels
}

const actualModel = (model || defaultModels[provider]) as ModelByProvider[typeof provider]

Then pass actualModel directly without casting:

adapter: anthropicText(actualModel)  // type-safe when provider === 'anthropic'

This ensures the TypeScript compiler validates that each adapter receives only valid models for its provider.
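The narrowing pattern the review suggests can be sketched end to end as follows. The model lists here are placeholders, not the packages' real model unions, and `pickAdapter` stands in for the real adapter factories:

```typescript
// Illustrative sketch of the ModelByProvider narrowing pattern.
// Model lists are placeholders for the real per-provider unions.
const ANTHROPIC_MODELS = ['claude-sonnet-4-5'] as const
const OPENAI_MODELS = ['gpt-4o'] as const

type ModelByProvider = {
  anthropic: (typeof ANTHROPIC_MODELS)[number]
  openai: (typeof OPENAI_MODELS)[number]
}
type Provider = keyof ModelByProvider

// Because `model` is typed as ModelByProvider[P], a call like
// pickAdapter('openai', 'claude-sonnet-4-5') is a compile-time error --
// no `as any` is needed in any branch.
function pickAdapter<P extends Provider>(
  provider: P,
  model: ModelByProvider[P],
): string {
  switch (provider) {
    case 'anthropic':
      return `anthropicText(${model})`
    default:
      return `openaiText(${model})`
  }
}

console.log(pickAdapter('anthropic', 'claude-sonnet-4-5'))
```

The same shape extends to the other providers by adding their entries to `ModelByProvider`, keeping per-provider model validation at compile time.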

πŸ€– Prompt for AI Agents
In `@testing/panel/src/routes/api.structured.ts` around lines 61 - 104, The code
currently uses unsafe "as any" casts when passing actualModel to adapter
factories (see defaultModels, actualModel, and adapter factory functions like
anthropicText, openaiText, geminiText, grokText, ollamaText, openRouterText and
createChatOptions); remove those casts and introduce a discriminated union
mapping type (e.g., ModelByProvider) that maps each Provider literal to its
allowed model union, then narrow actualModel to ModelByProvider[typeof provider]
(or use a typed switch on provider) so each branch passes a properly typed model
to anthropicText/openaiText/etc. without casting, preserving per-provider
compile-time validation.

Comment on lines +683 to +687
const { toolCallId, toolName, input, approval } = chunk.data as {
  toolCallId: string
  toolName: string
  input: any
  approval: { id: string; needsApproval: boolean }

can we somehow type this properly instead of casting it?

)
// Handle client tool input availability - trigger client-side execution
if (chunk.name === 'tool-input-available' && chunk.data) {
  const { toolCallId, toolName, input } = chunk.data as {

can we somehow type this properly instead of casting it?
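One possible answer to this question is a runtime type guard that both validates and narrows `chunk.data`, replacing the unchecked cast. The field names below mirror the snippet above and are assumptions about the actual payload shape:

```typescript
// A type guard validates the payload at runtime and narrows it for the
// compiler, avoiding an unchecked `as` cast. Field names are assumed from
// the snippet above; only the required string fields are checked here.
interface ToolInputAvailable {
  toolCallId: string
  toolName: string
  input: unknown
}

function isToolInputAvailable(data: unknown): data is ToolInputAvailable {
  if (typeof data !== 'object' || data === null) return false
  const d = data as Record<string, unknown>
  return typeof d.toolCallId === 'string' && typeof d.toolName === 'string'
}

const payload: unknown = { toolCallId: 'tc_1', toolName: 'search', input: {} }
if (isToolInputAvailable(payload)) {
  // payload is now typed without an unchecked cast
  console.log(payload.toolName) // prints "search"
}
```

The trade-off versus a cast is a small runtime check, but malformed chunk data is rejected instead of silently producing `undefined` fields downstream.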

@AlemTuzlak AlemTuzlak left a comment


Have some questions but all in all LGTM

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

πŸ€– Fix all issues with AI agents
In `@packages/typescript/ai-vue/src/use-chat.ts`:
- Around line 41-48: The watcher currently only reacts to reference changes of
options.body; update the watch invocation that observes () => options.body so it
deep-watches nested mutations and handles refs by using unref (i.e., pass {
deep: true } as the options and unref the body in the getter), then call
client.updateOptions({ body: newBody }) as before so nested property updates on
options.body trigger the watcher; locate the watch surrounding options.body and
client.updateOptions to apply this change.

Comment on lines +41 to +48
// Sync body changes to the client
// This allows dynamic body values (like model selection) to be updated without recreating the client
watch(
  () => options.body,
  (newBody) => {
    client.updateOptions({ body: newBody })
  },
)

⚠️ Potential issue | 🟑 Minor

🧩 Analysis chain

🌐 Web query:

Vue 3 watch reactive object nested mutations deep watch behavior

πŸ’‘ Result:

  • watch() is shallow by default: it only re-runs when the watched source value is replaced (reference change). Nested mutations (e.g., obj.nested.x++) won’t trigger unless you use a deep watcher. [1]

    watch(() => state.obj, () => {}, { deep: true })
  • Deep watchers (deep: true) traverse nested properties so the callback fires on nested mutations. A key gotcha: on nested mutations, newValue === oldValue is typically true because the object wasn’t replacedβ€”both parameters point to the same reactive object. [1]

  • Watching a reactive() object directly is implicitly deep (you don’t need { deep: true } in that case): [1]

    const obj = reactive({ nested: { count: 0 } })
    
    watch(obj, (newVal, oldVal) => {
      // fires on obj.nested.count++
      // newVal and oldVal are the same reference on nested mutations
    })
    
    obj.nested.count++
  • Vue 3.5+ supports deep as a max traversal depth number (e.g., deep: 1 for β€œshallow-ish” deep watch). [2][3]

  • Arrays in Vue 3: watching an array triggers only when the array is replaced, not mutated (e.g., push/splice), unless you specify deep. Vue 3.5+ recommends deep: 1 for array replacement + mutation behavior. [2]

Sources: Vue Watchers guide (deep watchers + implicit deep on reactive objects) [1], Vue 3 Migration Guide (array watch behavior + deep: 1 in 3.5+) [2], Vue reactivity API docs (deep option, including numeric depth in 3.5+) [3].


🏁 Script executed:

cat -n packages/typescript/ai-vue/src/use-chat.ts | head -60

Repository: TanStack/ai

Length of output: 2620


🏁 Script executed:

cat -n packages/typescript/ai-vue/src/types.ts

Repository: TanStack/ai

Length of output: 3423


🏁 Script executed:

cd packages/typescript/ai-client && grep -A 20 "export.*ChatRequestBody\|type.*ChatRequestBody" src/types.ts | head -40

Repository: TanStack/ai

Length of output: 651


Watch won't detect nested mutations on the body object.

watch(() => options.body, ...) only reacts when the body reference changes. Nested mutations (e.g., options.body.data['key'] = value) won't trigger the watcher. Add { deep: true } and unref() to detect nested property changes:

-import { onScopeDispose, readonly, shallowRef, useId, watch } from 'vue'
+import { onScopeDispose, readonly, shallowRef, useId, watch, unref } from 'vue'
@@
 watch(
-  () => options.body,
+  () => unref(options.body),
   (newBody) => {
     client.updateOptions({ body: newBody })
   },
+  { deep: true },
 )
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Sync body changes to the client
// This allows dynamic body values (like model selection) to be updated without recreating the client
watch(
() => options.body,
(newBody) => {
client.updateOptions({ body: newBody })
},
)
// Sync body changes to the client
// This allows dynamic body values (like model selection) to be updated without recreating the client
watch(
() => unref(options.body),
(newBody) => {
client.updateOptions({ body: newBody })
},
{ deep: true },
)
πŸ€– Prompt for AI Agents
In `@packages/typescript/ai-vue/src/use-chat.ts` around lines 41 - 48, The watcher
currently only reacts to reference changes of options.body; update the watch
invocation that observes () => options.body so it deep-watches nested mutations
and handles refs by using unref (i.e., pass { deep: true } as the options and
unref the body in the getter), then call client.updateOptions({ body: newBody })
as before so nested property updates on options.body trigger the watcher; locate
the watch surrounding options.body and client.updateOptions to apply this
change.

@jherr jherr merged commit e52135f into main Jan 29, 2026
8 of 9 checks passed
@jherr jherr deleted the feat/ag-ui-events branch January 29, 2026 21:50
@github-actions github-actions bot mentioned this pull request Jan 29, 2026