AG-UI Support #244
Conversation
Walkthrough

Streaming and adapters were migrated to the AG-UI Protocol: streams now emit RUN_STARTED, RUN_FINISHED, RUN_ERROR, TEXT_MESSAGE_*, TOOL_CALL_*, STEP_*, and state/custom events with run/message/step IDs. Types, converters, SSE formatters, adapters, processors, tests, examples, and tooling were updated, and the legacy chunk formats were removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Adapter as Text Adapter
    participant Stream as Event Stream
    participant Processor as Chat Processor
    participant ToolExec as Tool Executor
    Client->>Adapter: start streaming request
    Adapter->>Stream: RUN_STARTED(runId, model, timestamp)
    Stream->>Processor: RUN_STARTED
    Processor->>Processor: init run state
    Adapter->>Stream: TEXT_MESSAGE_START(messageId, runId)
    Stream->>Processor: TEXT_MESSAGE_START
    loop text chunks
        Adapter->>Stream: TEXT_MESSAGE_CONTENT(delta, content?, messageId)
        Stream->>Processor: TEXT_MESSAGE_CONTENT
        Processor->>Processor: accumulate content
    end
    alt tool call
        Adapter->>Stream: TOOL_CALL_START(toolCallId, toolName, runId)
        Stream->>Processor: TOOL_CALL_START
        loop arg chunks
            Adapter->>Stream: TOOL_CALL_ARGS(delta, args?, toolCallId)
            Stream->>Processor: TOOL_CALL_ARGS
            Processor->>Processor: accumulate args
        end
        Adapter->>Stream: TOOL_CALL_END(toolCallId, inputParsed)
        Stream->>Processor: TOOL_CALL_END
        Processor->>ToolExec: execute tool(inputParsed)
        ToolExec-->>Processor: tool result
    end
    Adapter->>Stream: TEXT_MESSAGE_END(messageId, runId)
    Stream->>Processor: TEXT_MESSAGE_END
    alt success
        Adapter->>Stream: RUN_FINISHED(runId, finishReason, usage)
        Stream->>Processor: RUN_FINISHED
        Processor->>Client: finalize response
    else error
        Adapter->>Stream: RUN_ERROR(runId, error{code,message}, timestamp)
        Stream->>Processor: RUN_ERROR
        Processor->>Client: emit error
    end
```
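The lifecycle above can be exercised with a small consumer sketch. The event shapes below are trimmed illustrations of the names in the diagram, not the package's actual exported types:

```typescript
// Illustrative, minimal AG-UI event shapes (not the real @tanstack/ai types).
type AGUIEvent =
  | { type: 'RUN_STARTED'; runId: string }
  | { type: 'TEXT_MESSAGE_START'; messageId: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta?: string; content?: string }
  | { type: 'TEXT_MESSAGE_END'; messageId: string }
  | { type: 'RUN_FINISHED'; runId: string; finishReason?: string }
  | { type: 'RUN_ERROR'; runId: string; error: { message: string; code?: string } }

// Accumulate assistant text, preferring delta and falling back to content
// (the same fallback concern raised in the review comments below).
function accumulateText(events: AGUIEvent[]): { text: string; finished: boolean } {
  let text = ''
  let finished = false
  for (const ev of events) {
    if (ev.type === 'TEXT_MESSAGE_CONTENT') {
      text += ev.delta ?? ev.content ?? ''
    } else if (ev.type === 'RUN_FINISHED' || ev.type === 'RUN_ERROR') {
      finished = true
    }
  }
  return { text, finished }
}
```

A consumer built this way stays correct whether an adapter streams `delta`-only chunks or occasionally emits `content`-only chunks.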
Estimated code review effort: 4 (Complex), ~60 minutes.
Pre-merge checks: 3 failed checks (2 warnings, 1 inconclusive).
View the CI Pipeline Execution for commit 2a7de26 (Nx Cloud):
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 8
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
Outside diff range comments (8)
packages/typescript/smoke-tests/adapters/src/harness.ts (1)
199-236: Fall back to content when the delta is missing.

TEXT_MESSAGE_CONTENT is aggregated using delta only. If an adapter emits content-only chunks, fullResponse and the draft will miss text.

Proposed fix:
```diff
-      const delta = chunk.delta || ''
+      const delta = chunk.delta ?? chunk.content ?? ''
       fullResponse += delta

       if (!assistantDraft) {
         assistantDraft = {
           role: 'assistant',
-          content: chunk.content || '',
+          content: chunk.content ?? delta ?? '',
           toolCalls: [],
         }
       } else {
         assistantDraft.content = (assistantDraft.content || '') + delta
       }
```

packages/typescript/ai-openai/src/adapters/text.ts (1)
626-645: Ensure RUN_STARTED precedes RUN_ERROR on early stream failure.

If the iterator throws before the first chunk, the catch block emits RUN_ERROR without a prior RUN_STARTED, breaking lifecycle ordering for consumers.

Proposed fix:
```diff
     } catch (error: unknown) {
       const err = error as Error & { code?: string }
       console.log(
         '[OpenAI Adapter] Stream ended with error. Event type summary:',
         {
           totalChunks: chunkCount,
           error: err.message,
         },
       )
+      if (!hasEmittedRunStarted) {
+        hasEmittedRunStarted = true
+        yield {
+          type: 'RUN_STARTED',
+          runId,
+          model: options.model,
+          timestamp,
+        }
+      }
       yield {
         type: 'RUN_ERROR',
         runId,
         model: options.model,
         timestamp,
         error: {
           message: err.message || 'Unknown error occurred',
           code: err.code,
         },
       }
     }
```

packages/typescript/ai-gemini/src/adapters/text.ts (2)
117-128: RUN_ERROR event missing the runId field.

The error handler in chatStream emits a RUN_ERROR event but doesn't include runId. Since this catch block is reached before any streaming occurs, runId hasn't been generated yet. However, for consistency with the RunErrorEvent interface (which has an optional runId), consider generating a runId even for pre-stream errors.

Suggested fix: generate a runId for error events:
```diff
   async *chatStream(
     options: TextOptions<GeminiTextProviderOptions>,
   ): AsyncIterable<StreamChunk> {
     const mappedOptions = this.mapCommonOptionsToGemini(options)
+    const runId = generateId(this.name)
     try {
       const result =
         await this.client.models.generateContentStream(mappedOptions)
-      yield* this.processStreamChunks(result, options.model)
+      yield* this.processStreamChunks(result, options.model, runId)
     } catch (error) {
       const timestamp = Date.now()
       yield {
         type: 'RUN_ERROR',
+        runId,
         model: options.model,
         timestamp,
```
368-421: Duplicate TOOL_CALL_END events may be emitted for UNEXPECTED_TOOL_CALL.

When finishReason === FinishReason.UNEXPECTED_TOOL_CALL, tool calls are added to toolCallMap with started: true (line 387), then TOOL_CALL_START and TOOL_CALL_END are emitted (lines 391-418). However, the loop at lines 424-441 iterates over all entries in toolCallMap and emits TOOL_CALL_END again, causing duplicate events for these tool calls.

Proposed fix: track which tool calls have already emitted TOOL_CALL_END:
```diff
+            const endedToolCalls = new Set<string>()
+
             if (finishReason === FinishReason.UNEXPECTED_TOOL_CALL) {
               if (chunk.candidates[0].content?.parts) {
                 for (const part of chunk.candidates[0].content.parts) {
                   const functionCall = part.functionCall
                   if (functionCall) {
                     const toolCallId =
                       functionCall.id ||
                       `${functionCall.name}_${Date.now()}_${nextToolIndex}`
                     // ... existing code ...
                     yield {
                       type: 'TOOL_CALL_END',
                       toolCallId,
                       toolName: functionCall.name || '',
                       model,
                       timestamp,
                       input: parsedInput,
                     }
+                    endedToolCalls.add(toolCallId)
                   }
                 }
               }
             }

             // Emit TOOL_CALL_END for all tracked tool calls
             for (const [toolCallId, toolCallData] of toolCallMap.entries()) {
+              if (endedToolCalls.has(toolCallId)) {
+                continue
+              }
               let parsedInput: unknown = {}
```

packages/typescript/ai-anthropic/src/adapters/text.ts (1)
613-671: Prevent duplicate terminal events (RUN_FINISHED/RUN_ERROR).

The Anthropic Messages API always emits message_delta with stop_reason before message_stop. Both handlers currently emit terminal events, causing duplicates that break downstream state machines expecting a single terminal event.

Implement the suggested tracking flag to guard terminal event emissions:
```diff
-    let hasEmittedRunStarted = false
-    let hasEmittedTextMessageStart = false
+    let hasEmittedRunStarted = false
+    let hasEmittedTextMessageStart = false
+    let hasEmittedRunTerminal = false
@@
-      } else if (event.type === 'message_stop') {
-        yield {
-          type: 'RUN_FINISHED',
-          runId,
-          model,
-          timestamp,
-          finishReason: 'stop',
-        }
+      } else if (event.type === 'message_stop') {
+        if (!hasEmittedRunTerminal) {
+          hasEmittedRunTerminal = true
+          yield {
+            type: 'RUN_FINISHED',
+            runId,
+            model,
+            timestamp,
+            finishReason: 'stop',
+          }
+        }
       } else if (event.type === 'message_delta') {
         if (event.delta.stop_reason) {
           switch (event.delta.stop_reason) {
             case 'tool_use': {
-              yield {
-                type: 'RUN_FINISHED',
-                runId,
-                model,
-                timestamp,
-                finishReason: 'tool_calls',
-                usage: {
-                  promptTokens: event.usage.input_tokens || 0,
-                  completionTokens: event.usage.output_tokens || 0,
-                  totalTokens:
-                    (event.usage.input_tokens || 0) +
-                    (event.usage.output_tokens || 0),
-                },
-              }
+              if (!hasEmittedRunTerminal) {
+                hasEmittedRunTerminal = true
+                yield {
+                  type: 'RUN_FINISHED',
+                  runId,
+                  model,
+                  timestamp,
+                  finishReason: 'tool_calls',
+                  usage: {
+                    promptTokens: event.usage.input_tokens || 0,
+                    completionTokens: event.usage.output_tokens || 0,
+                    totalTokens:
+                      (event.usage.input_tokens || 0) +
+                      (event.usage.output_tokens || 0),
+                  },
+                }
+              }
               break
             }
             case 'max_tokens': {
-              yield {
-                type: 'RUN_ERROR',
-                runId,
-                model,
-                timestamp,
-                error: {
-                  message:
-                    'The response was cut off because the maximum token limit was reached.',
-                  code: 'max_tokens',
-                },
-              }
+              if (!hasEmittedRunTerminal) {
+                hasEmittedRunTerminal = true
+                yield {
+                  type: 'RUN_ERROR',
+                  runId,
+                  model,
+                  timestamp,
+                  error: {
+                    message:
+                      'The response was cut off because the maximum token limit was reached.',
+                    code: 'max_tokens',
+                  },
+                }
+              }
               break
             }
             default: {
-              yield {
-                type: 'RUN_FINISHED',
-                runId,
-                model,
-                timestamp,
-                finishReason: 'stop',
-                usage: {
-                  promptTokens: event.usage.input_tokens || 0,
-                  completionTokens: event.usage.output_tokens || 0,
-                  totalTokens:
-                    (event.usage.input_tokens || 0) +
-                    (event.usage.output_tokens || 0),
-                },
-              }
+              if (!hasEmittedRunTerminal) {
+                hasEmittedRunTerminal = true
+                yield {
+                  type: 'RUN_FINISHED',
+                  runId,
+                  model,
+                  timestamp,
+                  finishReason: 'stop',
+                  usage: {
+                    promptTokens: event.usage.input_tokens || 0,
+                    completionTokens: event.usage.output_tokens || 0,
+                    totalTokens:
+                      (event.usage.input_tokens || 0) +
+                      (event.usage.output_tokens || 0),
+                  },
+                }
+              }
             }
           }
         }
       }
```

packages/python/tanstack-ai/src/tanstack_ai/anthropic_adapter.py (2)
320-388: Prevent duplicate RUN_FINISHED / RUN_ERROR emissions.

message_delta emits terminal events and message_stop always emits RUN_FINISHED, which can duplicate finishes and even emit RUN_FINISHED after RUN_ERROR. Track completion to avoid double-terminating a run.

Suggested fix:
```diff
-                        if hasattr(delta, "stop_reason") and delta.stop_reason:
+                        if hasattr(delta, "stop_reason") and delta.stop_reason and not run_finished:
                             usage = None
                             if hasattr(event, "usage") and event.usage:
                                 usage = {
                                     "promptTokens": event.usage.input_tokens,
                                     "completionTokens": event.usage.output_tokens,
                                     "totalTokens": event.usage.input_tokens
                                     + event.usage.output_tokens,
                                 }

                             # Map Anthropic stop_reason to TanStack format
                             if delta.stop_reason == "max_tokens":
+                                run_finished = True
                                 yield RunErrorEvent(
                                     type="RUN_ERROR",
                                     runId=run_id,
                                     model=options.model,
                                     timestamp=timestamp,
                                     error={
                                         "message": "The response was cut off because the maximum token limit was reached.",
                                         "code": "max_tokens",
                                     },
                                 )
                             else:
                                 finish_reason = {
                                     "end_turn": "stop",
                                     "tool_use": "tool_calls",
                                 }.get(delta.stop_reason, "stop")
+                                run_finished = True
                                 yield RunFinishedEvent(
                                     type="RUN_FINISHED",
                                     runId=run_id,
                                     model=options.model,
                                     timestamp=timestamp,
                                     finishReason=finish_reason,
                                     usage=usage,
                                 )
@@
-            yield RunFinishedEvent(
-                type="RUN_FINISHED",
-                runId=run_id,
-                model=options.model,
-                timestamp=int(time.time() * 1000),
-                finishReason=finish_reason,
-                usage=usage,
-            )
+            if not run_finished:
+                run_finished = True
+                yield RunFinishedEvent(
+                    type="RUN_FINISHED",
+                    runId=run_id,
+                    model=options.model,
+                    timestamp=int(time.time() * 1000),
+                    finishReason=finish_reason,
+                    usage=usage,
+                )
```

Add the flag alongside the other lifecycle tracking variables:

```python
run_finished = False
```
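The same guard-flag pattern appears in both the TypeScript and Python fixes above. A generic standalone sketch (with illustrative event objects, not the package's real types) looks like:

```typescript
// Illustrative event shape; only the `type` field matters for this pattern.
type Ev = { type: string }

// Pass a stream of events through, keeping at most one terminal event.
function dedupeTerminalEvents(events: Ev[]): Ev[] {
  let terminalSeen = false
  const out: Ev[] = []
  for (const ev of events) {
    const isTerminal = ev.type === 'RUN_FINISHED' || ev.type === 'RUN_ERROR'
    if (isTerminal) {
      if (terminalSeen) continue // drop the duplicate terminal event
      terminalSeen = true
    }
    out.push(ev)
  }
  return out
}
```

In the adapters themselves the flag guards emission at the source, which is preferable to filtering after the fact; this sketch only demonstrates the invariant being enforced.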
390-401: Emit RUN_STARTED before RUN_ERROR on early failures.

If an exception occurs before the first stream event, the current path emits RUN_ERROR without a preceding RUN_STARTED.

Suggested fix:
```diff
-        except Exception as e:
-            # Emit RUN_ERROR
-            yield RunErrorEvent(
-                type="RUN_ERROR",
-                runId=run_id,
-                model=options.model,
-                timestamp=int(time.time() * 1000),
-                error={
-                    "message": str(e),
-                    "code": getattr(e, "code", None),
-                },
-            )
+        except Exception as e:
+            ts = int(time.time() * 1000)
+            if not has_emitted_run_started:
+                has_emitted_run_started = True
+                yield RunStartedEvent(
+                    type="RUN_STARTED",
+                    runId=run_id,
+                    model=options.model,
+                    timestamp=ts,
+                    threadId=None,
+                )
+            yield RunErrorEvent(
+                type="RUN_ERROR",
+                runId=run_id,
+                model=options.model,
+                timestamp=ts,
+                error={
+                    "message": str(e),
+                    "code": getattr(e, "code", None),
+                },
+            )
```
326-333: Limit BaseStreamChunk.type to legacy values.

BaseStreamChunk is the base for legacy chunks, but type: StreamChunkType allows AG-UI values on legacy shapes. Tighten it to LegacyStreamChunkType to avoid mixed typing.

Suggested fix:
```diff
-    type: StreamChunkType
+    type: LegacyStreamChunkType
```
Fix all issues with AI agents
In `@docs/protocol/chunk-definitions.md`:
- Around line 2-41: The AG-UI event list mentions STATE_SNAPSHOT, STATE_DELTA,
and CUSTOM but no shapes are defined; update the docs by either adding explicit
interface/type definitions for these events (e.g., StateSnapshotEvent,
StateDeltaEvent, CustomAGUIEvent that extend BaseAGUIEvent and include fields
like state: unknown, delta: unknown, source?: string, and payload?: unknown) or
clearly mark them as "reserved/future" with example usage and minimal required
fields (type, timestamp, model, rawEvent) so readers know expected structure;
reference BaseAGUIEvent and AGUIEventType when adding the new sections to keep
the schema consistent.
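For illustration, the reserved events could be documented with shapes along these lines; every name below is an assumption drawn from the prompt above, not the package's actual schema:

```typescript
// Hypothetical shapes for the reserved AG-UI events. BaseAGUIEvent and all
// field names here are assumptions for documentation purposes only.
interface BaseAGUIEvent {
  type: string
  model: string
  timestamp: number
  rawEvent?: unknown
}

interface StateSnapshotEvent extends BaseAGUIEvent {
  type: 'STATE_SNAPSHOT'
  state: unknown // full state object at this point in the run
}

interface StateDeltaEvent extends BaseAGUIEvent {
  type: 'STATE_DELTA'
  delta: unknown // incremental change to apply to the prior state
}

interface CustomAGUIEvent extends BaseAGUIEvent {
  type: 'CUSTOM'
  source?: string // which adapter or tool produced the event
  payload?: unknown
}

// Minimal usage example showing the required fields.
const snapshot: StateSnapshotEvent = {
  type: 'STATE_SNAPSHOT',
  model: 'example-model',
  timestamp: Date.now(),
  state: { counter: 1 },
}
```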
In `@packages/php/tanstack-ai/src/StreamChunkConverter.php`:
- Around line 226-263: The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.
In `@packages/typescript/ai-gemini/src/adapters/text.ts`:
- Around line 248-255: The STEP_FINISHED yield uses a fallback expression
"stepId || generateId(this.name)" even though stepId must have been set by
STEP_STARTED; replace the fallback with a non-null assertion on stepId (e.g.,
use stepId! in the STEP_FINISHED object) so the code expresses the invariant and
avoids silently generating a new id, and ensure the change is made in the yield
that produces type: 'STEP_FINISHED' (referencing the stepId and generateId
symbols and the surrounding STEP_STARTED/STEP_FINISHED logic).
In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 336-345: The STEP_FINISHED emission currently falls back to
generateId('step') when stepId is null which can create inconsistent IDs; update
the emission to rely on the fact STEP_STARTED sets stepId and remove the
fallback by using a non-null assertion (stepId!) or otherwise assert/throw if
stepId is missing so STEP_FINISHED always uses the same stepId set by
STEP_STARTED (refer to STEP_FINISHED, STEP_STARTED, stepId, generateId, and
chunk.message.thinking in the surrounding code).
In `@packages/typescript/ai-openai/src/adapters/summarize.ts`:
- Around line 65-87: The SummarizationResult.id stays empty for AG-UI streams
because only legacy 'content' sets id; update the logic in summarize.ts so that
when handling chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId
(or chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set
id = chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 584-589: In handleTextMessageContentEvent, guard explicitly
against undefined instead of using if (chunk.content) so empty-string content
("") is not treated as absent; change the condition to check chunk.content !==
undefined (or typeof chunk.content !== "undefined") and assign
this.accumulatedContent = chunk.content when present, otherwise append
chunk.delta; also ensure this.accumulatedContent is initialized to an empty
string before appending to avoid NaN/undefined concatenation (references:
handleTextMessageContentEvent, TextMessageContentEvent, this.accumulatedContent,
chunk.content, chunk.delta).
In `@packages/typescript/ai/src/stream-to-response.ts`:
- Around line 29-33: The streamToText handler currently only appends chunk.delta
for TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content
without delta; update the logic in streamToText (the branch handling chunk.type
=== 'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.
In `@packages/typescript/smoke-tests/adapters/src/harness.ts`:
- Around line 268-326: The TOOL_CALL_END branch is leaving entries in
toolCallsInProgress which can leak stale args; inside the TOOL_CALL_END handling
(the else if block checking chunk.type === 'TOOL_CALL_END') remove the completed
entry from toolCallsInProgress (call toolCallsInProgress.delete(id) using the id
local variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
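Several of the fixes above converge on the same bookkeeping: track in-progress tool calls in a Map, accumulate argument deltas, and delete the entry when the call ends so a reused ID never sees stale state. A standalone sketch of that pattern (with illustrative event shapes, not the harness's real types):

```typescript
// Illustrative tool-call event shapes.
type ToolEvent =
  | { type: 'TOOL_CALL_START'; toolCallId: string; toolName: string }
  | { type: 'TOOL_CALL_ARGS'; toolCallId: string; delta: string }
  | { type: 'TOOL_CALL_END'; toolCallId: string }

function collectToolCalls(events: ToolEvent[]): {
  completed: Array<{ id: string; name: string; args: string }>
  pending: number
} {
  const inProgress = new Map<string, { name: string; args: string }>()
  const completed: Array<{ id: string; name: string; args: string }> = []
  for (const ev of events) {
    if (ev.type === 'TOOL_CALL_START') {
      inProgress.set(ev.toolCallId, { name: ev.toolName, args: '' })
    } else if (ev.type === 'TOOL_CALL_ARGS') {
      const call = inProgress.get(ev.toolCallId)
      if (call) call.args += ev.delta // accumulate streamed argument JSON
    } else {
      const call = inProgress.get(ev.toolCallId)
      if (call) {
        completed.push({ id: ev.toolCallId, name: call.name, args: call.args })
        inProgress.delete(ev.toolCallId) // clear state so a reused ID starts fresh
      }
      // A TOOL_CALL_END with no in-progress entry (e.g. a duplicate) is ignored.
    }
  }
  return { completed, pending: inProgress.size }
}
```

Deleting on END both prevents the stale-args leak flagged in the harness and makes duplicate TOOL_CALL_END events harmless, since the second END finds no entry.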
Nitpick comments (5)
packages/typescript/ai-ollama/src/adapters/text.ts (1)
212-257: Tool call handling emits TOOL_CALL_END immediately after TOOL_CALL_START without TOOL_CALL_ARGS.

The handleToolCall function emits TOOL_CALL_START followed immediately by TOOL_CALL_END. This differs from the Gemini adapter, which emits TOOL_CALL_ARGS events between start and end.

If Ollama provides tool arguments in a single chunk (non-streaming), this is acceptable. However, for consistency with the AG-UI protocol and other adapters, consider emitting a TOOL_CALL_ARGS event with the full arguments before TOOL_CALL_END:
```diff
     // Emit TOOL_CALL_START if not already emitted for this tool call
     if (!toolCallsEmitted.has(toolCallId)) {
       toolCallsEmitted.add(toolCallId)
       events.push({
         type: 'TOOL_CALL_START',
         toolCallId,
         toolName: actualToolCall.function.name || '',
         model: chunk.model,
         timestamp,
         index: actualToolCall.function.index,
       })
     }

     // Parse input
     let parsedInput: unknown = {}
     const argsStr =
       typeof actualToolCall.function.arguments === 'string'
         ? actualToolCall.function.arguments
         : JSON.stringify(actualToolCall.function.arguments)
     try {
       parsedInput = JSON.parse(argsStr)
     } catch {
       parsedInput = actualToolCall.function.arguments
     }

+    // Emit TOOL_CALL_ARGS with full arguments
+    events.push({
+      type: 'TOOL_CALL_ARGS',
+      toolCallId,
+      model: chunk.model,
+      timestamp,
+      delta: argsStr,
+      args: argsStr,
+    })
+
     // Emit TOOL_CALL_END
     events.push({
       type: 'TOOL_CALL_END',
```

packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)
300-331: Test assertions correctly validate the AG-UI event sequence.

The updated assertions properly verify the new event lifecycle: RUN_STARTED → TEXT_MESSAGE_START → TEXT_MESSAGE_CONTENT (×N) → TEXT_MESSAGE_END → RUN_FINISHED. Using toMatchObject allows flexible matching while validating essential fields.

Consider adding test cases for:

- Tool call event sequence (TOOL_CALL_START → TOOL_CALL_ARGS → TOOL_CALL_END)
- Error scenarios (RUN_ERROR events)
- Thinking/reasoning flow (STEP_STARTED → STEP_FINISHED)

packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (2)
72-80: Silent no-op if the tool call is not found in addToolCallArgsEvent.

If addToolCallArgsEvent is called before addToolCallStartEvent (out-of-order events), the arguments are silently dropped. Consider logging a warning or throwing for debugging purposes.

Optional: add a warning for the missing tool call:
```diff
   addToolCallArgsEvent(event: ToolCallArgsEvent): void {
     // Find the tool call by ID
     for (const [, toolCall] of this.toolCallsMap.entries()) {
       if (toolCall.id === event.toolCallId) {
         toolCall.function.arguments += event.delta
-        break
+        return
       }
     }
+    // Tool call not found - this shouldn't happen in normal flow
+    console.warn(`TOOL_CALL_ARGS received for unknown toolCallId: ${event.toolCallId}`)
   }
```
240-258: TOOL_CALL_END event missing the input field when emitted from executeTools.

The TOOL_CALL_END event emitted at lines 241-248 includes result but not input. According to the ToolCallEndEvent interface in types.ts, input is an optional field that should contain the final parsed input arguments. For consistency with adapter emissions, consider including it:
```diff
   if (finishEvent.type === 'RUN_FINISHED') {
+    let parsedInput: unknown
+    try {
+      parsedInput = JSON.parse(toolCall.function.arguments)
+    } catch {
+      parsedInput = undefined
+    }
     yield {
       type: 'TOOL_CALL_END',
       toolCallId: toolCall.id,
       toolName: toolCall.function.name,
       model: finishEvent.model,
       timestamp: Date.now(),
+      input: parsedInput,
       result: toolResultContent,
     }
```

packages/python/tanstack-ai/src/tanstack_ai/converter.py (1)
391-407: Drop the unused loop index to satisfy lint.

Ruff flags the loop index as unused. You can iterate over the values directly.

Suggested tweak:
```diff
-        for tool_index, tool_call in self.tool_calls_map.items():
+        for tool_call in self.tool_calls_map.values():
```
packages/php/tanstack-ai/src/StreamChunkConverter.php (lines 226-263):

```php
} elseif ($eventType === 'content_block_stop') {
    // Content block completed
    $toolCall = $this->toolCallsMap[$this->currentToolIndex] ?? null;
    if ($toolCall) {
        // If tool call wasn't started yet (no args), start it now
        if (!$toolCall['started']) {
            $toolCall['started'] = true;
            $this->toolCallsMap[$this->currentToolIndex] = $toolCall;

            $chunks[] = [
                'type' => 'TOOL_CALL_START',
                'toolCallId' => $toolCall['id'],
                'toolName' => $toolCall['name'],
                'model' => $this->model,
                'timestamp' => $this->timestamp,
                'toolCall' => [
                    'id' => $toolCall['id'],
                    'type' => 'function',
                    'function' => [
                        'name' => $toolCall['name'],
                        'arguments' => $partialJson // Incremental JSON
                    ]
                ],
                'index' => $this->currentToolIndex
            ];
        }

        // Parse input and emit TOOL_CALL_END
        $parsedInput = [];
        if (!empty($toolCall['input'])) {
            try {
                $parsedInput = json_decode($toolCall['input'], true) ?? [];
            } catch (\Exception $e) {
                $parsedInput = [];
            }
        }

        $chunks[] = [
            'type' => 'TOOL_CALL_END',
            'toolCallId' => $toolCall['id'],
            'toolName' => $toolCall['name'],
            'model' => $this->model,
            'timestamp' => $this->timestamp,
            'input' => $parsedInput
        ];
    }
}
```
Prevent duplicate TOOL_CALL_END emissions on later block stops.
content_block_stop reuses $currentToolIndex without clearing the tool call entry. If additional blocks occur after a tool_use block, TOOL_CALL_END can be emitted multiple times for the same call.
Proposed fix:
```diff
             $chunks[] = [
                 'type' => 'TOOL_CALL_END',
                 'toolCallId' => $toolCall['id'],
                 'toolName' => $toolCall['name'],
                 'model' => $this->model,
                 'timestamp' => $this->timestamp,
                 'input' => $parsedInput
             ];
+
+            unset($this->toolCallsMap[$this->currentToolIndex]);
```
Prompt for AI Agents
In `@packages/php/tanstack-ai/src/StreamChunkConverter.php` around lines 226 -
263, The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.
packages/typescript/ai-gemini/src/adapters/text.ts (lines 248-255):

```diff
           yield {
-            type: 'thinking',
-            content: part.text,
-            delta: part.text,
-            id: generateId(this.name),
+            type: 'STEP_FINISHED',
+            stepId: stepId || generateId(this.name),
             model,
             timestamp,
+            delta: part.text,
+            content: part.text,
           }
```
Same stepId fallback issue as Ollama adapter.
Line 250 uses stepId || generateId(this.name) as a fallback, but stepId should always be set by STEP_STARTED before STEP_FINISHED is yielded. Consider using a non-null assertion for consistency.
Prompt for AI Agents
In `@packages/typescript/ai-gemini/src/adapters/text.ts` around lines 248 - 255,
The STEP_FINISHED yield uses a fallback expression "stepId ||
generateId(this.name)" even though stepId must have been set by STEP_STARTED;
replace the fallback with a non-null assertion on stepId (e.g., use stepId! in
the STEP_FINISHED object) so the code expresses the invariant and avoids
silently generating a new id, and ensure the change is made in the yield that
produces type: 'STEP_FINISHED' (referencing the stepId and generateId symbols
and the surrounding STEP_STARTED/STEP_FINISHED logic).
packages/typescript/ai-ollama/src/adapters/text.ts (lines 336-345):

```diff
             accumulatedReasoning += chunk.message.thinking
             yield {
-              type: 'thinking',
-              id: responseId,
+              type: 'STEP_FINISHED',
+              stepId: stepId || generateId('step'),
               model: chunk.model,
               timestamp,
-              content: accumulatedReasoning,
               delta: chunk.message.thinking,
+              content: accumulatedReasoning,
             }
           }
```
Potential issue: stepId fallback generates new ID on each STEP_FINISHED.
On line 339, if stepId is null, a new ID is generated via generateId('step'). However, stepId should always be set by the STEP_STARTED emission on line 326. The fallback || generateId('step') suggests defensive coding, but if reached, it would create inconsistent step IDs across events.
Suggested fix: assert stepId is set, or remove the fallback:
```diff
             yield {
               type: 'STEP_FINISHED',
-              stepId: stepId || generateId('step'),
+              stepId: stepId!,
               model: chunk.model,
               timestamp,
               delta: chunk.message.thinking,
               content: accumulatedReasoning,
             }
```

The `!` assertion is safe here because STEP_FINISHED is only yielded inside the `if (chunk.message.thinking)` block, which always sets stepId via STEP_STARTED first.
Prompt for AI Agents
In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 336 - 345,
The STEP_FINISHED emission currently falls back to generateId('step') when
stepId is null which can create inconsistent IDs; update the emission to rely on
the fact STEP_STARTED sets stepId and remove the fallback by using a non-null
assertion (stepId!) or otherwise assert/throw if stepId is missing so
STEP_FINISHED always uses the same stepId set by STEP_STARTED (refer to
STEP_FINISHED, STEP_STARTED, stepId, generateId, and chunk.message.thinking in
the surrounding code).
packages/typescript/ai-openai/src/adapters/summarize.ts (lines 65-87):

```typescript
    // Legacy content event
    if (chunk.type === 'content') {
      summary = chunk.content
      id = chunk.id
      model = chunk.model
    }
    // AG-UI TEXT_MESSAGE_CONTENT event
    else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
      if (chunk.content) {
        summary = chunk.content
      } else {
        summary += chunk.delta
      }
      model = chunk.model || model
    }
    // Legacy done event
    if (chunk.type === 'done' && chunk.usage) {
      usage = chunk.usage
    }
    // AG-UI RUN_FINISHED event
    else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
      usage = chunk.usage
    }
```
Populate id for AG-UI chunks.
For AG-UI streams, id remains '' because only legacy content sets it. Consider mapping from messageId (or runId on RUN_FINISHED) to keep SummarizationResult.id meaningful.
🔧 Proposed fix

```diff
  if (chunk.type === 'content') {
    summary = chunk.content
    id = chunk.id
    model = chunk.model
  }
  // AG-UI TEXT_MESSAGE_CONTENT event
  else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+   if (!id && 'messageId' in chunk) {
+     id = chunk.messageId
+   }
    if (chunk.content) {
      summary = chunk.content
    } else {
      summary += chunk.delta
    }
    model = chunk.model || model
  }
  // Legacy done event
  if (chunk.type === 'done' && chunk.usage) {
    usage = chunk.usage
  }
  // AG-UI RUN_FINISHED event
  else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
+   if (!id && 'runId' in chunk) {
+     id = chunk.runId
+   }
    usage = chunk.usage
  }
```
🤖 Prompt for AI Agents
In `@packages/typescript/ai-openai/src/adapters/summarize.ts` around lines 65 -
87, The SummarizationResult.id stays empty for AG-UI streams because only legacy
'content' sets id; update the logic in summarize.ts so that when handling
chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId (or
chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set id =
chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.
```ts
private handleTextMessageContentEvent(chunk: TextMessageContentEvent): void {
  if (chunk.content) {
    this.accumulatedContent = chunk.content
  } else {
    this.accumulatedContent += chunk.delta
  }
```
Guard against empty-string content values when accumulating.
if (chunk.content) treats "" as absent and can mis-accumulate content. Prefer an explicit undefined check.
🩹 Suggested fix

```diff
- if (chunk.content) {
+ if (chunk.content !== undefined) {
    this.accumulatedContent = chunk.content
  } else {
    this.accumulatedContent += chunk.delta
  }
```
🤖 Prompt for AI Agents
In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 584 - 589,
In handleTextMessageContentEvent, guard explicitly against undefined instead of
using if (chunk.content) so empty-string content ("") is not treated as absent;
change the condition to check chunk.content !== undefined (or typeof
chunk.content !== "undefined") and assign this.accumulatedContent =
chunk.content when present, otherwise append chunk.delta; also ensure
this.accumulatedContent is initialized to an empty string before appending to
avoid NaN/undefined concatenation (references: handleTextMessageContentEvent,
TextMessageContentEvent, this.accumulatedContent, chunk.content, chunk.delta).
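A tiny standalone sketch (hypothetical event shape, not the library's actual class) of why the explicit `undefined` check matters: a provider may legitimately send a `content` snapshot of `""` to reset accumulated text, and a truthiness check silently skips it.

```typescript
// Hypothetical minimal event: `content` is an optional full snapshot,
// `delta` is the incremental piece.
interface ContentEvent {
  content?: string
  delta: string
}

// Accumulate with an explicit undefined check: an empty-string snapshot
// replaces the accumulator instead of being treated as absent.
function accumulate(events: Array<ContentEvent>): string {
  let acc = ''
  for (const e of events) {
    if (e.content !== undefined) {
      acc = e.content // snapshot wins, even when it is ""
    } else {
      acc += e.delta // no snapshot: append the delta
    }
  }
  return acc
}

// A provider resetting content mid-stream:
const events: Array<ContentEvent> = [
  { delta: 'Hel' },
  { delta: 'lo' },
  { content: '', delta: 'ignored' }, // reset to empty snapshot
  { delta: 'Hi' },
]
```

With `if (e.content)` instead, the third event would be treated as delta-only and the result would wrongly keep the earlier "Hello" prefix.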
```ts
// Handle both AG-UI TEXT_MESSAGE_CONTENT and legacy 'content' chunks
if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
  accumulatedContent += chunk.delta
} else if (chunk.type === 'content' && chunk.delta) {
  accumulatedContent += chunk.delta
```
Handle TEXT_MESSAGE_CONTENT when delta is absent.
If an adapter emits TEXT_MESSAGE_CONTENT with only content (no delta), streamToText returns an empty string. Consider falling back to content to preserve non-delta events.
🔧 Proposed fix

```diff
- if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
-   accumulatedContent += chunk.delta
- } else if (chunk.type === 'content' && chunk.delta) {
-   accumulatedContent += chunk.delta
- }
+ if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+   const piece = chunk.delta ?? chunk.content
+   if (piece) accumulatedContent += piece
+ } else if (chunk.type === 'content') {
+   const piece = chunk.delta ?? chunk.content
+   if (piece) accumulatedContent += piece
+ }
```
🤖 Prompt for AI Agents
In `@packages/typescript/ai/src/stream-to-response.ts` around lines 29 - 33, The
streamToText handler currently only appends chunk.delta for
TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content without
delta; update the logic in streamToText (the branch handling chunk.type ===
'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.
```ts
// AG-UI TOOL_CALL_START event
else if (chunk.type === 'TOOL_CALL_START') {
  const id = chunk.toolCallId
  toolCallsInProgress.set(id, {
    name: chunk.toolName,
    args: '',
  })

  if (!assistantDraft) {
    assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
  }

  chunkData.toolCallId = chunk.toolCallId
  chunkData.toolName = chunk.toolName
}
// AG-UI TOOL_CALL_ARGS event
else if (chunk.type === 'TOOL_CALL_ARGS') {
  const id = chunk.toolCallId
  const existing = toolCallsInProgress.get(id)
  if (existing) {
    existing.args = chunk.args || existing.args + (chunk.delta || '')
  }

  chunkData.toolCallId = chunk.toolCallId
  chunkData.delta = chunk.delta
  chunkData.args = chunk.args
}
// AG-UI TOOL_CALL_END event
else if (chunk.type === 'TOOL_CALL_END') {
  const id = chunk.toolCallId
  const inProgress = toolCallsInProgress.get(id)
  const name = chunk.toolName || inProgress?.name || ''
  const args =
    inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')

  // Add to legacy toolCallMap for compatibility
  toolCallMap.set(id, {
    id,
    name,
    arguments: args,
  })

  // Add to assistant draft
  if (!assistantDraft) {
    assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
  }
  assistantDraft.toolCalls?.push({
    id,
    type: 'function',
    function: {
      name,
      arguments: args,
    },
  })

  chunkData.toolCallId = chunk.toolCallId
  chunkData.toolName = chunk.toolName
  chunkData.input = chunk.input
}
```
Clear in-progress tool call state after TOOL_CALL_END.
toolCallsInProgress entries persist after completion. If a toolCallId is reused or multiple tool calls occur, stale args can leak.
🔧 Proposed fix

```diff
  else if (chunk.type === 'TOOL_CALL_END') {
    const id = chunk.toolCallId
    const inProgress = toolCallsInProgress.get(id)
    const name = chunk.toolName || inProgress?.name || ''
    const args =
      inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')
+   toolCallsInProgress.delete(id)
```

🤖 Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 268 -
326, The TOOL_CALL_END branch is leaving entries in toolCallsInProgress which
can leak stale args; inside the TOOL_CALL_END handling (the else if block
checking chunk.type === 'TOOL_CALL_END') remove the completed entry from
toolCallsInProgress (call toolCallsInProgress.delete(id) using the id local
variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
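To illustrate the suggested cleanup, here is a self-contained sketch (hypothetical event and map shapes, not the harness's real code) where deleting the entry on TOOL_CALL_END keeps a reused toolCallId from inheriting stale args:

```typescript
// Hypothetical AG-UI tool-call event subset for the demonstration.
type ToolEvent =
  | { type: 'TOOL_CALL_START'; toolCallId: string; toolName: string }
  | { type: 'TOOL_CALL_ARGS'; toolCallId: string; delta: string }
  | { type: 'TOOL_CALL_END'; toolCallId: string }

function collectToolCalls(events: Array<ToolEvent>) {
  const inProgress = new Map<string, { name: string; args: string }>()
  const finished: Array<{ id: string; name: string; args: string }> = []
  for (const e of events) {
    if (e.type === 'TOOL_CALL_START') {
      inProgress.set(e.toolCallId, { name: e.toolName, args: '' })
    } else if (e.type === 'TOOL_CALL_ARGS') {
      const tc = inProgress.get(e.toolCallId)
      if (tc) tc.args += e.delta
    } else {
      const tc = inProgress.get(e.toolCallId)
      if (tc) finished.push({ id: e.toolCallId, ...tc })
      // Clear in-progress state so a reused id starts fresh.
      inProgress.delete(e.toolCallId)
    }
  }
  return { finished, leftover: inProgress.size }
}
```

After the final TOOL_CALL_END, `leftover` is zero: nothing lingers in the map to leak into a later call with the same id.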
So a few questions on my end:
Actionable comments posted: 4
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)
379-394: Inconsistent logging and incomplete event closure on error path. Two concerns:

1. Line 381 uses `console.log` instead of `console.error` for an error condition, inconsistent with the `console.error` usage in the `chatStream` catch block (lines 115-118).
2. Event lifecycle gap: when an error occurs mid-stream, `TEXT_MESSAGE_END` and `TOOL_CALL_END` events are not emitted for in-progress messages/tool calls before `RUN_ERROR`. This leaves consumers with unclosed events, whereas the normal completion path properly closes all events before `RUN_FINISHED`.

🔧 Suggested fix for logging consistency and event closure

```diff
  } catch (error: unknown) {
    const err = error as Error & { code?: string }
-   console.log('[Grok Adapter] Stream ended with error:', err.message)
+   console.error('[Grok Adapter] Stream ended with error:', err.message)
+
+   // Close any in-progress tool calls
+   for (const [, toolCall] of toolCallsInProgress) {
+     if (toolCall.started) {
+       yield {
+         type: 'TOOL_CALL_END',
+         toolCallId: toolCall.id,
+         toolName: toolCall.name,
+         model: options.model,
+         timestamp,
+         input: {},
+         error: true,
+       }
+     }
+   }
+
+   // Close text message if started
+   if (hasEmittedTextMessageStart) {
+     yield {
+       type: 'TEXT_MESSAGE_END',
+       messageId: aguiState.messageId,
+       model: options.model,
+       timestamp,
+       error: true,
+     }
+   }
    // Emit AG-UI RUN_ERROR
    yield {
      type: 'RUN_ERROR',
```
🤖 Fix all issues with AI agents
In `@packages/typescript/ai-grok/src/adapters/summarize.ts`:
- Around line 71-79: The RUN_FINISHED event branch currently doesn't set the
SummarizationResult.id, so when only AG-UI events are emitted the id stays
empty; update the RUN_FINISHED handler (the branch checking chunk.type ===
'RUN_FINISHED') to assign id = chunk.runId (or id = chunk.runId || id) similar
to how the legacy 'done' handler sets id = chunk.id, ensuring the final result
object produced by this adapter includes the runId. Also verify the variable
used to hold the result id (the same id referenced later when returning the
SummarizationResult) is updated in that branch so TEXT_MESSAGE_CONTENT handling
remains unchanged.
In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts`:
- Around line 79-87: The TEXT_MESSAGE_CONTENT branch fails to set the message
identifier so SummarizationResult.id remains empty when only AG-UI events
arrive; update the handler for chunk.type === 'TEXT_MESSAGE_CONTENT' to extract
the AG-UI message id (chunk.messageId) and assign it to the same id variable
used by the legacy content path (the one set from chunk.id), e.g., set id =
chunk.messageId (or id ||= chunk.messageId) alongside assigning summary and
model, ensuring the SummarizationResult (and types like TextMessageContentEvent)
carries the messageId.
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 453-458: The code currently swallows JSON.parse errors when
parsing tc.arguments into parsedInput; change the catch so it preserves/exports
the raw malformed input and logs or includes it in the TOOL_CALL_END payload for
debugging: keep parsedInput = {} on parse failure but set a rawArguments (or
rawArgumentsOnError) variable to tc.arguments and call the existing logger/event
emitter to record a warning including tc.arguments and the parse error, and add
that rawArguments field to the TOOL_CALL_END event payload so consumers can
inspect the original malformed string; refer to parsedInput, tc.arguments and
TOOL_CALL_END to locate where to add the log/event and the new field.
In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts`:
- Around line 389-439: The test currently assumes the first yielded StreamChunk
is an AG-UI RUN_STARTED which breaks if legacy chunks interleave; update the
test that consumes adapter.chatStream (the loop producing chunks:
Array<StreamChunk>) to first filter or map the collected chunks to only AG-UI
event types (look for chunk.type matching RUN_STARTED, RUN_ENDED, etc.) and then
assert that the first AG-UI event's type is 'RUN_STARTED' and validate its runId
and model (refer to adapter.chatStream, StreamChunk, and the 'RUN_STARTED'
symbol when locating where to change assertions).
🧹 Nitpick comments (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)
324-332: Consider logging JSON parse failures for tool call arguments. When `JSON.parse(toolCall.arguments)` fails, the error is silently caught and `parsedInput` defaults to `{}`. While this prevents stream interruption, it could mask malformed responses from the API. Consider adding a debug-level log to aid troubleshooting.

🔧 Suggested improvement

```diff
  try {
    parsedInput = toolCall.arguments ? JSON.parse(toolCall.arguments) : {}
  } catch {
+   console.warn(
+     `[Grok Adapter] Failed to parse tool call arguments for ${toolCall.name}:`,
+     toolCall.arguments?.slice(0, 100),
+   )
    parsedInput = {}
  }
```
```ts
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
  if (chunk.content) {
    summary = chunk.content
  } else {
    summary += chunk.delta
  }
  model = chunk.model || model
}
```
🧩 Analysis chain

🏁 Scripts executed (repository: TanStack/ai):

```shell
#!/bin/bash
# Inspect StreamChunk type and AG-UI chunk fields
echo "=== StreamChunk Type Definition ==="
rg -n --type=ts 'type StreamChunk|interface StreamChunk' -C5
echo -e "\n=== AG-UI Chunk Types ==="
rg -n --type=ts 'TEXT_MESSAGE_CONTENT|RUN_STARTED|RUN_FINISHED' -C3
echo -e "\n=== Summarize Adapter Files ==="
fd -e ts 'summarize.ts' packages/typescript/

echo "=== Full ai-grok summarize.ts file ==="
cat -n packages/typescript/ai-grok/src/adapters/summarize.ts | head -100
echo -e "\n=== ai-openai summarize ID handling ==="
sed -n '65,95p' packages/typescript/ai-openai/src/adapters/summarize.ts | cat -n
echo -e "\n=== SummarizationResult type definition ==="
rg -n 'interface SummarizationResult|type SummarizationResult' -A5 packages/typescript/ai/src/types.ts
```
Populate SummarizationResult.id from AG-UI RUN_FINISHED.runId.
When only AG-UI events are emitted, id remains empty. Extract chunk.runId from the RUN_FINISHED event branch to populate the result id field, matching the pattern in the legacy 'done' event handler that sets id = chunk.id.
🤖 Prompt for AI Agents
In `@packages/typescript/ai-grok/src/adapters/summarize.ts` around lines 71 - 79,
The RUN_FINISHED event branch currently doesn't set the SummarizationResult.id,
so when only AG-UI events are emitted the id stays empty; update the
RUN_FINISHED handler (the branch checking chunk.type === 'RUN_FINISHED') to
assign id = chunk.runId (or id = chunk.runId || id) similar to how the legacy
'done' handler sets id = chunk.id, ensuring the final result object produced by
this adapter includes the runId. Also verify the variable used to hold the
result id (the same id referenced later when returning the SummarizationResult)
is updated in that branch so TEXT_MESSAGE_CONTENT handling remains unchanged.
```ts
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
  if (chunk.content) {
    summary = chunk.content
  } else {
    summary += chunk.delta
  }
  model = chunk.model || model
}
```
🧩 Analysis chain

🏁 Scripts executed (repository: TanStack/ai):

```shell
# First, locate the summarize.ts file
fd -t f "summarize.ts" packages/typescript/ai-openrouter/src/adapters/

# Once located, examine the code around lines 58-107
cat -n packages/typescript/ai-openrouter/src/adapters/summarize.ts | sed -n '55,110p'

# Search for TEXT_MESSAGE_CONTENT type definition
rg -n "TEXT_MESSAGE_CONTENT" packages/typescript --type=ts -B 3 -A 10 | head -100

# Search for type definitions of chunk or event structures in AI-related files
rg -n "type.*Content.*=" packages/typescript/ai-openrouter/src --type=ts -A 5 | head -80

# Search for RUN_STARTED and other AG-UI events that might provide ID
rg -n "RUN_STARTED\|TextMessageStartEvent\|BaseAGUIEvent" packages/typescript/ai/src/types.ts -A 8 | head -60

# Also check if messageId is optional or required in TextMessageContentEvent
rg -n "interface.*Event\|type.*Event" packages/typescript/ai/src/types.ts -A 6 | grep -A 6 "TextMessageContent\|RUN_START"

# Look at the types.ts file directly around the event definitions
cat -n packages/typescript/ai/src/types.ts | sed -n '750,850p'

# Search for BaseAGUIEvent and related event definitions
rg -n "BaseAGUIEvent\|interface.*Start\|interface.*Finished" packages/typescript/ai/src/types.ts -B 2 -A 5

# Check other files that handle summarizeStream to see how they handle ID
rg -n "summarizeStream" packages/typescript/ai-openrouter/src/adapters/summarize.ts -A 20 | head -40

# Also check what fields the legacy 'content' chunk type has
rg -n "interface.*ContentEvent\|interface.*ChunkType\|type.*Chunk" packages/typescript/ai/src/types.ts -B 2 -A 6 | head -80

# Search for ContentChunk or legacy chunk definitions
rg -n "type.*ContentChunk\|interface.*ContentChunk\|type.*LegacyChunk" packages/typescript/ai/src/types.ts -A 8

# Look at the StreamChunk type union to see all possible chunk types
rg -n "type StreamChunk\|export type.*Chunk" packages/typescript/ai/src/types.ts -A 20 | head -80

# Find where StreamChunk is defined
rg -n "StreamChunk" packages/typescript/ai/src/types.ts | head -20

# Get the actual definition area
cat -n packages/typescript/ai/src/types.ts | sed -n '690,750p'

# See the ContentStreamChunk definition
cat -n packages/typescript/ai/src/types.ts | sed -n '918,926p'

# Check what the StreamChunk union type includes
cat -n packages/typescript/ai/src/types.ts | sed -n '990,1010p'

# Verify if messageId is a required field (not optional) in TextMessageContentEvent
cat -n packages/typescript/ai/src/types.ts | sed -n '782,790p'

# Check if RUN_STARTED has a runId that could be used as fallback
cat -n packages/typescript/ai/src/types.ts | sed -n '729,735p'
```
Missing messageId assignment for AG-UI TEXT_MESSAGE_CONTENT events.
The legacy content event handler assigns id = chunk.id (line 76), but the TEXT_MESSAGE_CONTENT handler doesn't extract the message identifier. If only AG-UI events are received, the returned SummarizationResult will have an empty id. The TextMessageContentEvent interface defines messageId: string as a required field that should be captured.
Update the handler to extract messageId:
Proposed fix

```diff
  // AG-UI TEXT_MESSAGE_CONTENT event
  else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+   id = chunk.messageId
    if (chunk.content) {
      summary = chunk.content
    } else {
      summary += chunk.delta
    }
    model = chunk.model || model
  }
```
🤖 Prompt for AI Agents
In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts` around lines 79
- 87, The TEXT_MESSAGE_CONTENT branch fails to set the message identifier so
SummarizationResult.id remains empty when only AG-UI events arrive; update the
handler for chunk.type === 'TEXT_MESSAGE_CONTENT' to extract the AG-UI message
id (chunk.messageId) and assign it to the same id variable used by the legacy
content path (the one set from chunk.id), e.g., set id = chunk.messageId (or id
||= chunk.messageId) alongside assigning summary and model, ensuring the
SummarizationResult (and types like TextMessageContentEvent) carries the
messageId.
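The id-fallback pattern these summarize-adapter comments describe can be sketched in isolation (hypothetical chunk shapes; the real `StreamChunk` union is richer): keep a legacy `id` if one is set, otherwise fall back to the first AG-UI identifier seen.

```typescript
// Hypothetical minimal chunk union for the demonstration.
type SummaryChunk =
  | { type: 'content'; id: string; delta: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string }
  | { type: 'RUN_FINISHED'; runId: string }

// Resolve the result id: legacy chunks set it directly; AG-UI chunks only
// fill it in when nothing has claimed it yet (the `||=` keeps the first).
function resolveResultId(chunks: Array<SummaryChunk>): string {
  let id = ''
  for (const c of chunks) {
    if (c.type === 'content') id = c.id
    else if (c.type === 'TEXT_MESSAGE_CONTENT') id ||= c.messageId
    else if (c.type === 'RUN_FINISHED') id ||= c.runId
  }
  return id
}
```

A pure AG-UI stream then yields the `messageId` (or `runId` if no message events arrived), while mixed streams keep the legacy behavior.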
```ts
let parsedInput: unknown = {}
try {
  parsedInput = tc.arguments ? JSON.parse(tc.arguments) : {}
} catch {
  parsedInput = {}
}
```
Silent JSON.parse failure hides malformed tool arguments.
When tc.arguments contains invalid JSON, the error is silently swallowed and parsedInput defaults to {}. This could hide issues with malformed arguments from the LLM and make debugging difficult.
Consider including the raw arguments in the event or logging a warning:
Proposed fix

```diff
  let parsedInput: unknown = {}
  try {
    parsedInput = tc.arguments ? JSON.parse(tc.arguments) : {}
  } catch {
-   parsedInput = {}
+   // Keep raw string if parsing fails
+   parsedInput = tc.arguments || {}
  }
```

Alternatively, add a `rawArguments` field to TOOL_CALL_END for debugging:
```diff
  yield {
    type: 'TOOL_CALL_END',
    toolCallId: tc.id,
    toolName: tc.name,
    model: meta.model,
    timestamp: meta.timestamp,
    input: parsedInput,
+   rawArguments: tc.arguments,
  }
```
🤖 Prompt for AI Agents
In `@packages/typescript/ai-openrouter/src/adapters/text.ts` around lines 453 -
458, The code currently swallows JSON.parse errors when parsing tc.arguments
into parsedInput; change the catch so it preserves/exports the raw malformed
input and logs or includes it in the TOOL_CALL_END payload for debugging: keep
parsedInput = {} on parse failure but set a rawArguments (or
rawArgumentsOnError) variable to tc.arguments and call the existing logger/event
emitter to record a warning including tc.arguments and the parse error, and add
that rawArguments field to the TOOL_CALL_END event payload so consumers can
inspect the original malformed string; refer to parsedInput, tc.arguments and
TOOL_CALL_END to locate where to add the log/event and the new field.
```ts
describe('OpenRouter AG-UI event emission', () => {
  beforeEach(() => {
    vi.clearAllMocks()
  })

  it('emits RUN_STARTED as the first event', async () => {
    const streamChunks = [
      {
        id: 'chatcmpl-123',
        model: 'openai/gpt-4o-mini',
        choices: [
          {
            delta: { content: 'Hello' },
            finishReason: null,
          },
        ],
      },
      {
        id: 'chatcmpl-123',
        model: 'openai/gpt-4o-mini',
        choices: [
          {
            delta: {},
            finishReason: 'stop',
          },
        ],
        usage: {
          promptTokens: 5,
          completionTokens: 1,
          totalTokens: 6,
        },
      },
    ]

    setupMockSdkClient(streamChunks)
    const adapter = createAdapter()
    const chunks: Array<StreamChunk> = []

    for await (const chunk of adapter.chatStream({
      model: 'openai/gpt-4o-mini',
      messages: [{ role: 'user', content: 'Hello' }],
    })) {
      chunks.push(chunk)
    }

    expect(chunks[0]?.type).toBe('RUN_STARTED')
    if (chunks[0]?.type === 'RUN_STARTED') {
      expect(chunks[0].runId).toBeDefined()
      expect(chunks[0].model).toBe('openai/gpt-4o-mini')
    }
  })
```
Avoid hard-coding RUN_STARTED at index 0 if legacy chunks can interleave.
If legacy StreamChunk types are still emitted, positional asserts can be brittle; consider filtering to AGβUI events before ordering checks.
♻️ Suggested adjustment to avoid legacy chunk ordering brittleness

```diff
-   expect(chunks[0]?.type).toBe('RUN_STARTED')
-   if (chunks[0]?.type === 'RUN_STARTED') {
-     expect(chunks[0].runId).toBeDefined()
-     expect(chunks[0].model).toBe('openai/gpt-4o-mini')
-   }
+   const aguiChunks = chunks.filter((c) =>
+     [
+       'RUN_STARTED',
+       'TEXT_MESSAGE_START',
+       'TEXT_MESSAGE_CONTENT',
+       'TEXT_MESSAGE_END',
+       'RUN_FINISHED',
+       'RUN_ERROR',
+       'TOOL_CALL_START',
+       'TOOL_CALL_ARGS',
+       'TOOL_CALL_END',
+       'STEP_STARTED',
+       'STEP_FINISHED',
+     ].includes(c.type),
+   )
+   expect(aguiChunks[0]?.type).toBe('RUN_STARTED')
+   if (aguiChunks[0]?.type === 'RUN_STARTED') {
+     expect(aguiChunks[0].runId).toBeDefined()
+     expect(aguiChunks[0].model).toBe('openai/gpt-4o-mini')
+   }
```

🤖 Prompt for AI Agents
In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts` around
lines 389 - 439, The test currently assumes the first yielded StreamChunk is an
AG-UI RUN_STARTED which breaks if legacy chunks interleave; update the test that
consumes adapter.chatStream (the loop producing chunks: Array<StreamChunk>) to
first filter or map the collected chunks to only AG-UI event types (look for
chunk.type matching RUN_STARTED, RUN_ENDED, etc.) and then assert that the first
AG-UI event's type is 'RUN_STARTED' and validate its runId and model (refer to
adapter.chatStream, StreamChunk, and the 'RUN_STARTED' symbol when locating
where to change assertions).
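The filter the suggestion inlines could live in a small test helper (hypothetical, using the AG-UI event names from the protocol) so every ordering assertion shares one definition of "AG-UI event":

```typescript
// Hypothetical test helper: narrow a mixed chunk stream to AG-UI protocol
// events before making positional assertions.
const AGUI_EVENT_TYPES = new Set([
  'RUN_STARTED',
  'RUN_FINISHED',
  'RUN_ERROR',
  'TEXT_MESSAGE_START',
  'TEXT_MESSAGE_CONTENT',
  'TEXT_MESSAGE_END',
  'TOOL_CALL_START',
  'TOOL_CALL_ARGS',
  'TOOL_CALL_END',
  'STEP_STARTED',
  'STEP_FINISHED',
])

function aguiOnly<T extends { type: string }>(chunks: Array<T>): Array<T> {
  return chunks.filter((c) => AGUI_EVENT_TYPES.has(c.type))
}
```

Tests can then assert `aguiOnly(chunks)[0]?.type === 'RUN_STARTED'` regardless of whether legacy chunks interleave.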
Actionable comments posted: 13
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
packages/typescript/ai-ollama/src/adapters/summarize.ts (1)
107-155: Missing `RUN_STARTED` event: AG-UI protocol compliance issue.

The `summarizeStream` method starts emitting events with `TEXT_MESSAGE_CONTENT` but never emits the required `RUN_STARTED` event at the beginning of the stream. Other adapters (Grok, OpenRouter) delegate to their text adapters, which correctly emit `RUN_STARTED` first. The Ollama summarize adapter implements streaming directly and omits this required protocol event.

Add a `RUN_STARTED` event as the first yield before processing the stream:

```diff
 const stream = await this.client.generate({
   model,
   prompt,
   options: {
     temperature: 0.3,
     num_predict: options.maxLength ?? 500,
   },
   stream: true,
 })

+yield {
+  type: 'RUN_STARTED',
+  runId: id,
+  model,
+  timestamp: Date.now(),
+}
+
 for await (const chunk of stream) {
```

packages/typescript/ai/src/stream-to-response.ts (1)
202-214: Same `runId` concern applies to `toHttpStream` error handler.

This error handler has the same pattern as `toServerSentEventsStream`: emitting `RUN_ERROR` without `runId`. Ensure consistency once the interface requirements are clarified.

packages/typescript/ai/src/activities/summarize/index.ts (1)
140-143: Update docstring example to use AG-UI event type.

The streaming example still references `chunk.type === 'content'`, but the implementation now emits `TEXT_MESSAGE_CONTENT` events. Users following this example would not receive any output.

🩹 Suggested fix

````diff
 * @example Streaming summarization
 * ```ts
 * for await (const chunk of summarize({
 *   adapter: openaiSummarize('gpt-4o-mini'),
 *   text: 'Long article text here...',
 *   stream: true
 * })) {
-*   if (chunk.type === 'content') {
+*   if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
 *     process.stdout.write(chunk.delta)
 *   }
 * }
 * ```
````

packages/python/tanstack-ai/src/tanstack_ai/chat.py (1)
538-546: Docstring example uses outdated chunk type.

The example in the docstring still references `chunk["type"] == "content"`, which should be updated to `"TEXT_MESSAGE_CONTENT"` to align with the AG-UI protocol changes in this PR.

🩹 Proposed fix

```diff
 Example:
     >>> async for chunk in chat(
     ...     adapter=anthropic_adapter,
     ...     model="claude-3-5-sonnet-20241022",
     ...     messages=[{"role": "user", "content": "Hello!"}],
     ...     tools=[weather_tool],
     ... ):
-    ...     if chunk["type"] == "content":
+    ...     if chunk["type"] == "TEXT_MESSAGE_CONTENT":
     ...         print(chunk["delta"], end="", flush=True)
```
🤖 Fix all issues with AI agents
In `@docs/protocol/chunk-definitions.md`:
- Around line 293-302: Add a language specifier to the fenced code block that
contains the protocol example starting with RUN_STARTED / TOOL_CALL_START (name:
"get_weather") so the block is marked (e.g., ```text) instead of a plain fence;
locate the fenced block that shows RUN_STARTED, TOOL_CALL_START, TOOL_CALL_ARGS,
TEXT_MESSAGE_START/CONTENT/END and RUN_FINISHED and modify its opening backticks
to include the language specifier.
- Around line 321-332: The fenced code block showing the protocol example
(containing tokens like RUN_STARTED, TOOL_CALL_START, TOOL_CALL_END,
TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT, RUN_FINISHED) needs a language
specifier for correct rendering; update the opening fence from ``` to ```text so
the block becomes ```text and preserves formatting/monospace rendering for those
entries.
- Around line 281-291: The fenced code block showing the event sequence
(starting with RUN_STARTED and STEP_STARTED (stepType: "thinking")) is missing a
language specifier; update that triple-backtick fence to include a language
token such as text or plaintext (e.g., ```text) so the linter stops flagging it
and the sequence/flow diagram renders as plain text.
- Around line 304-316: The fenced code block showing the client tool approval
sequence (containing tokens like RUN_STARTED, TOOL_CALL_START (name:
"send_email"), CUSTOM (name: "approval-requested"), and RUN_FINISHED) lacks a
language specifier; update the opening fence from ``` to ```text (or another
appropriate language) in docs/protocol/chunk-definitions.md so the block becomes
a labeled plain-text code fence to enable correct formatting and highlighting.
In `@packages/python/tanstack-ai/src/tanstack_ai/types.py`:
- Around line 177-185: TextMessageStartEvent currently uses a TypedDict that
makes all fields required; change its definition to use total=False so optional
keys like model are truly optional, and mark mandatory fields explicitly with
typing_extensions.Required (or Required from typing if available) for keys such
as type, messageId, timestamp, and role (mirroring the pattern used in
RunStartedEvent) to ensure required vs optional fields are correct.
- Around line 262-278: StateSnapshotEvent and StateDeltaEvent currently mark
`model` as a required field due to total=True; update both classes so `model`
can be omitted by declaring it as NotRequired[Optional[str]] instead of
Optional[str], and add the appropriate NotRequired import (from
typing_extensions or typing depending on supported Python version); modify the
`model` annotations in the StateSnapshotEvent and StateDeltaEvent TypedDicts
accordingly so other fields remain required while `model` may be absent or None.
- Around line 198-205: TextMessageEndEvent currently declares model as
Optional[str] but still counts it as a required key on the TypedDict; change the
TypedDict so model is non-required by using NotRequired (e.g., model:
NotRequired[Optional[str]]) on the TextMessageEndEvent class and add the
corresponding import (from typing_extensions import NotRequired or from typing
import NotRequired if supported), leaving other fields required.
- Around line 146-154: The RunStartedEvent TypedDict currently makes all fields
required; update the RunStartedEvent definition so model and threadId are
optional like the TypeScript version: either convert the TypedDict to
total=False (e.g., class RunStartedEvent(TypedDict, total=False)) and then mark
required fields explicitly with Required[...] for type, runId, and timestamp, or
keep total=True but wrap only the required keys with Required[]; if using
Required, add the appropriate import (from typing import Required for Python
3.11+ or from typing_extensions import Required).
In `@packages/typescript/smoke-tests/adapters/src/harness.ts`:
- Around line 232-243: The TOOL_CALL_ARGS branch uses "chunk.args || ..." which
treats an empty string as missing and preserves stale args; update the
assignment in the block that handles chunk.type === 'TOOL_CALL_ARGS'
(referencing toolCallsInProgress, existing.args, chunk.args, chunk.delta, and
chunkData.*) to check for undefined explicitly: set existing.args to chunk.args
if chunk.args !== undefined, otherwise append chunk.delta (using a nullish check
for chunk.delta to allow empty strings) so empty args are preserved and deltas
still apply.
- Around line 199-216: The code drops the first text delta when chunk.content is
undefined because assistantDraft is initialized with chunk.content while delta
is appended only on later chunks; to fix, compute a unified fallback delta
(e.g., let delta = chunk.delta || chunk.content || '') and use that for both
fullResponse and assistantDraft initialization/update: add delta to
fullResponse, set assistantDraft.content = delta when creating the draft, and
append delta to assistantDraft.content for subsequent chunks; update references
to chunk.delta/chunk.content accordingly in the TEXT_MESSAGE_CONTENT handling
block (assistantDraft, fullResponse, chunkData).
In `@packages/typescript/smoke-tests/e2e/src/routes/index.tsx`:
- Around line 1-4: Reorder the named imports to satisfy the linter: alphabetize
members within each import so useMemo follows useState in the React import and
fetchServerSentEvents follows useChat in the `@tanstack/ai-react` import; keep the
import groups intact (React, `@tanstack/`* packages, types) and ensure the import
lines reference the same symbols (useState, useMemo, createFileRoute, useChat,
fetchServerSentEvents, UIMessage) so the sort-imports rule is satisfied.
In `@packages/typescript/smoke-tests/e2e/src/routes/mock.tsx`:
- Around line 1-19: Change the array type annotation from the bracket form to
the generic form for VALID_SCENARIOS to satisfy the lint rule: update the
declaration that references MockScenario so it uses Array<MockScenario> (e.g.,
change the type on the VALID_SCENARIOS constant from MockScenario[] to
Array<MockScenario>), leaving the values unchanged and keeping the MockScenario
type intact.
In `@packages/typescript/smoke-tests/e2e/tests/chat.spec.ts`:
- Line 1: The import groups are wrong: move the type-only Page into a separate
type import and alphabetize the value imports; change "import { test, expect,
Page } from '@playwright/test'" to value imports in alphabetical order (import {
expect, test } from '@playwright/test') and add a separate type import (import
type { Page } from '@playwright/test') so Page is a type-only import and value
imports are sorted.
🧹 Nitpick comments (10)
packages/typescript/ai-ollama/src/adapters/summarize.ts (1)
141-153: Add `TEXT_MESSAGE_END` event before `RUN_FINISHED` in summarize adapter.

The AG-UI protocol requires a message lifecycle: `TEXT_MESSAGE_START`, `TEXT_MESSAGE_CONTENT`, `TEXT_MESSAGE_END`, and `RUN_FINISHED`. The summarize adapter emits `TEXT_MESSAGE_CONTENT` directly followed by `RUN_FINISHED`, skipping the `TEXT_MESSAGE_END` event. This pattern is inconsistent with the text adapter and other provider implementations, which properly close messages before finishing runs.

packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (1)
72-95: Consider a secondary index for O(1) lookup by `toolCallId`.

Both `addToolCallArgsEvent` and `completeToolCall` iterate over all entries to find a tool call by `toolCallId`. While this is acceptable for typical usage (few concurrent tool calls), a secondary `Map<string, ToolCall>` keyed by `toolCallId` would provide O(1) lookups.

♻️ Suggested optimization

```diff
 export class ToolCallManager {
   private toolCallsMap = new Map<number, ToolCall>()
+  private toolCallsById = new Map<string, ToolCall>()
   private tools: ReadonlyArray<Tool>

 // In addToolCallStartEvent:
+  this.toolCallsById.set(event.toolCallId, toolCall)

 // In addToolCallArgsEvent/completeToolCall:
-  for (const [, toolCall] of this.toolCallsMap.entries()) {
-    if (toolCall.id === event.toolCallId) {
-      // ...
-      break
-    }
-  }
+  const toolCall = this.toolCallsById.get(event.toolCallId)
+  if (toolCall) {
+    // ...
+  }

 // In clear():
+  this.toolCallsById.clear()
```

docs/guides/streaming.md (1)
79-92: Verify the `STEP_FINISHED` event structure in the code example.

The code example shows accessing `chunk.content` and `chunk.delta` on a `STEP_FINISHED` event. Based on the test file (`stream-to-response.test.ts` lines 636-643), `STEP_FINISHED` does include `delta` and `content` fields for thinking content. However, the documentation should clarify that these fields represent the accumulated thinking content, not incremental updates like `TEXT_MESSAGE_CONTENT`.

Consider adding a brief note to distinguish the semantics of `delta`/`content` in thinking steps vs. text messages, as developers may expect incremental deltas here.

packages/typescript/ai/tests/stream-to-response.test.ts (1)
458-492: Consider adding proper typing to the `parseSSEStream` helper.

The helper function returns `Array<StreamChunk>` but the parsed JSON is cast without validation. While acceptable for test code, consider using type guards or Zod validation to ensure the parsed chunks conform to expected shapes, reducing the need for `as any` casts in assertions.

💡 Example with type narrowing

```typescript
function isTextMessageContent(chunk: StreamChunk): chunk is TextMessageContentEvent {
  return chunk.type === 'TEXT_MESSAGE_CONTENT'
}

// Then in tests:
if (isTextMessageContent(parsedChunks[0])) {
  expect(parsedChunks[0].messageId).toBe('msg-1')
}
```

packages/typescript/ai-client/tests/test-utils.ts (1)
149-212: Consider emitting TOOL_CALL_END events for complete tool call sequences.

The helper emits `TOOL_CALL_START` and `TOOL_CALL_ARGS` but not `TOOL_CALL_END`. According to the AG-UI protocol, `TOOL_CALL_END` should mark the completion of a tool call. If tests expect complete tool call lifecycles, this may cause test failures or incomplete coverage.

♻️ Proposed addition after TOOL_CALL_ARGS

```diff
 // TOOL_CALL_ARGS event
 chunks.push({
   type: 'TOOL_CALL_ARGS',
   toolCallId: toolCall.id,
   model,
   timestamp: Date.now(),
   delta: toolCall.arguments,
 })
+
+// TOOL_CALL_END event
+chunks.push({
+  type: 'TOOL_CALL_END',
+  toolCallId: toolCall.id,
+  toolName: toolCall.name,
+  model,
+  timestamp: Date.now(),
+  input: parsedInput,
+})
```

docs/protocol/chunk-definitions.md (1)
40-43: Document STATE_SNAPSHOT, STATE_DELTA, and CUSTOM event shapes.

`STATE_SNAPSHOT`, `STATE_DELTA`, and `CUSTOM` are listed in `AGUIEventType` but lack interface definitions in this document. Adding their shapes would complete the protocol documentation.

📝 Suggested additions after STEP_FINISHED section

---

### STATE_SNAPSHOT

Emitted to provide a full state synchronization.

```typescript
interface StateSnapshotEvent extends BaseAGUIEvent {
  type: 'STATE_SNAPSHOT';
  state: Record<string, unknown>;
}
```

### STATE_DELTA

Emitted to provide an incremental state update.

```typescript
interface StateDeltaEvent extends BaseAGUIEvent {
  type: 'STATE_DELTA';
  delta: Record<string, unknown>;
}
```

### CUSTOM

Custom event for extensibility. Used for application-specific events like approval requests.

```typescript
interface CustomEvent extends BaseAGUIEvent {
  type: 'CUSTOM';
  name: string;
  data?: unknown;
}
```

Example (approval request):

```json
{
  "type": "CUSTOM",
  "name": "approval-requested",
  "timestamp": 1701234567890,
  "data": {
    "toolCallId": "call_abc123",
    "toolName": "send_email",
    "input": { "to": "[email protected]" },
    "approval": { "id": "approval_xyz", "needsApproval": true }
  }
}
```

packages/typescript/ai/src/types.ts (1)

714-725: Consider a shared `TokenUsage` type to reduce duplication.

This keeps usage fields in sync across events/chunks.

♻️ Proposed refactor

```diff
+export interface TokenUsage {
+  promptTokens: number
+  completionTokens: number
+  totalTokens: number
+}
+
 export interface RunFinishedEvent extends BaseAGUIEvent {
   type: 'RUN_FINISHED'
   /** Run identifier */
   runId: string
   /** Why the generation stopped */
   finishReason: 'stop' | 'length' | 'content_filter' | 'tool_calls' | null
   /** Token usage statistics */
-  usage?: {
-    promptTokens: number
-    completionTokens: number
-    totalTokens: number
-  }
+  usage?: TokenUsage
 }

 export interface TextCompletionChunk {
   id: string
   model: string
   content: string
   role?: 'assistant'
   finishReason?: 'stop' | 'length' | 'content_filter' | null
-  usage?: {
-    promptTokens: number
-    completionTokens: number
-    totalTokens: number
-  }
+  usage?: TokenUsage
 }
```

Also applies to: 902-906
packages/typescript/smoke-tests/e2e/src/routes/index.tsx (2)
14-63: Prefer StreamProcessor-based SSE parsing over manual JSON parsing.

The custom reader drops invalid JSON rather than buffering/repairing partial chunks; using the built-in streaming utilities avoids subtle edge cases and keeps parsing consistent with the rest of the codebase. As per coding guidelines, implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses.

♻️ Possible refactor (reuse built-in SSE connection)

```diff
-function createMockConnection(scenario: MockScenario) {
-  return {
-    async *connect(
-      messages: Array<any>,
-      body: Record<string, any>,
-      abortSignal?: AbortSignal,
-    ) {
-      const response = await fetch('/api/mock-chat', {
-        method: 'POST',
-        headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify({ ...body, messages, scenario }),
-        signal: abortSignal,
-      })
-      ...
-    },
-  }
-}
+// Reuse fetchServerSentEvents (StreamProcessor-backed)
+const getMockConnection = () => fetchServerSentEvents('/api/mock-chat')
```

```diff
-  const connection = useMemo(() => {
-    if (apiMode === 'mock') {
-      return createMockConnection(mockScenario)
-    }
-    return fetchServerSentEvents('/api/tanchat')
-  }, [apiMode, mockScenario])
+  const connection = useMemo(() => {
+    return apiMode === 'mock'
+      ? getMockConnection()
+      : fetchServerSentEvents('/api/tanchat')
+  }, [apiMode])
```

```diff
-  const { messages, sendMessage, isLoading, stop, error } = useChat({
-    connection,
-  })
+  const { messages, sendMessage, isLoading, stop, error } = useChat({
+    connection,
+    body: apiMode === 'mock' ? { scenario: mockScenario } : undefined,
+  })
```
65-97: Deduplicate getMessageStats with /mock route.

This helper is identical in `mock.tsx`; consider extracting it to a shared test utility to avoid divergence.

packages/typescript/smoke-tests/e2e/src/routes/mock.tsx (1)
21-53: Deduplicate getMessageStats with index route.
Consider exporting this helper from a shared test util to keep stats logic consistent.
2. **With Thinking:**

```diff
-ThinkingStreamChunk (delta: "I need to...")
-ThinkingStreamChunk (delta: " check the weather")
-ContentStreamChunk (delta: "Let me check")
-DoneStreamChunk (finishReason: "stop")
+RUN_STARTED
+STEP_STARTED (stepType: "thinking")
+STEP_FINISHED (delta: "I need to...")
+STEP_FINISHED (delta: " check the weather")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "Let me check")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```
Add language specifier to fenced code block.
The linter flagged this code block as missing a language specifier. Since this is a sequence/flow diagram, consider using a text or plaintext identifier.
🩹 Suggested fix
-```
+```text
RUN_STARTED
STEP_STARTED (stepType: "thinking")
...

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
282-282: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 281 - 291, The fenced code
block showing the event sequence (starting with RUN_STARTED and STEP_STARTED
(stepType: "thinking")) is missing a language specifier; update that
triple-backtick fence to include a language token such as text or plaintext
(e.g., ```text) so the linter stops flagging it and the sequence/flow diagram
renders as plain text.
3. **Tool Usage:**

```diff
-ToolCallStreamChunk (name: "get_weather")
-ToolResultStreamChunk (content: "{...}")
-ContentStreamChunk (delta: "The weather is...")
-DoneStreamChunk (finishReason: "stop")
+RUN_STARTED
+TOOL_CALL_START (name: "get_weather")
+TOOL_CALL_ARGS / TOOL_CALL_END (result: "{...}")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "The weather is...")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```
Add language specifier to fenced code block.
🩹 Suggested fix
-```
+```text
RUN_STARTED
TOOL_CALL_START (name: "get_weather")
...
🧰 Tools

🪛 markdownlint-cli2 (0.18.1)
294-294: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 293 - 302, Add a language
specifier to the fenced code block that contains the protocol example starting
with RUN_STARTED / TOOL_CALL_START (name: "get_weather") so the block is marked
(e.g., ```text) instead of a plain fence; locate the fenced block that shows
RUN_STARTED, TOOL_CALL_START, TOOL_CALL_ARGS, TEXT_MESSAGE_START/CONTENT/END and
RUN_FINISHED and modify its opening backticks to include the language specifier.
4. **Client Tool with Approval:**

```diff
-ToolCallStreamChunk (name: "send_email")
-ApprovalRequestedStreamChunk (toolName: "send_email")
+RUN_STARTED
+TOOL_CALL_START (name: "send_email")
+TOOL_CALL_ARGS / TOOL_CALL_END
+CUSTOM (name: "approval-requested")
 [User approves]
-ToolInputAvailableStreamChunk (toolName: "send_email")
 [Client executes]
-ToolResultStreamChunk (content: "{\"sent\":true}")
-ContentStreamChunk (delta: "Email sent successfully")
-DoneStreamChunk (finishReason: "stop")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "Email sent successfully")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```
Add language specifier to fenced code block.
🩹 Suggested fix
-```
+```text
RUN_STARTED
TOOL_CALL_START (name: "send_email")
...

🧰 Tools

🪛 markdownlint-cli2 (0.18.1)
305-305: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 304 - 316, The fenced code
block showing the client tool approval sequence (containing tokens like
RUN_STARTED, TOOL_CALL_START (name: "send_email"), CUSTOM (name:
"approval-requested"), and RUN_FINISHED) lacks a language specifier; update the
opening fence from ``` to ```text (or another appropriate language) in
docs/protocol/chunk-definitions.md so the block becomes a labeled plain-text
code fence to enable correct formatting and highlighting.
```diff
-ToolCallStreamChunk (index: 0, name: "get_weather")
-ToolCallStreamChunk (index: 1, name: "get_time")
-ToolResultStreamChunk (toolCallId: "call_1")
-ToolResultStreamChunk (toolCallId: "call_2")
-ContentStreamChunk (delta: "Based on the data...")
-DoneStreamChunk (finishReason: "stop")
+RUN_STARTED
+TOOL_CALL_START (index: 0, name: "get_weather")
+TOOL_CALL_START (index: 1, name: "get_time")
+TOOL_CALL_END (toolCallId: "call_1", result: "...")
+TOOL_CALL_END (toolCallId: "call_2", result: "...")
+TEXT_MESSAGE_START
+TEXT_MESSAGE_CONTENT (delta: "Based on the data...")
+TEXT_MESSAGE_END
+RUN_FINISHED (finishReason: "stop")
```
Add language specifier to fenced code block.
🩹 Suggested fix
-```
+```text
RUN_STARTED
TOOL_CALL_START (index: 0, name: "get_weather")
...
🧰 Tools

🪛 markdownlint-cli2 (0.18.1)
322-322: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 321 - 332, The fenced code
block showing the protocol example (containing tokens like RUN_STARTED,
TOOL_CALL_START, TOOL_CALL_END, TEXT_MESSAGE_START, TEXT_MESSAGE_CONTENT,
RUN_FINISHED) needs a language specifier for correct rendering; update the
opening fence from ``` to ```text so the block becomes ```text and preserves
formatting/monospace rendering for those entries.
```python
class RunStartedEvent(TypedDict):
    """Emitted when a run starts. This is the first event in any streaming response."""

    type: Literal["RUN_STARTED"]
    runId: str
    timestamp: int
    model: Optional[str]
    threadId: Optional[str]
```
`RunStartedEvent` has incorrect field requirements.

`model` and `threadId` should be optional (as per the TypeScript definition), but this TypedDict uses `total=True`, which makes all fields required. Consider using `total=False` or explicitly marking required fields with `Required[]`.
🩹 Suggested fix
```diff
-class RunStartedEvent(TypedDict):
+class RunStartedEvent(TypedDict, total=False):
     """Emitted when a run starts. This is the first event in any streaming response."""

-    type: Literal["RUN_STARTED"]
-    runId: str
-    timestamp: int
-    model: Optional[str]
-    threadId: Optional[str]
+    type: Required[Literal["RUN_STARTED"]]
+    runId: Required[str]
+    timestamp: Required[int]
+    model: Optional[str]
+    threadId: Optional[str]
```

Note: requires `from typing import Required` (Python 3.11+) or `from typing_extensions import Required`.
🤖 Prompt for AI Agents
In `@packages/python/tanstack-ai/src/tanstack_ai/types.py` around lines 146 - 154,
The RunStartedEvent TypedDict currently makes all fields required; update the
RunStartedEvent definition so model and threadId are optional like the
TypeScript version: either convert the TypedDict to total=False (e.g., class
RunStartedEvent(TypedDict, total=False)) and then mark required fields
explicitly with Required[...] for type, runId, and timestamp, or keep total=True
but wrap only the required keys with Required[]; if using Required, add the
appropriate import (from typing import Required for Python 3.11+ or from
typing_extensions import Required).
```diff
+      // AG-UI TEXT_MESSAGE_CONTENT event
+      if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+        chunkData.delta = chunk.delta
+        chunkData.content = chunk.content
-        chunkData.role = chunk.role
-        const delta = chunk.delta || chunk.content || ''
+        chunkData.role = 'assistant'
+        const delta = chunk.delta || ''
         fullResponse += delta

-        if (chunk.role === 'assistant') {
-          if (!assistantDraft) {
-            assistantDraft = {
-              role: 'assistant',
-              content: chunk.content || '',
-              toolCalls: [],
-            }
-          } else {
-            assistantDraft.content = (assistantDraft.content || '') + delta
+        if (!assistantDraft) {
+          assistantDraft = {
+            role: 'assistant',
+            content: chunk.content || '',
+            toolCalls: [],
+          }
+        } else {
+          assistantDraft.content = (assistantDraft.content || '') + delta
+        }
-      } else if (chunk.type === 'tool_call') {
-        const id = chunk.toolCall.id
-        const existing = toolCallMap.get(id) || {
-          id,
-          name: chunk.toolCall.function.name,
-          arguments: '',
-        }
-        existing.arguments += chunk.toolCall.function.arguments || ''
-        toolCallMap.set(id, existing)

-        chunkData.toolCall = chunk.toolCall
       }
```
Avoid dropping the first text delta when content is absent.

If `chunk.content` is undefined, the first `chunk.delta` never reaches `assistantDraft.content`, because the draft is initialized with `chunk.content` and the delta is only appended on later chunks; `fullResponse` also ignores `content` when `delta` is missing. Initialize from a unified fallback.

🔧 Proposed fix
- const delta = chunk.delta || ''
+ const delta =
+ chunk.delta !== undefined
+ ? chunk.delta
+ : chunk.content !== undefined
+ ? chunk.content
+ : ''
fullResponse += delta
if (!assistantDraft) {
assistantDraft = {
role: 'assistant',
- content: chunk.content || '',
+ content: delta,
toolCalls: [],
}
} else {
assistantDraft.content = (assistantDraft.content || '') + delta
}π€ Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 199 -
216, The code drops the first text delta when chunk.content is undefined because
assistantDraft is initialized with chunk.content while delta is appended only on
later chunks; to fix, compute a unified fallback delta (e.g., let delta =
chunk.delta || chunk.content || '') and use that for both fullResponse and
assistantDraft initialization/update: add delta to fullResponse, set
assistantDraft.content = delta when creating the draft, and append delta to
assistantDraft.content for subsequent chunks; update references to
chunk.delta/chunk.content accordingly in the TEXT_MESSAGE_CONTENT handling block
(assistantDraft, fullResponse, chunkData).
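The failure mode above can be reproduced with a tiny reducer. This is an illustrative sketch under stated assumptions (the `Draft` shape and function names are hypothetical, not the harness's actual code):

```typescript
// Sketch of the first-delta drop: initializing the draft from content while
// only appending delta on later chunks loses the first delta whenever
// content is absent. Draft/applyBuggy/applyFixed are illustrative names.
interface Draft {
  content: string
}

function applyBuggy(draft: Draft | null, delta?: string, content?: string): Draft {
  if (!draft) {
    // First chunk: only content is consulted, so a delta-only chunk is lost
    return { content: content || '' }
  }
  draft.content += delta || ''
  return draft
}

function applyFixed(draft: Draft | null, delta?: string, content?: string): Draft {
  // Unified fallback: prefer delta, then content, then empty string
  const d = delta !== undefined ? delta : content !== undefined ? content : ''
  if (!draft) {
    return { content: d }
  }
  draft.content += d
  return draft
}

// Both chunks carry only a delta (content undefined)
let buggy: Draft | null = null
buggy = applyBuggy(buggy, 'Hel', undefined)
buggy = applyBuggy(buggy, 'lo', undefined)
console.log(buggy.content) // lo

let fixed: Draft | null = null
fixed = applyFixed(fixed, 'Hel', undefined)
fixed = applyFixed(fixed, 'lo', undefined)
console.log(fixed.content) // Hello
```

The buggy reducer silently drops the first delta; the unified fallback keeps it while still preferring `delta` when both fields are present.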
```diff
+      // AG-UI TOOL_CALL_ARGS event
+      else if (chunk.type === 'TOOL_CALL_ARGS') {
+        const id = chunk.toolCallId
+        const existing = toolCallsInProgress.get(id)
+        if (existing) {
+          existing.args = chunk.args || existing.args + (chunk.delta || '')
+        }
-      } else if (chunk.type === 'tool_result') {
-        chunkData.toolCallId = chunk.toolCallId
-        chunkData.content = chunk.content
-        toolResults.push({
-          toolCallId: chunk.toolCallId,
-          content: chunk.content,
-        })
-        reconstructedMessages.push({
-          role: 'tool',
-          toolCallId: chunk.toolCallId,
-          content: chunk.content,
+        chunkData.delta = chunk.delta
+        chunkData.args = chunk.args
       }
```
Preserve explicit empty args in TOOL_CALL_ARGS.

Using `||` treats `''` as missing and can keep stale args. Prefer an undefined check.

🔧 Proposed fix
- if (existing) {
- existing.args = chunk.args || existing.args + (chunk.delta || '')
- }
+ if (existing) {
+ if (chunk.args !== undefined) {
+ existing.args = chunk.args
+ } else if (chunk.delta !== undefined) {
+ existing.args = existing.args + chunk.delta
+ }
+ }π€ Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 232 -
243, The TOOL_CALL_ARGS branch uses "chunk.args || ..." which treats an empty
string as missing and preserves stale args; update the assignment in the block
that handles chunk.type === 'TOOL_CALL_ARGS' (referencing toolCallsInProgress,
existing.args, chunk.args, chunk.delta, and chunkData.*) to check for undefined
explicitly: set existing.args to chunk.args if chunk.args !== undefined,
otherwise append chunk.delta (using a nullish check for chunk.delta to allow
empty strings) so empty args are preserved and deltas still apply.
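The distinction between `||` and an explicit undefined check can be shown with a small sketch. Names mirror the review (`args`, `delta`), but the `ArgsState` shape and functions are illustrative assumptions, not the harness's actual code:

```typescript
// Sketch: why `chunk.args || ...` keeps stale args when an explicit empty
// string arrives, and how an undefined check preserves it.
interface ArgsState {
  args: string
}

// Buggy: '' is falsy, so an explicit empty args value falls through to the
// delta-accumulation branch and the stale args survive.
function applyBuggy(state: ArgsState, args?: string, delta?: string): void {
  state.args = args || state.args + (delta || '')
}

// Fixed: only accumulate deltas when args is truly undefined.
function applyFixed(state: ArgsState, args?: string, delta?: string): void {
  if (args !== undefined) {
    state.args = args
  } else if (delta !== undefined) {
    state.args = state.args + delta
  }
}

const buggy: ArgsState = { args: '{"partial"' }
applyBuggy(buggy, '', undefined)
console.log(JSON.stringify(buggy.args)) // stale accumulated args survive

const fixed: ArgsState = { args: '{"partial"' }
applyFixed(fixed, '', undefined)
console.log(JSON.stringify(fixed.args)) // explicit empty args preserved
```

The same nullish-style check also lets empty-string deltas append without being mistaken for "no delta".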
```tsx
import { useState, useMemo } from 'react'
import { createFileRoute } from '@tanstack/react-router'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'
```
Fix linted import ordering.
Static analysis reports `sort-imports` errors for `useMemo` and `fetchServerSentEvents`.
Suggested fix
-import { useState, useMemo } from 'react'
+import { useMemo, useState } from 'react'
import { createFileRoute } from '@tanstack/react-router'
-import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
+import { fetchServerSentEvents, useChat } from '@tanstack/ai-react'

Committable suggestion
IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { useMemo, useState } from 'react'
import { createFileRoute } from '@tanstack/react-router'
import { fetchServerSentEvents, useChat } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'
Tools
ESLint
[error] 1-1: Member 'useMemo' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 3-3: Member 'fetchServerSentEvents' of the import declaration should be sorted alphabetically.
(sort-imports)
Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/src/routes/index.tsx` around lines 1-4:
reorder the named imports to satisfy the linter: alphabetize members within each
import so useMemo precedes useState in the React import and fetchServerSentEvents
precedes useChat in the `@tanstack/ai-react` import; keep the import groups intact
(React, `@tanstack/*` packages, types) and ensure the import lines reference the
same symbols (useState, useMemo, createFileRoute, useChat,
fetchServerSentEvents, UIMessage) so the sort-imports rule is satisfied.
import { useMemo, useState } from 'react'
import { createFileRoute, useSearch } from '@tanstack/react-router'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import type { UIMessage } from '@tanstack/ai-client'

type MockScenario =
  | 'simple-text'
  | 'tool-call'
  | 'multi-tool'
  | 'text-tool-text'
  | 'error'

const VALID_SCENARIOS: MockScenario[] = [
  'simple-text',
  'tool-call',
  'multi-tool',
  'text-tool-text',
  'error',
]
Analysis chain

Script executed:
# Find ESLint configuration files
find . -type f -name ".eslintrc*" -o -name "eslint.config.*" | head -20

Repository: TanStack/ai

Script executed:
# Check the project structure
ls -la packages/typescript/smoke-tests/

Script executed:
# Look for ESLint config in the workspace root
ls -la | grep -i eslint

Script executed:
# Read the root ESLint config
cat eslint.config.js

Script executed:
# Check if there's an ESLint config in smoke-tests
find packages/typescript/smoke-tests -name "eslint.config.*" -o -name ".eslintrc*"

Script executed:
# Check the actual target file to confirm import order
cat packages/typescript/smoke-tests/e2e/src/routes/mock.tsx | head -20

Script executed:
# Check the `@tanstack/eslint-config` package
find node_modules -path "*/@tanstack/eslint-config*" -type f -name "*.js" -o -name "*.json" | head -20

Script executed:
# Search for rules related to import/order or array types in the codebase
rg "import/order|array-type|prefer-array.*type" --type js --type json -i

Script executed:
# Check if there's a package.json in smoke-tests for any special config
cat packages/typescript/smoke-tests/package.json

Script executed:
# Look for `@tanstack/eslint-config` in node_modules
ls -la node_modules/.pnpm | grep eslint-config | head -10

Script executed:
# Read the import rules configuration
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/import.js"

Script executed:
# Read the TypeScript rules configuration
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/typescript.js"

Script executed:
# Also check the main index to see all included configs
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/index.js"

Script executed:
# Check if alphabetize is mentioned anywhere in eslint configs
rg "alphabetize" node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/ 2>/dev/null || echo "No alphabetize found"

Script executed:
# Check if there are any other ESLint config overrides in the project root
grep -r "alphabetize\|import/order" eslint.config.js packages/*/eslint.config.js 2>/dev/null | head -20

Script executed:
# Verify the stylistic.js file in case it has import ordering rules
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/stylistic.js"
Change array type syntax to generic format.
The @typescript-eslint/array-type rule requires Array<MockScenario> instead of MockScenario[].
Suggested fix
-const VALID_SCENARIOS: MockScenario[] = [
+const VALID_SCENARIOS: Array<MockScenario> = [
'simple-text',
'tool-call',
'multi-tool',
'text-tool-text',
'error',
]

Tools
ESLint
[error] 3-3: Member 'fetchServerSentEvents' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 13-13: Array type using 'MockScenario[]' is forbidden. Use 'Array<MockScenario>' instead.
(@typescript-eslint/array-type)
Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/src/routes/mock.tsx` around lines 1-19:
change the array type annotation from the bracket form to the generic form for
VALID_SCENARIOS to satisfy the lint rule: update the declaration that references
MockScenario so it uses Array<MockScenario> (e.g., change the type on the
VALID_SCENARIOS constant from MockScenario[] to Array<MockScenario>), leaving
the values unchanged and keeping the MockScenario type intact.
@@ -1,11 +1,95 @@
-import { test, expect } from '@playwright/test'
+import { test, expect, Page } from '@playwright/test'
Analysis chain

Script executed:
# Check if the file exists and its current state
cat -n packages/typescript/smoke-tests/e2e/tests/chat.spec.ts | head -20

Script executed:
# Look for ESLint configuration files
fd -e "eslintrc*" -e ".eslintignore" -o -name "eslint.config.*" -o -name "package.json" | head -20

Script executed:
# Check the root package.json for ESLint rules
head -100 package.json

Script executed:
# Look for TypeScript and ESLint configuration patterns
find . -maxdepth 3 -type f \( -name ".eslintrc*" -o -name "eslint.config.*" -o -name "tsconfig.json" \) 2>/dev/null | head -20

Script executed:
cat eslint.config.js

Script executed:
# Check if 'test' is used in the file
grep -n "test\(" packages/typescript/smoke-tests/e2e/tests/chat.spec.ts | head -5

Script executed:
# Search for how Page is exported from `@playwright/test`
# First, let's check if there's a node_modules or type definitions we can inspect
find packages/typescript/smoke-tests -name "*.d.ts" -o -name "package.json" | grep playwright

Script executed:
# Check if 'test' is used in the file (using rg instead of grep for better escaping)
rg "test\." packages/typescript/smoke-tests/e2e/tests/chat.spec.ts | head -10

Script executed:
# Look at the full file to see imports and test usage
cat packages/typescript/smoke-tests/e2e/tests/chat.spec.ts

Script executed:
# Check `@tanstack/eslint-config` to understand the import ordering rules
# First find where it's located
find node_modules -name "eslint-config" -type d 2>/dev/null | grep tanstack | head -1

Script executed:
# Check the eslint-config to understand import ordering rules
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/dist/index.js" | head -100

Script executed:
# Look for import sorting rules in the eslint config
rg "import" "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config" -A 5 | head -50

Script executed:
# Check if Page from `@playwright/test` is actually a type-only export
# We need to check the `@playwright/test` module definition
rg "export.*Page" node_modules/@playwright/test/index.d.ts 2>/dev/null | head -20

Script executed:
# Look for the `@playwright/test` type definitions
find node_modules/@playwright -name "index.d.ts" 2>/dev/null | head -5

Script executed:
# Check if Page is exported as type-only
rg "type Page\|export.*Page" node_modules/@playwright/test 2>/dev/null | head -20

Script executed:
# Let's find the playwright package and check its structure
find node_modules -path "*/@playwright/test" -type d 2>/dev/null | head -3

Script executed:
# List the files in the `@playwright/test` directory
ls -la node_modules/@playwright/test/ 2>/dev/null | head -20

Script executed:
# Try searching for Page in a different way
grep -r "export.*Page" node_modules/@playwright/test/ 2>/dev/null | head -10

Script executed:
# Check the eslint-plugin-import-x configuration in the tanstack config
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/index.js" 2>/dev/null | head -150

Script executed:
# Let's check the import-x plugin rules
rg "import-x" "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src" -A 10 | head -100

Script executed:
# Look at all the source files in the eslint config
ls -la "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/"

Script executed:
# Get the full import.js file
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/import.js"

Script executed:
# Also check the typescript.js for consistent-type-imports rule
cat "node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__b544848c059229b1bd2850611abc5e4a/node_modules/@tanstack/eslint-config/src/typescript.js" | grep -A 5 "consistent-type-imports"

Script executed:
# Now let's verify if the suggested fix is correct by checking the Playwright test documentation
web_search "playwright test Page type export"

Script executed:
# Let me verify the import ordering more carefully by checking if there's any custom eslint config override for this file
find packages/typescript/smoke-tests -name ".eslintrc*" -o -name "eslint.config.*"

Script executed:
# Check if there's a .eslintignore
find packages/typescript/smoke-tests -name ".eslintignore"
Separate type-only import and order value imports alphabetically.
`Page` is used only as a type annotation and must be imported with `import type`. The import/order rule requires type imports to appear after value imports, and value imports must be sorted alphabetically.
Suggested fix
-import { test, expect, Page } from '@playwright/test'
+import { expect, test } from '@playwright/test'
+import type { Page } from '@playwright/test'

Committable suggestion
import { expect, test } from '@playwright/test'
import type { Page } from '@playwright/test'
Tools
ESLint
[error] 1-1: Imports "Page" are only used as type.
(@typescript-eslint/consistent-type-imports)
[error] 1-1: Member 'expect' of the import declaration should be sorted alphabetically.
(sort-imports)
Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/tests/chat.spec.ts` at line 1: the import
groups are wrong: move the type-only Page into a separate type import and
alphabetize the value imports; change "import { test, expect, Page } from
'@playwright/test'" to value imports in alphabetical order (import { expect,
test } from '@playwright/test') and add a separate type import (import type {
Page } from '@playwright/test') so Page is a type-only import and value imports
are sorted.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
Outside diff range comments (1)
testing/panel/src/routes/summarize.tsx (1)
72-105: Handle partial SSE chunks with buffering between reads. The per-line JSON parse ignores incomplete SSE events; if a `data:` line is split across reads, that event is lost and the summary misses deltas. Buffer incomplete lines between reads:

Buffering fix

  let accumulatedSummary = ''
+ let buffer = ''
- const text = decoder.decode(value)
- const lines = text.split('\n')
+ buffer += decoder.decode(value, { stream: true })
+ const lines = buffer.split('\n')
+ buffer = lines.pop() ?? ''

Note: StreamProcessor is designed for structured event objects from the AI SDK, not SSE-formatted text streams. Since this endpoint wraps events in SSE format on the server, buffering incomplete lines is the appropriate fix.
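The buffering pattern can be factored into a small helper; `processLine` stands in for the route's existing per-line JSON parsing (a sketch, not the panel's actual code):

```typescript
// Splits a stream of text chunks into complete lines, holding back the
// trailing partial line until the next chunk (or flush) completes it.
function createSseLineSplitter(processLine: (line: string) => void) {
  let buffer = ''
  return {
    push(chunk: string): void {
      buffer += chunk
      const lines = buffer.split('\n')
      buffer = lines.pop() ?? '' // keep the incomplete tail for the next read
      for (const line of lines) processLine(line)
    },
    // Call once the reader reports done, so a final unterminated line survives.
    flush(): void {
      if (buffer.length > 0) processLine(buffer)
      buffer = ''
    },
  }
}

const seen: Array<string> = []
const splitter = createSseLineSplitter((line) => {
  if (line.startsWith('data: ')) seen.push(line.slice(6))
})
splitter.push('data: {"del') // partial line: nothing emitted yet
splitter.push('ta":"hi"}\ndata: {"delta":"!"}')
splitter.flush()
console.log(seen) // ['{"delta":"hi"}', '{"delta":"!"}']
```

The same shape works inside the `while (!done)` read loop: call `push` per decoded chunk and `flush` after the loop.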
Fix all issues with AI agents
In @.changeset/ag-ui-events.md:
- Around lines 14-19: update the changeset text to document all AG-UI protocol
events introduced in the PR by adding the missing event types and short
descriptions: include RUN_ERROR (error lifecycle event), STEP_STARTED and
STEP_FINISHED (step lifecycle events), STATE_SNAPSHOT and STATE_DELTA (state
events), and CUSTOM (custom events), alongside the already listed
RUN_STARTED/RUN_FINISHED, TEXT_MESSAGE_*, and TOOL_CALL_* entries; ensure each
event name (e.g., RUN_ERROR, STEP_STARTED, STATE_SNAPSHOT, CUSTOM) appears in
the list with a one-line description so the changelog fully reflects the
protocol surface.
- Around lines 2-9: the changeset currently marks several packages
('@tanstack/ai', '@tanstack/ai-client', '@tanstack/ai-openai',
'@tanstack/ai-anthropic', '@tanstack/ai-gemini', '@tanstack/ai-grok',
'@tanstack/ai-ollama', '@tanstack/ai-openrouter') as "minor" but the diff and
content state that legacy chunk formats are no longer accepted (breaking API),
so update the changeset to mark each of those package entries as "major" instead
of "minor" and ensure the changeset message reflects the breaking change so the
release tooling will perform a major bump for these packages.
- Around lines 2-9: the changeset erroneously includes '@tanstack/ai-client'
which lacks AG-UI event implementations; either remove '@tanstack/ai-client'
from the changeset or implement the missing AG-UI event handlers (RUN_STARTED,
TEXT_MESSAGE_START, TOOL_CALL_START, RUN_FINISHED, RUN_ERROR) inside the
'@tanstack/ai-client' adapter (add the corresponding event-emission or handling
code and tests to match the other adapters such as '@tanstack/ai-grok' and
'@tanstack/ai-openrouter'); ensure the adapter exports or wires the same event
constants/handlers so the release contains consistent AG-UI support.
In `@scripts/distribute-keys.ts`:
- Around lines 30-34: change explicit square-bracket array type annotations to
the project's preferred generic form: in function findExampleEnvFiles replace
the return type and local declaration annotations from string[] to Array<string>
(update the function signature and the envFiles variable). Do the same for the
other occurrence around lines 78-80 (replace any string[] annotations with
Array<string>) so the code conforms to `@typescript-eslint/array-type`.
In `@testing/panel/src/routes/api.structured.ts`:
- Around lines 61-104: the code currently uses unsafe "as any" casts when passing
actualModel to adapter factories (see defaultModels, actualModel, and adapter
factory functions like anthropicText, openaiText, geminiText, grokText,
ollamaText, openRouterText and createChatOptions); remove those casts and
introduce a discriminated union mapping type (e.g., ModelByProvider) that maps
each Provider literal to its allowed model union, then narrow actualModel to
ModelByProvider[typeof provider] (or use a typed switch on provider) so each
branch passes a properly typed model to anthropicText/openaiText/etc. without
casting, preserving per-provider compile-time validation.
In `@testing/panel/src/routes/api.summarize.ts`:
- Around lines 31-74: the adapters are currently passed actualModel using `as
any` which bypasses compile-time model validation; remove all `as any` casts in
the adapterConfig and instead narrow actualModel to a provider-specific model
type before calling each adapter factory (e.g., introduce a conditional type
like ValidSummarizeModel<P extends Provider> and compute const validModel =
actualModel as ValidSummarizeModel<typeof provider> or implement a
discriminated-union mapping of providerβmodel), then pass that typed
`validModel` into createSummarizeOptions calls (for functions such as
openaiSummarize, anthropicSummarize, geminiSummarize, grokSummarize,
ollamaSummarize, openRouterSummarize) so TypeScript enforces per-provider model
correctness at compile time.
Nitpick comments (4)
scripts/distribute-keys.ts (1)
17-25: Rename `STATIC_ENV_LOCAL_PATHS` to camelCase. The constant name violates the camelCase naming rule for variables.

Proposed change

-const STATIC_ENV_LOCAL_PATHS = [
+const staticEnvLocalPaths = [
   'testing/panel/.env.local',
   'packages/typescript/smoke-tests/e2e/.env.local',
   'packages/typescript/smoke-tests/adapters/.env.local',
   'packages/typescript/ai-code-mode/.env.local',
   'packages/typescript/ai-anthropic/live-tests/.env.local',
   'packages/typescript/ai-openai/live-tests/.env.local',
 ]
@@
-  const allEnvPaths = [...STATIC_ENV_LOCAL_PATHS, ...exampleEnvFiles]
+  const allEnvPaths = [...staticEnvLocalPaths, ...exampleEnvFiles]

As per coding guidelines, `**/*.{ts,tsx,js,jsx}`: Use camelCase for function and variable names throughout the codebase.

Also applies to: 117-117
testing/panel/src/routes/api.chat.ts (2)
56-62: Consider reusing the shared `Provider` type to avoid drift.
Importing `Provider` from `testing/panel/src/lib/model-selection.ts` keeps UI options and API routing in sync.
195-198: Avoid `as any` for OpenRouter model selection.
Use a provider-specific model union (or model-meta mapping) so `openRouterText` remains type-safe. As per coding guidelines: Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety.
testing/panel/src/routes/api.summarize.ts (1)
5-8: This import pattern aligns with the current package exports: while the tree-shakeable `/adapters` subpath pattern is specified in architectural guidelines, it has not yet been implemented in the provider packages' `package.json` exports. The root-level imports (from '@tanstack/ai-grok', etc.) are currently the only available approach. Consider adding `/adapters` subpath exports to each provider package's `package.json` (similar to the core `@tanstack/ai` package) as a separate refactoring effort to enable tree-shaking benefits.
'@tanstack/ai': minor
'@tanstack/ai-client': minor
'@tanstack/ai-openai': minor
'@tanstack/ai-anthropic': minor
'@tanstack/ai-gemini': minor
'@tanstack/ai-grok': minor
'@tanstack/ai-ollama': minor
'@tanstack/ai-openrouter': minor
BREAKING: This must be a major version bump, not minor.
The change explicitly states "previous legacy chunk formats are no longer accepted" (line 20), which is a breaking API change. Per semantic versioning, breaking changes require a major version bump, not minor.
Proposed fix
---
-'@tanstack/ai': minor
-'@tanstack/ai-client': minor
-'@tanstack/ai-openai': minor
-'@tanstack/ai-anthropic': minor
-'@tanstack/ai-gemini': minor
-'@tanstack/ai-grok': minor
-'@tanstack/ai-ollama': minor
-'@tanstack/ai-openrouter': minor
+'@tanstack/ai': major
+'@tanstack/ai-client': major
+'@tanstack/ai-openai': major
+'@tanstack/ai-anthropic': major
+'@tanstack/ai-gemini': major
+'@tanstack/ai-grok': major
+'@tanstack/ai-ollama': major
+'@tanstack/ai-openrouter': major
---

Committable suggestion
'@tanstack/ai': major
'@tanstack/ai-client': major
'@tanstack/ai-openai': major
'@tanstack/ai-anthropic': major
'@tanstack/ai-gemini': major
'@tanstack/ai-grok': major
'@tanstack/ai-ollama': major
'@tanstack/ai-openrouter': major
Prompt for AI Agents
In @.changeset/ag-ui-events.md around lines 2-9: the changeset currently marks
several packages ('@tanstack/ai', '@tanstack/ai-client', '@tanstack/ai-openai',
'@tanstack/ai-anthropic', '@tanstack/ai-gemini', '@tanstack/ai-grok',
'@tanstack/ai-ollama', '@tanstack/ai-openrouter') as "minor" but the diff and
content state that legacy chunk formats are no longer accepted (breaking API),
so update the changeset to mark each of those package entries as "major" instead
of "minor" and ensure the changeset message reflects the breaking change so the
release tooling will perform a major bump for these packages.
Analysis chain

Script executed:
# First, let's understand the repository structure
find . -maxdepth 3 -type d -name "packages" | head -5
ls -la packages/typescript/ 2>/dev/null | grep "ai" || echo "Checking alternate structure..."

Script executed:
#!/bin/bash
# Description: Verify AG-UI event implementation in all listed adapters.
# Search for AG-UI event type usage in each adapter package
for pkg in ai ai-client ai-openai ai-anthropic ai-gemini ai-grok ai-ollama ai-openrouter; do
  echo "=== Checking @tanstack/$pkg ==="
  # Look for AG-UI event emissions (RUN_STARTED, TEXT_MESSAGE_START, etc.)
  rg -n --type=ts -C2 'RUN_STARTED|TEXT_MESSAGE_START|TOOL_CALL_START|RUN_FINISHED|RUN_ERROR' \
    "packages/typescript/$pkg/src/" 2>/dev/null || echo "  ⚠️ No AG-UI events found"
done
Remove @tanstack/ai-client from the changeset or add AG-UI event implementations.
Verification found that @tanstack/ai-client does not contain AG-UI event implementations (RUN_STARTED, TEXT_MESSAGE_START, TOOL_CALL_START, RUN_FINISHED, RUN_ERROR), unlike the other 8 adapter packages. Either this package should be removed from the changeset, or the missing AG-UI event handling should be implemented before release.
The initially questioned packages (@tanstack/ai-grok and @tanstack/ai-openrouter) do have AG-UI event implementations and can remain in the changeset.
Prompt for AI Agents
In @.changeset/ag-ui-events.md around lines 2-9: the changeset erroneously
includes '@tanstack/ai-client' which lacks AG-UI event implementations; either
remove '@tanstack/ai-client' from the changeset or implement the missing AG-UI
event handlers (RUN_STARTED, TEXT_MESSAGE_START, TOOL_CALL_START, RUN_FINISHED,
RUN_ERROR) inside the '@tanstack/ai-client' adapter (add the corresponding
event-emission or handling code and tests to match the other adapters such as
'@tanstack/ai-grok' and '@tanstack/ai-openrouter'); ensure the adapter exports
or wires the same event constants/handlers so the release contains consistent
AG-UI support.
All text adapters now emit AG-UI protocol events only:

- `RUN_STARTED` / `RUN_FINISHED` - Run lifecycle events
- `TEXT_MESSAGE_START` / `TEXT_MESSAGE_CONTENT` / `TEXT_MESSAGE_END` - Text message streaming
- `TOOL_CALL_START` / `TOOL_CALL_ARGS` / `TOOL_CALL_END` - Tool call streaming
Document all AG-UI event types introduced in this PR.
The changeset omits several event types mentioned in the broader PR changes. Based on the AI summary, the following event types are also part of the AG-UI protocol but are not documented here:
- `RUN_ERROR` - error lifecycle event
- `STEP_STARTED` / `STEP_FINISHED` - step lifecycle events
- `STATE_SNAPSHOT` / `STATE_DELTA` - state events
- `CUSTOM` - custom events
Include all introduced event types in the changeset description to provide a complete changelog for users.
Prompt for AI Agents
In @.changeset/ag-ui-events.md around lines 14-19: update the changeset text
to document all AG-UI protocol events introduced in the PR by adding the missing
event types and short descriptions: include RUN_ERROR (error lifecycle event),
STEP_STARTED and STEP_FINISHED (step lifecycle events), STATE_SNAPSHOT and
STATE_DELTA (state events), and CUSTOM (custom events), alongside the already
listed RUN_STARTED/RUN_FINISHED, TEXT_MESSAGE_*, and TOOL_CALL_* entries; ensure
each event name (e.g., RUN_ERROR, STEP_STARTED, STATE_SNAPSHOT, CUSTOM) appears
in the list with a one-line description so the changelog fully reflects the
protocol surface.
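Taken together, the event surface can be modeled as a discriminated union. The event names below come from this PR's changeset, but the payload fields shown are assumptions for illustration, not the protocol's actual schemas:

```typescript
type AgUiEvent =
  | { type: 'RUN_STARTED'; runId: string }
  | { type: 'RUN_FINISHED'; runId: string }
  | { type: 'RUN_ERROR'; runId: string; message: string }
  | { type: 'TEXT_MESSAGE_START'; messageId: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string }
  | { type: 'TEXT_MESSAGE_END'; messageId: string }
  | { type: 'TOOL_CALL_START'; toolCallId: string; toolName: string }
  | { type: 'TOOL_CALL_ARGS'; toolCallId: string; delta: string }
  | { type: 'TOOL_CALL_END'; toolCallId: string }
  | { type: 'STEP_STARTED'; stepId: string }
  | { type: 'STEP_FINISHED'; stepId: string }
  | { type: 'STATE_SNAPSHOT'; state: unknown }
  | { type: 'STATE_DELTA'; patch: unknown }
  | { type: 'CUSTOM'; name: string; payload: unknown }

// A consumer can switch on `type`; keeping every variant in the union lets
// the compiler flag any handler that a later protocol addition would miss.
function describeEvent(event: AgUiEvent): string {
  switch (event.type) {
    case 'RUN_STARTED':
    case 'RUN_FINISHED':
      return `run ${event.runId}: ${event.type}`
    case 'RUN_ERROR':
      return `run ${event.runId} failed: ${event.message}`
    case 'TEXT_MESSAGE_CONTENT':
      return `message ${event.messageId} += ${event.delta}`
    default:
      return event.type
  }
}

console.log(describeEvent({ type: 'RUN_STARTED', runId: 'r1' })) // 'run r1: RUN_STARTED'
```

Listing all fourteen variants in one union is also what makes the changelog check mechanical: any event name missing from the changeset shows up as an unhandled union member.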
function findExampleEnvFiles(projectRoot: string): string[] {
  const examplesDir = path.join(projectRoot, 'examples')
  if (!fs.existsSync(examplesDir)) return []

  const envFiles: string[] = []
Fix ESLint array-type violations.
The explicit `string[]` annotations violate `@typescript-eslint/array-type` and will fail lint.
Proposed fix
-function findExampleEnvFiles(projectRoot: string): string[] {
+function findExampleEnvFiles(projectRoot: string): Array<string> {
@@
- const envFiles: string[] = []
+ const envFiles: Array<string> = []
@@
-function serializeEnvFile(entries: Map<string, string>): string {
- const lines: string[] = []
+function serializeEnvFile(entries: Map<string, string>): string {
+ const lines: Array<string> = []

Also applies to: 78-80
Tools
ESLint
[error] 30-30: Array type using 'string[]' is forbidden. Use 'Array<string>' instead.
(@typescript-eslint/array-type)
[error] 34-34: Array type using 'string[]' is forbidden. Use 'Array<string>' instead.
(@typescript-eslint/array-type)
Prompt for AI Agents
In `@scripts/distribute-keys.ts` around lines 30-34: change explicit
square-bracket array type annotations to the project's preferred generic form:
in function findExampleEnvFiles replace the return type and local declaration
annotations from string[] to Array<string> (update the function signature and
the envFiles variable). Do the same for the other occurrence around lines 78-80
(replace any string[] annotations with Array<string>) so the code conforms to
`@typescript-eslint/array-type`.
// Don't set a global default - let each adapter use its own default model
const model: string | undefined = data.model || body.model

try {
  // Default models per provider
  const defaultModels: Record<Provider, string> = {
    anthropic: 'claude-sonnet-4-5',
    gemini: 'gemini-2.0-flash',
    grok: 'grok-3-mini',
    ollama: 'mistral:7b',
    openai: 'gpt-4o',
    openrouter: 'openai/gpt-4o',
  }

  // Determine the actual model being used
  const actualModel = model || defaultModels[provider]

  // Pre-define typed adapter configurations with full type inference
  // Model is passed to the adapter factory function for type-safe autocomplete
  const adapterConfig = {
    anthropic: () =>
      createChatOptions({
-       adapter: anthropicText(
-         (model || 'claude-sonnet-4-5-20250929') as any,
-       ),
+       adapter: anthropicText(actualModel as any),
      }),
    gemini: () =>
      createChatOptions({
-       adapter: geminiText((model || 'gemini-2.0-flash-exp') as any),
+       adapter: geminiText(actualModel as any),
      }),
    grok: () =>
      createChatOptions({
        adapter: grokText(actualModel as any),
      }),
    ollama: () =>
      createChatOptions({
-       adapter: ollamaText((model || 'mistral:7b') as any),
+       adapter: ollamaText(actualModel as any),
      }),
    openai: () =>
      createChatOptions({
-       adapter: openaiText((model || 'gpt-4o') as any),
+       adapter: openaiText(actualModel as any),
      }),
    openrouter: () =>
      createChatOptions({
        adapter: openRouterText(actualModel as any),
      }),
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
set -euo pipefail
# First, run the provided search for adapter exports
echo "=== Searching for adapter exports ==="
rg -nP "export (const|function) (anthropicText|geminiText|grokText|openaiText|openRouterText|ollamaText)" -g '*.ts'

Repository: TanStack/ai

Length of output: 859

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Search for files that might contain adapter implementations
echo "=== Finding adapter-related files ==="
fd -e ts -e tsx | grep -E "(adapter|anthropic|gemini|grok|openai|ollama|openrouter)" | head -20

Repository: TanStack/ai

Length of output: 1263

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Look for the specific file mentioned in the review
echo "=== Checking the reviewed file ==="
cat -n testing/panel/src/routes/api.structured.ts | sed -n '61,104p'

Repository: TanStack/ai

Length of output: 2033

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Search for model-meta files
echo "=== Searching for model metadata files ==="
fd -e ts "model-meta" || rg "model-meta" -g '*.ts' | head -5

Repository: TanStack/ai

Length of output: 4665

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check the anthropic adapter implementation
echo "=== Anthropic adapter (first 50 lines) ==="
head -50 packages/typescript/ai-anthropic/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 1381

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check what ANTHROPIC_MODELS constant is
echo "=== Finding ANTHROPIC_MODELS ==="
rg "ANTHROPIC_MODELS" -g '*.ts' -B 2 -A 2 | head -30

Repository: TanStack/ai

Length of output: 2117

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check the openai adapter for comparison
echo "=== Finding OPENAI_CHAT_MODELS ==="
rg "OPENAI_CHAT_MODELS" -g '*.ts' -B 2 -A 2 | head -30

Repository: TanStack/ai

Length of output: 2069

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Look at the anthropic model-meta file
echo "=== Anthropic model metadata ==="
cat packages/typescript/ai-anthropic/src/model-meta.ts | head -80

Repository: TanStack/ai

Length of output: 2003

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check the full anthropic text adapter export signature
echo "=== Anthropic text adapter export (around line 714) ==="
sed -n '710,730p' packages/typescript/ai-anthropic/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 590

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check how defaultModels are constrained
echo "=== Looking for type patterns in testing panel ==="
head -100 testing/panel/src/routes/api.structured.ts | tail -50

Repository: TanStack/ai

Length of output: 1986

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check if ANTHROPIC_MODELS, OPENAI_CHAT_MODELS are exported
echo "=== Checking what's exported from anthropic model-meta ==="
rg "^export" packages/typescript/ai-anthropic/src/model-meta.ts

Repository: TanStack/ai

Length of output: 390

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Look for any type narrowing patterns in the codebase
echo "=== Searching for provider-based type narrowing examples ==="
rg "provider.*as.*const\|match.*provider\|switch.*provider" -g '*.ts' -A 3 | head -40

Repository: TanStack/ai

Length of output: 115

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Look for patterns showing how to properly type models per provider
echo "=== Checking other adapter usage patterns ==="
rg "anthropicText|openaiText|geminiText" -g '*.ts' -B 2 -A 2 | grep -v "^--$" | head -60

Repository: TanStack/ai

Length of output: 4871

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check if there's a Provider type definition and how it's used
echo "=== Looking for Provider type definition ==="
rg "type Provider|enum Provider" -g '*.ts' -A 10 | head -40

Repository: TanStack/ai

Length of output: 2534

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# See if Parameters helper pattern is used elsewhere
echo "=== Checking if Parameters type extraction is used elsewhere ==="
rg "Parameters<typeof" -g '*.ts' | head -20

Repository: TanStack/ai

Length of output: 213

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check the full context of the api.structured.ts to see how adapterConfig is used
echo "=== Full api.structured.ts file ==="
wc -l testing/panel/src/routes/api.structured.ts

Repository: TanStack/ai

Length of output: 137

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check the full api.structured.ts file to see context around the problematic code
echo "=== Full context of api.structured.ts ==="
cat -n testing/panel/src/routes/api.structured.ts

Repository: TanStack/ai

Length of output: 7936

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check what ANTHROPIC_MODELS, OPENAI_CHAT_MODELS etc. contain
echo "=== ANTHROPIC_MODELS content ==="
sed -n '/^export const ANTHROPIC_MODELS/,/^]/p' packages/typescript/ai-anthropic/src/model-meta.ts | head -20

Repository: TanStack/ai

Length of output: 333

🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Check a similar file that might have better type handling
echo "=== api.chat.ts for comparison ==="
cat testing/panel/src/routes/api.chat.ts

Repository: TanStack/ai

Length of output: 9180
Remove the `as any` casts to preserve per-model type safety.
The casts bypass compile-time model validation and allow passing models intended for another provider. Extract per-provider model types using a discriminated union pattern: create a type that narrows `actualModel` based on the provider literal type. For example:
type ModelByProvider = {
anthropic: typeof ANTHROPIC_MODELS[number]
openai: typeof OPENAI_CHAT_MODELS[number]
gemini: typeof GEMINI_MODELS[number]
grok: typeof GROK_CHAT_MODELS[number]
ollama: string
openrouter: OpenRouterTextModels
}
const actualModel = (model || defaultModels[provider]) as ModelByProvider[typeof provider]

Then pass actualModel directly without casting:
adapter: anthropicText(actualModel) // type-safe when provider === 'anthropic'

This ensures the TypeScript compiler validates that each adapter receives only valid models for its provider.
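One way to realize that suggestion end to end, sketched with stand-in model lists (the real code would derive the unions from each adapter package's exported constants such as `ANTHROPIC_MODELS` and `OPENAI_CHAT_MODELS`; the `resolveModel` helper is hypothetical):

```typescript
// Stand-in model lists; the real ones come from the adapter packages.
const ANTHROPIC_MODELS = ['claude-sonnet-4-5', 'claude-opus-4'] as const
const OPENAI_MODELS = ['gpt-4o', 'gpt-4o-mini'] as const

type Provider = 'anthropic' | 'openai'
type ModelByProvider = {
  anthropic: (typeof ANTHROPIC_MODELS)[number]
  openai: (typeof OPENAI_MODELS)[number]
}

// The mapped type ties each default to its provider's model union.
const defaultModels: { [P in Provider]: ModelByProvider[P] } = {
  anthropic: 'claude-sonnet-4-5',
  openai: 'gpt-4o',
}

// Narrow the untyped request string to the provider's model union at
// runtime, so no `as any` is needed at the adapter call site.
function resolveModel<P extends Provider>(
  provider: P,
  requested: string | undefined,
  allowed: ReadonlyArray<ModelByProvider[P]>,
): ModelByProvider[P] {
  if (requested && (allowed as ReadonlyArray<string>).includes(requested)) {
    return requested as ModelByProvider[P] // safe: membership just checked
  }
  return defaultModels[provider]
}

console.log(resolveModel('anthropic', undefined, ANTHROPIC_MODELS)) // claude-sonnet-4-5
console.log(resolveModel('openai', 'gpt-5', OPENAI_MODELS)) // falls back to gpt-4o
```

The one remaining cast lives inside the guard, immediately after the membership check, rather than at every adapter call.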
🤖 Prompt for AI Agents
In `@testing/panel/src/routes/api.structured.ts` around lines 61 - 104, The code
currently uses unsafe "as any" casts when passing actualModel to adapter
factories (see defaultModels, actualModel, and adapter factory functions like
anthropicText, openaiText, geminiText, grokText, ollamaText, openRouterText and
createChatOptions); remove those casts and introduce a discriminated union
mapping type (e.g., ModelByProvider) that maps each Provider literal to its
allowed model union, then narrow actualModel to ModelByProvider[typeof provider]
(or use a typed switch on provider) so each branch passes a properly typed model
to anthropicText/openaiText/etc. without casting, preserving per-provider
compile-time validation.
const { toolCallId, toolName, input, approval } = chunk.data as {
  toolCallId: string
  toolName: string
  input: any
  approval: { id: string; needsApproval: boolean }
can we somehow type this properly instead of casting it?
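One possible answer, assuming the event names and payload fields shown in the diff (the union itself is a sketch, not the library's actual type, and `tool-approval-requested` is a hypothetical name for the approval event): declare the custom-event payloads as a discriminated union keyed on `name`, so the existing `chunk.name` check narrows `chunk.data` and no cast is needed.

```typescript
interface ToolInputAvailable {
  name: 'tool-input-available'
  data: {
    toolCallId: string
    toolName: string
    input: unknown
  }
}

interface ToolApprovalRequested {
  name: 'tool-approval-requested' // hypothetical event name
  data: {
    toolCallId: string
    toolName: string
    input: unknown
    approval: { id: string; needsApproval: boolean }
  }
}

type CustomChunk = ToolInputAvailable | ToolApprovalRequested

function describeChunk(chunk: CustomChunk): string {
  // Checking the discriminant narrows `chunk.data`; no `as` cast required.
  if (chunk.name === 'tool-input-available') {
    return `execute ${chunk.data.toolName} (${chunk.data.toolCallId})`
  }
  // Here chunk is ToolApprovalRequested, so `approval` is visible.
  return `approval ${chunk.data.approval.id} for ${chunk.data.toolName}`
}

console.log(
  describeChunk({
    name: 'tool-input-available',
    data: { toolCallId: 'tc_1', toolName: 'getWeather', input: {} },
  }),
)
```

The same pattern scales to the other custom events: one interface per `name`, unioned into the chunk type the stream emits.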
)
// Handle client tool input availability - trigger client-side execution
if (chunk.name === 'tool-input-available' && chunk.data) {
  const { toolCallId, toolName, input } = chunk.data as {
can we somehow type this properly instead of casting it?
AlemTuzlak
left a comment
Have some questions but all in all LGTM
β¦to feat/ag-ui-events
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@packages/typescript/ai-vue/src/use-chat.ts`:
- Around line 41-48: The watcher currently only reacts to reference changes of
options.body; update the watch invocation that observes () => options.body so it
deep-watches nested mutations and handles refs by using unref (i.e., pass {
deep: true } as the options and unref the body in the getter), then call
client.updateOptions({ body: newBody }) as before so nested property updates on
options.body trigger the watcher; locate the watch surrounding options.body and
client.updateOptions to apply this change.
// Sync body changes to the client
// This allows dynamic body values (like model selection) to be updated without recreating the client
watch(
  () => options.body,
  (newBody) => {
    client.updateOptions({ body: newBody })
  },
)
🧩 Analysis chain

🌐 Web query:

Vue 3 watch reactive object nested mutations deep watch behavior

💡 Result:
- `watch()` is shallow by default: it only re-runs when the watched source value is replaced (reference change). Nested mutations (e.g., `obj.nested.x++`) won't trigger unless you use a deep watcher. [1]

  watch(() => state.obj, () => {}, { deep: true })

- Deep watchers (`deep: true`) traverse nested properties so the callback fires on nested mutations. A key gotcha: on nested mutations, `newValue === oldValue` is typically true because the object wasn't replaced; both parameters point to the same reactive object. [1]

- Watching a `reactive()` object directly is implicitly deep (you don't need `{ deep: true }` in that case): [1]

  const obj = reactive({ nested: { count: 0 } })
  watch(obj, (newVal, oldVal) => {
    // fires on obj.nested.count++
    // newVal and oldVal are the same reference on nested mutations
  })
  obj.nested.count++

- Vue 3.5+ supports `deep` as a max traversal depth number (e.g., `deep: 1` for "shallow-ish" deep watch). [2][3]

- Arrays in Vue 3: watching an array triggers only when the array is replaced, not mutated (e.g., `push`/`splice`), unless you specify `deep`. Vue 3.5+ recommends `deep: 1` for array replacement + mutation behavior. [2]

Sources: Vue Watchers guide (deep watchers + implicit deep on reactive objects) [1], Vue 3 Migration Guide (array watch behavior + deep: 1 in 3.5+) [2], Vue reactivity API docs (deep option, including numeric depth in 3.5+) [3].
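The first point above is the crux of this review thread: a shallow watcher effectively compares references. A plain-TypeScript illustration of that comparison, with no Vue involved (the object shape mirrors the request body in the reviewed code but is illustrative):

```typescript
// A nested in-place mutation leaves the outer reference unchanged,
// which is why a shallow watch on `() => options.body` misses it.
const body: { data: Record<string, string> } = { data: { model: 'gpt-4o' } }
const held = body // what a shallow watcher effectively remembers

body.data['model'] = 'claude-sonnet-4-5' // nested mutation

console.log(held === body) // true: same reference, a shallow watch stays silent

// A structural comparison (what `deep: true` lets Vue perform via
// traversal) does observe the change.
const snapshotBefore = JSON.stringify({ data: { model: 'gpt-4o' } })
console.log(JSON.stringify(body) === snapshotBefore) // false: content differs
```

This is the failure mode `{ deep: true }` is meant to close: the reference check passes while the content has drifted.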
🏁 Script executed:

cat -n packages/typescript/ai-vue/src/use-chat.ts | head -60

Repository: TanStack/ai

Length of output: 2620

🏁 Script executed:

cat -n packages/typescript/ai-vue/src/types.ts

Repository: TanStack/ai

Length of output: 3423

🏁 Script executed:

cd packages/typescript/ai-client && grep -A 20 "export.*ChatRequestBody\|type.*ChatRequestBody" src/types.ts | head -40

Repository: TanStack/ai

Length of output: 651
Watch won't detect nested mutations on the body object.
watch(() => options.body, ...) only reacts when the body reference changes. Nested mutations (e.g., options.body.data['key'] = value) won't trigger the watcher. Add { deep: true } and unref() to detect nested property changes:
-import { onScopeDispose, readonly, shallowRef, useId, watch } from 'vue'
+import { onScopeDispose, readonly, shallowRef, useId, watch, unref } from 'vue'
@@
watch(
- () => options.body,
+ () => unref(options.body),
(newBody) => {
client.updateOptions({ body: newBody })
},
+ { deep: true },
)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Sync body changes to the client
// This allows dynamic body values (like model selection) to be updated without recreating the client
watch(
  () => unref(options.body),
  (newBody) => {
    client.updateOptions({ body: newBody })
  },
  { deep: true },
)
🎯 Changes

✅ Checklist

pnpm run test:pr.

Release Impact

Summary by CodeRabbit

✏️ Tip: You can customize this high-level summary in your review settings.