ci: Version Packages #533
Merged
AlemTuzlak merged 1 commit into main on May 8, 2026
AlemTuzlak approved these changes on May 8, 2026.
This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine; whenever you add more changesets to main, this PR will be updated.
Releases
@tanstack/ai@0.15.0
Minor Changes
OpenTelemetry middleware. (#500)

- `otelMiddleware({ tracer, meter?, captureContent?, redact?, ... })` emits GenAI-semantic-convention traces and metrics for every `chat()` call: a root span per `chat()`, a child span per agent-loop iteration (named `chat <model> #<iteration>`), and a grandchild span per tool call.
- `gen_ai.client.operation.duration` (seconds) is recorded once per `chat()` call; `gen_ai.client.token.usage` (tokens) is recorded per iteration (one input and one output record). Metric attributes are kept low-cardinality: `gen_ai.response.model` and `gen_ai.response.id` are intentionally excluded.
- `captureContent: true` attaches prompt/completion content as `gen_ai.{user,system,assistant,tool}.message` and `gen_ai.choice` span events. Redactor failures fail closed to a `"[redaction_failed]"` sentinel, so raw content never leaks; failures are logged via `console.warn` with a label so they remain diagnosable. Assistant text is capped at `maxContentLength` (default 100 000).
- `@opentelemetry/api` is an optional peer dependency. The middleware is exported from the dedicated subpath `@tanstack/ai/middlewares/otel` so that importing `@tanstack/ai/middlewares` does not eagerly require OTel.
See `docs/advanced/otel.md` for the full guide.

Fix thinking blocks getting merged across steps and lost on turn 2+ of Anthropic tool loops. (#391)

Each thinking step emitted by the adapter now produces its own `ThinkingPart` on the `UIMessage` instead of being merged into a single part, and thinking content plus Anthropic signatures are preserved in server-side message history so multi-turn tool flows with extended thinking work correctly.

This includes a public callback signature change: `StreamProcessorEvents.onThinkingUpdate` now receives `(messageId, stepId, content)` instead of `(messageId, content)`. `ChatClient` has been updated to handle the new `stepId` argument internally, but consumers implementing `StreamProcessorEvents` directly need to add the new parameter.

- `@tanstack/ai`:
  - `ThinkingPart` gains optional `stepId` and `signature` fields.
  - `ModelMessage` gains an optional `thinking?: Array<{ content; signature? }>` field so prior thinking can be replayed in subsequent turns.
  - `StepFinishedEvent` gains an optional `signature` field for provider-supplied thinking signatures.
  - `StreamProcessor` tracks thinking per step via `stepId` and keeps step ordering; `getState().thinking` / `getResult().thinking` concatenate step contents in order.
  - `TextEngine` accumulates thinking and signatures per iteration and includes them in assistant messages with tool calls so the next turn can replay them.
- `@tanstack/ai-anthropic`:
  - Handles `signature_delta` stream events and emits the final `STEP_FINISHED` with the signature on `content_block_stop`.
  - Replays thinking blocks in `formatMessages` for multi-turn history.
  - Adds `betas: ['interleaved-thinking-2025-05-14']` to the `beta.messages.create` call site when a thinking budget is configured. The beta flag is scoped to the streaming path only, so `structuredOutput` (which uses the non-beta `messages.create` endpoint) is unaffected.
- `@tanstack/ai-client`: `ChatClient`'s internal `onThinkingUpdate` wiring is updated for the new `stepId` parameter.
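The per-step tracking and in-order concatenation can be sketched like this. It is a simplified model of the described behavior, not the real `StreamProcessor` internals; `ThinkingTracker` is a hypothetical name.

```typescript
// Hypothetical sketch of per-step thinking accumulation, mirroring the new
// (messageId, stepId, content) callback shape.
type ThinkingStep = { stepId: string; content: string };

class ThinkingTracker {
  private steps: ThinkingStep[] = [];

  // New signature: thinking is tracked per stepId instead of per message.
  onThinkingUpdate(_messageId: string, stepId: string, content: string): void {
    const existing = this.steps.find((s) => s.stepId === stepId);
    if (existing) {
      existing.content = content; // update this step in place
    } else {
      this.steps.push({ stepId, content }); // first event: preserves step order
    }
  }

  // Like getState().thinking: concatenate step contents in order.
  get thinking(): string {
    return this.steps.map((s) => s.content).join("");
  }
}

const tracker = new ThinkingTracker();
tracker.onThinkingUpdate("msg-1", "step-1", "Plan the tool call. ");
tracker.onThinkingUpdate("msg-1", "step-2", "Interpret the result.");
tracker.onThinkingUpdate("msg-1", "step-1", "Plan the tool call carefully. ");
console.log(tracker.thinking);
// "Plan the tool call carefully. Interpret the result."
```

Consumers implementing `StreamProcessorEvents` directly need the same shape: accept the new middle `stepId` parameter, even if they only concatenate.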
Patch Changes

Fix a `tool_use.name: String should have at least 1 character` 400 error from Anthropic when sending a follow-up message after approving a tool that needs approval (issue #532: Tool name is undefined in messages after executing a tool that needs approval). (#536)

The agent loop's continuation re-emit of `TOOL_CALL_START` after a server-side post-approval execution now includes the AG-UI spec field `toolCallName` alongside the deprecated `toolName` alias, so the client's `StreamProcessor` records a tool-call part with a defined `name` instead of `undefined`. As a defensive measure, `StreamProcessor` also accepts the deprecated `toolName` field as a fallback when `toolCallName` is missing.

The post-approval execution also now replaces the `pendingExecution: true` placeholder tool message in the agent loop's message history with the real tool result, instead of appending a duplicate. This prevents the Anthropic adapter's `tool_result` de-dup (which keeps the first match) from discarding the real result, so the model sees the actual tool output during the post-approval streaming response.

Updated dependencies []:
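The replace-instead-of-append behavior can be sketched as follows, with an assumed message shape; `recordToolResult` is a hypothetical helper, not the agent loop's actual code.

```typescript
// Hypothetical sketch: replace the pendingExecution placeholder with the real
// tool result so a keep-first de-dup retains the real output, not the placeholder.
type ToolMessage = {
  toolCallId: string;
  content: string;
  pendingExecution?: boolean;
};

function recordToolResult(
  history: ToolMessage[],
  result: ToolMessage,
): ToolMessage[] {
  const idx = history.findIndex(
    (m) => m.pendingExecution === true && m.toolCallId === result.toolCallId,
  );
  if (idx >= 0) {
    const next = history.slice();
    next[idx] = result; // replace the placeholder in place
    return next;
  }
  return [...history, result]; // no placeholder: append as usual
}

const history: ToolMessage[] = [
  { toolCallId: "call_1", content: "", pendingExecution: true },
];
const updated = recordToolResult(history, {
  toolCallId: "call_1",
  content: '{"ok":true}',
});
console.log(updated.length); // 1
console.log(updated[0].content); // '{"ok":true}'
```

With an append instead of a replace, a first-match de-dup would keep the empty placeholder and discard the real result, which is exactly the bug being fixed.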
@tanstack/ai-client@0.9.0
Minor Changes
Fix thinking blocks getting merged across steps and lost on turn 2+ of Anthropic tool loops. (#391) This is the same change described in full under `@tanstack/ai@0.15.0` above. For `@tanstack/ai-client` specifically: `ChatClient`'s internal `onThinkingUpdate` wiring is updated for the new `(messageId, stepId, content)` callback signature, and consumers implementing `StreamProcessorEvents` directly must add the new `stepId` parameter.

Patch Changes
Fixes a race condition in `ChatClient.streamResponse()` where `this.abortController.signal` could reference a stale or null controller by the time it is passed to `this.connection.connect()`. (#377)
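One common fix for this kind of race is to capture the controller in a local variable before any `await`, so later reassignment of the instance field cannot affect the in-flight call. A minimal sketch under that assumption (`SketchClient` is hypothetical, not `ChatClient`'s actual code):

```typescript
// Hypothetical sketch of avoiding the stale-controller race: capture the
// controller locally instead of re-reading this.abortController after awaits.
class SketchClient {
  private abortController: AbortController | null = null;

  async streamResponse(
    connect: (signal: AbortSignal) => Promise<string>,
  ): Promise<string> {
    const controller = new AbortController(); // captured once, locally
    this.abortController = controller;
    // Even if another call reassigns (or abort() nulls) this.abortController
    // while we await, this invocation still uses the controller it created.
    return connect(controller.signal);
  }

  abort(): void {
    this.abortController?.abort();
    this.abortController = null; // the field may go null; local capture is safe
  }
}

const client = new SketchClient();
client
  .streamResponse(async (signal) => (signal.aborted ? "aborted" : "ok"))
  .then((result) => console.log(result)); // "ok"
```

Reading `this.abortController.signal` at call time instead would observe whatever the field holds at that moment, which is the stale/null hazard the changelog describes.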
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-isolate-cloudflare@0.2.0
Minor Changes
Port the Cloudflare worker driver from `unsafe_eval` to `worker_loader` (Dynamic Workers). (#523)

Cloudflare gates the `unsafe_eval` binding for all customer production accounts; the previous driver was unusable in production and broken in `wrangler dev` on current Wrangler 4.x. The supported replacement is the `worker_loader` binding (GA-beta'd 2026-03-24).

Breaking: the worker now requires the `LOADER` binding instead of `UNSAFE_EVAL`. Update your `wrangler.toml` accordingly.

The HTTP tool-callback protocol and public driver API are unchanged. A Workers Paid plan is required for any edge usage (deploy or `wrangler dev --remote`); local `wrangler dev` works on the Free plan.

Closes #522 (ai-isolate-cloudflare: unsafe_eval is gated by Cloudflare; port to worker_loader).
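The changeset text does not include the config snippet here. As a hedged sketch only, the new binding in `wrangler.toml` would look something like the following; verify the exact table name against Cloudflare's Worker Loaders documentation before relying on it.

```toml
# Hedged sketch: replace the old unsafe_eval binding with a worker_loader
# binding named LOADER (the name the driver expects per the entry above).
# Confirm the configuration key against Cloudflare's current docs.
[[worker_loaders]]
binding = "LOADER"
```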
Patch Changes
fix(ai-isolate-cloudflare): accumulate `toolResults` across rounds in the driver round-trip (#524)

The Cloudflare isolate driver was wiping `toolResults` between rounds. `wrap-code` uses sequential `tc_<idx>` ids that are re-derived every round when the Worker re-executes user code, so prior-round results must remain in the cache. With the wipe, multi-tool programs (e.g. `await A(); await B();`) would ping-pong between `{tc_0}` and `{tc_1}` and exhaust `maxToolRounds`, surfacing as `MaxRoundsExceeded`.

Single-tool code worked because only one cache entry was ever needed in a given round. Existing tests covered single-round flows only and did not exercise real `wrap-code` ids end-to-end, so the regression slipped through. Added a `tc_<idx>`-shaped regression test that fails on the prior implementation and passes with the merge.

Updated dependencies []:
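The round-trip above can be sketched with a simplified model (hypothetical names, not the driver's actual code). Because `tc_<idx>` ids are re-derived every round, the cache must persist across rounds; a wiped cache would re-request `tc_0` forever.

```typescript
// Hypothetical sketch of the driver round-trip: a cache that persists across
// rounds resolves tc_0, tc_1, ... one per round. Wiping it each round would
// loop on the first id and exhaust maxToolRounds (MaxRoundsExceeded).
function runRounds(
  toolOutputs: string[],
  maxToolRounds: number,
): Map<string, string> {
  const cache = new Map<string, string>(); // persists across rounds (the fix)

  for (let round = 0; round < maxToolRounds; round++) {
    // Re-executing user code re-derives tc_0, tc_1, ... in order and suspends
    // at the first id missing from the cache.
    let missing: number | null = null;
    for (let i = 0; i < toolOutputs.length; i++) {
      if (!cache.has(`tc_${i}`)) {
        missing = i;
        break;
      }
    }
    if (missing === null) return cache; // all tool calls resolved
    cache.set(`tc_${missing}`, toolOutputs[missing]); // run one tool this round
  }
  throw new Error("MaxRoundsExceeded");
}

const results = runRounds(["resultA", "resultB"], 4);
console.log(results.get("tc_0"), results.get("tc_1")); // "resultA" "resultB"
```

Moving `const cache = ...` inside the round loop reproduces the regression: the two-tool case above would then throw `MaxRoundsExceeded` instead of completing.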
@tanstack/ai-anthropic@0.8.4
Patch Changes
Fix thinking blocks getting merged across steps and lost on turn 2+ of Anthropic tool loops. (#391) This is the same change described in full under `@tanstack/ai@0.15.0` above. For `@tanstack/ai-anthropic` specifically: the adapter handles `signature_delta` stream events, emits the final `STEP_FINISHED` with the signature on `content_block_stop`, replays thinking blocks in `formatMessages` for multi-turn history, and adds `betas: ['interleaved-thinking-2025-05-14']` to the `beta.messages.create` call site when a thinking budget is configured (streaming path only, so `structuredOutput`, which uses the non-beta `messages.create` endpoint, is unaffected).

Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-code-mode@0.1.9
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-code-mode-skills@0.1.9
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-devtools-core@0.3.26
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-elevenlabs@0.2.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-event-client@0.2.9
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-fal@0.7.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-gemini@0.10.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-grok@0.7.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-groq@0.1.9
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-isolate-node@0.1.9
Patch Changes
@tanstack/ai-isolate-quickjs@0.1.9
Patch Changes
@tanstack/ai-ollama@0.6.11
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-openai@0.8.3
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-openrouter@0.8.3
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

@tanstack/ai-preact@0.6.21
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-react@0.8.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-react-ui@0.6.3
Patch Changes
Updated dependencies [b2d3cc1, 13cceae]:

@tanstack/ai-solid@0.7.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-solid-ui@0.6.3
Patch Changes
Updated dependencies [b2d3cc1, 13cceae]:

@tanstack/ai-svelte@0.7.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-vue@0.7.1
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

@tanstack/ai-vue-ui@0.1.32
Patch Changes
@tanstack/preact-ai-devtools@0.1.30
Patch Changes
@tanstack/react-ai-devtools@0.2.30
Patch Changes
@tanstack/solid-ai-devtools@0.2.30
Patch Changes
ts-svelte-chat@0.1.39
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

ts-vue-chat@0.1.39
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:

vanilla-chat@0.0.36
Patch Changes
Updated dependencies [b2d3cc1, 13cceae]:

@tanstack/ai-code-mode-models-eval@0.0.13
Patch Changes
Updated dependencies [a4e2c55, 82078bd, b2d3cc1]: