
ci: Version Packages#533

Merged
AlemTuzlak merged 1 commit into main from changeset-release/main
May 8, 2026

Conversation


@github-actions github-actions Bot commented May 6, 2026

This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine; whenever you add more changesets to main, this PR will be updated.

Releases

@tanstack/ai@0.15.0

Minor Changes

  • OpenTelemetry middleware. otelMiddleware({ tracer, meter?, captureContent?, redact?, ... }) emits GenAI-semantic-convention traces and metrics for every chat() call. (#500)

    • Root span per chat() + child span per agent-loop iteration (named chat <model> #<iteration>) + grandchild span per tool call.
    • gen_ai.client.operation.duration (seconds) recorded once per chat() call; gen_ai.client.token.usage (tokens) recorded per iteration (one input + one output record). Metric attributes are kept low-cardinality — gen_ai.response.model and gen_ai.response.id are intentionally excluded.
    • captureContent: true attaches prompt/completion content as gen_ai.{user,system,assistant,tool}.message and gen_ai.choice span events. Redactor failures fail closed to a "[redaction_failed]" sentinel — raw content never leaks. Assistant text is capped at maxContentLength (default 100 000).
    • Four extension points for custom attributes, names, span-options, and end-of-span callbacks. Thrown callbacks are caught and logged to console.warn with a label so failures remain diagnosable.
    • @opentelemetry/api is an optional peer dependency. The middleware is exported from the dedicated subpath @tanstack/ai/middlewares/otel so that importing @tanstack/ai/middlewares does not eagerly require OTel.

    See docs/advanced/otel.md for the full guide.
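
    For illustration, the fail-closed redaction and assistant-text cap described above can be sketched in isolation as follows. This is a standalone sketch, not the middleware's source; `safeRedact` and `capContent` are hypothetical helper names.

    ```typescript
    // Fail-closed redaction: if the user-supplied redactor throws, emit a
    // sentinel instead of the raw content, so prompt text never leaks.
    const REDACTION_FAILED = "[redaction_failed]";

    type Redactor = (content: string) => string;

    function safeRedact(content: string, redact?: Redactor): string {
      if (!redact) return content;
      try {
        return redact(content);
      } catch {
        return REDACTION_FAILED;
      }
    }

    // Cap assistant text before attaching it as a span event attribute.
    function capContent(content: string, maxContentLength = 100_000): string {
      return content.length > maxContentLength
        ? content.slice(0, maxContentLength)
        : content;
    }
    ```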

  • Fix thinking blocks getting merged across steps and lost on turn 2+ of Anthropic tool loops. (#391)

    Each thinking step emitted by the adapter now produces its own ThinkingPart on the UIMessage instead of being merged into a single part, and thinking content + Anthropic signatures are preserved in server-side message history so multi-turn tool flows with extended thinking work correctly.

    This includes a public callback signature change: StreamProcessorEvents.onThinkingUpdate now receives (messageId, stepId, content) instead of (messageId, content). ChatClient has been updated to handle the new stepId argument internally, but consumers implementing StreamProcessorEvents directly need to add the new parameter.

    @tanstack/ai:

    • ThinkingPart gains optional stepId and signature fields.
    • ModelMessage gains an optional thinking?: Array<{ content; signature? }> field so prior thinking can be replayed in subsequent turns.
    • StepFinishedEvent gains an optional signature field for provider-supplied thinking signatures.
    • StreamProcessor tracks thinking per-step via stepId and keeps step ordering. getState().thinking / getResult().thinking concatenate step contents in order.
    • The onThinkingUpdate callback on StreamProcessorEvents now receives (messageId, stepId, content) — consumers implementing it directly must add the stepId parameter.
    • TextEngine accumulates thinking + signatures per iteration and includes them in assistant messages with tool calls so the next turn can replay them.

    @tanstack/ai-anthropic:

    • Captures signature_delta stream events and emits the final STEP_FINISHED with the signature on content_block_stop.
    • Includes thinking blocks with signatures in formatMessages for multi-turn history.
    • Passes betas: ['interleaved-thinking-2025-05-14'] to the beta.messages.create call site when a thinking budget is configured. The beta flag is scoped to the streaming path only, so structuredOutput (which uses the non-beta messages.create endpoint) is unaffected.

    @tanstack/ai-client:

    • ChatClient's internal onThinkingUpdate wiring is updated for the new stepId parameter.
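
    A minimal sketch of the per-step accumulation this implies (standalone and hypothetical, not the StreamProcessor source): content is keyed by stepId, step order is preserved, and the concatenation mirrors getState().thinking.

    ```typescript
    // Accumulates thinking content per stepId instead of merging everything
    // into one blob; Map preserves the order in which steps first appear.
    class ThinkingTracker {
      private steps = new Map<string, string>();

      // Mirrors the new (stepId, content) portion of the callback shape:
      // each step appends to its own entry.
      onThinkingUpdate(stepId: string, content: string): void {
        this.steps.set(stepId, (this.steps.get(stepId) ?? "") + content);
      }

      // Concatenates step contents in step order.
      get thinking(): string {
        return [...this.steps.values()].join("");
      }
    }
    ```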

Patch Changes

  • Fix the `tool_use.name: String should have at least 1 character` 400 error from Anthropic when sending a follow-up message after approving a tool that needs approval (issue #532: "Tool name is undefined in messages after executing a tool that needs approval"). (#536)

    The agent loop's continuation re-emit of TOOL_CALL_START after a server-side post-approval execution now includes the AG-UI spec field toolCallName alongside the deprecated toolName alias, so the client's StreamProcessor records a tool-call part with a defined name instead of undefined. As a defensive measure, StreamProcessor also accepts the deprecated toolName field as a fallback when toolCallName is missing.

    The post-approval execution also now replaces the pendingExecution: true placeholder tool message in the agent loop's message history with the real tool result, instead of appending a duplicate. This prevents the Anthropic adapter's tool_result de-dup (which keeps the first match) from discarding the real result, so the model sees the actual tool output during the post-approval streaming response.
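
    The two behaviors described above can be sketched like this (hypothetical shapes for illustration only, not the library's actual event or message types):

    ```typescript
    interface ToolCallStartEvent {
      toolCallName?: string; // AG-UI spec field
      toolName?: string;     // deprecated alias
    }

    // Defensive fallback: prefer the spec field, fall back to the alias.
    function resolveToolName(e: ToolCallStartEvent): string | undefined {
      return e.toolCallName ?? e.toolName;
    }

    interface ToolMessage {
      toolCallId: string;
      pendingExecution?: boolean;
      result?: unknown;
    }

    // Replace the pendingExecution placeholder with the real result instead
    // of appending a duplicate, so a first-match de-dup keeps the real result.
    function recordToolResult(
      history: ToolMessage[],
      toolCallId: string,
      result: unknown,
    ): void {
      const placeholder = history.find(
        (m) => m.toolCallId === toolCallId && m.pendingExecution,
      );
      if (placeholder) {
        placeholder.pendingExecution = false;
        placeholder.result = result;
      } else {
        history.push({ toolCallId, result });
      }
    }
    ```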

  • Updated dependencies []:

    • @tanstack/ai-event-client@0.2.9

@tanstack/ai-client@0.9.0

Minor Changes

  • Fix thinking blocks getting merged across steps and lost on turn 2+ of Anthropic tool loops. (#391)

    Each thinking step emitted by the adapter now produces its own ThinkingPart on the UIMessage instead of being merged into a single part, and thinking content + Anthropic signatures are preserved in server-side message history so multi-turn tool flows with extended thinking work correctly.

    This includes a public callback signature change: StreamProcessorEvents.onThinkingUpdate now receives (messageId, stepId, content) instead of (messageId, content). ChatClient has been updated to handle the new stepId argument internally, but consumers implementing StreamProcessorEvents directly need to add the new parameter.

    @tanstack/ai:

    • ThinkingPart gains optional stepId and signature fields.
    • ModelMessage gains an optional thinking?: Array<{ content; signature? }> field so prior thinking can be replayed in subsequent turns.
    • StepFinishedEvent gains an optional signature field for provider-supplied thinking signatures.
    • StreamProcessor tracks thinking per-step via stepId and keeps step ordering. getState().thinking / getResult().thinking concatenate step contents in order.
    • The onThinkingUpdate callback on StreamProcessorEvents now receives (messageId, stepId, content) — consumers implementing it directly must add the stepId parameter.
    • TextEngine accumulates thinking + signatures per iteration and includes them in assistant messages with tool calls so the next turn can replay them.

    @tanstack/ai-anthropic:

    • Captures signature_delta stream events and emits the final STEP_FINISHED with the signature on content_block_stop.
    • Includes thinking blocks with signatures in formatMessages for multi-turn history.
    • Passes betas: ['interleaved-thinking-2025-05-14'] to the beta.messages.create call site when a thinking budget is configured. The beta flag is scoped to the streaming path only, so structuredOutput (which uses the non-beta messages.create endpoint) is unaffected.

    @tanstack/ai-client:

    • ChatClient's internal onThinkingUpdate wiring is updated for the new stepId parameter.

Patch Changes

  • Fixes a race condition in ChatClient.streamResponse() where this.abortController.signal could reference a stale or null controller by the time it is passed to this.connection.connect() (#377)
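
    The shape of this kind of fix can be sketched as follows (a hypothetical standalone illustration, not ChatClient's actual source): capture the controller in a local binding before any await, so a concurrent call that replaces or nulls the field cannot invalidate the signal already in use.

    ```typescript
    class Client {
      private abortController: AbortController | null = null;

      async streamResponse(
        connect: (signal: AbortSignal) => Promise<void>,
      ): Promise<void> {
        const controller = new AbortController();
        this.abortController = controller;
        // Even if this.abortController is replaced or nulled by a concurrent
        // call before connect() runs, `controller.signal` stays valid.
        await connect(controller.signal);
      }

      stop(): void {
        this.abortController?.abort();
        this.abortController = null;
      }
    }
    ```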

  • Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

    • @tanstack/ai@0.15.0
    • @tanstack/ai-event-client@0.2.9

@tanstack/ai-isolate-cloudflare@0.2.0

Minor Changes

  • Port the Cloudflare worker driver from unsafe_eval to worker_loader (Dynamic Workers). (#523)

    Cloudflare gates the unsafe_eval binding for all customer production accounts; the previous driver was unusable in production and broken in wrangler dev on current Wrangler 4.x. The supported replacement is the worker_loader binding (in beta since 2026-03-24).

    Breaking: the worker now requires the LOADER binding instead of UNSAFE_EVAL. Update your wrangler.toml:

    # before
    [[unsafe.bindings]]
    name = "UNSAFE_EVAL"
    type = "unsafe_eval"
    
    # after
    [[worker_loaders]]
    binding = "LOADER"

    The HTTP tool-callback protocol and public driver API are unchanged. Workers Paid plan is required for any edge usage (deploy or wrangler dev --remote); local wrangler dev works on the Free plan.

    Closes #522: ai-isolate-cloudflare: unsafe_eval is gated by Cloudflare; port to worker_loader.

Patch Changes

  • fix(ai-isolate-cloudflare): accumulate toolResults across rounds in the driver round-trip (#524)

    The Cloudflare isolate driver was wiping toolResults between rounds. wrap-code uses sequential tc_<idx> ids that are re-derived every round when the Worker re-executes user code, so prior-round results must remain in the cache. With the wipe, multi-tool programs (e.g. await A(); await B();) would ping-pong between {tc_0} and {tc_1} and exhaust maxToolRounds, surfacing as MaxRoundsExceeded.

    Single-tool code worked because only one cache entry was ever needed in a given round. Existing tests covered single-round flows only and did not exercise real wrap-code ids end-to-end, so the regression slipped through.

    Added a tc_<idx>-shaped regression test that fails on the prior implementation and passes with the fix.
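
    The round-trip convergence can be sketched as follows (a simplified hypothetical model of the loop, not the driver's source): with a cache that persists across rounds, each round resolves one more tc_<idx> id and the program converges; resetting the cache each round would keep the same ids missing forever and hit maxToolRounds.

    ```typescript
    // Simulates the driver loop: each round the user code re-executes and asks
    // for the same sequential tc_<idx> ids. A call whose result is cached
    // returns immediately; otherwise the driver resolves it and loops again.
    function runProgram(
      calls: string[],
      cache: Record<string, unknown>,
      maxToolRounds = 10,
    ): number {
      for (let round = 1; round <= maxToolRounds; round++) {
        // The buggy driver effectively did `cache = {}` here each round,
        // losing prior rounds' results and never converging for 2+ tools.
        const missing = calls.find((id) => !(id in cache));
        if (missing === undefined) return round; // all tools resolved
        cache[missing] = `result of ${missing}`; // resolve one call per round
      }
      throw new Error("MaxRoundsExceeded");
    }
    ```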

  • Updated dependencies []:

    • @tanstack/ai-code-mode@0.1.9

@tanstack/ai-anthropic@0.8.4

Patch Changes

  • Fix thinking blocks getting merged across steps and lost on turn 2+ of Anthropic tool loops. (#391)

    Each thinking step emitted by the adapter now produces its own ThinkingPart on the UIMessage instead of being merged into a single part, and thinking content + Anthropic signatures are preserved in server-side message history so multi-turn tool flows with extended thinking work correctly.

    This includes a public callback signature change: StreamProcessorEvents.onThinkingUpdate now receives (messageId, stepId, content) instead of (messageId, content). ChatClient has been updated to handle the new stepId argument internally, but consumers implementing StreamProcessorEvents directly need to add the new parameter.

    @tanstack/ai:

    • ThinkingPart gains optional stepId and signature fields.
    • ModelMessage gains an optional thinking?: Array<{ content; signature? }> field so prior thinking can be replayed in subsequent turns.
    • StepFinishedEvent gains an optional signature field for provider-supplied thinking signatures.
    • StreamProcessor tracks thinking per-step via stepId and keeps step ordering. getState().thinking / getResult().thinking concatenate step contents in order.
    • The onThinkingUpdate callback on StreamProcessorEvents now receives (messageId, stepId, content) — consumers implementing it directly must add the stepId parameter.
    • TextEngine accumulates thinking + signatures per iteration and includes them in assistant messages with tool calls so the next turn can replay them.

    @tanstack/ai-anthropic:

    • Captures signature_delta stream events and emits the final STEP_FINISHED with the signature on content_block_stop.
    • Includes thinking blocks with signatures in formatMessages for multi-turn history.
    • Passes betas: ['interleaved-thinking-2025-05-14'] to the beta.messages.create call site when a thinking budget is configured. The beta flag is scoped to the streaming path only, so structuredOutput (which uses the non-beta messages.create endpoint) is unaffected.

    @tanstack/ai-client:

    • ChatClient's internal onThinkingUpdate wiring is updated for the new stepId parameter.
  • Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:

    • @tanstack/ai@0.15.0

@tanstack/ai-code-mode@0.1.9

Patch Changes

@tanstack/ai-code-mode-skills@0.1.9

Patch Changes

@tanstack/ai-devtools-core@0.3.26

Patch Changes

@tanstack/ai-elevenlabs@0.2.1

Patch Changes

@tanstack/ai-event-client@0.2.9

Patch Changes

@tanstack/ai-fal@0.7.1

Patch Changes

@tanstack/ai-gemini@0.10.1

Patch Changes

@tanstack/ai-grok@0.7.1

Patch Changes

@tanstack/ai-groq@0.1.9

Patch Changes

@tanstack/ai-isolate-node@0.1.9

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-code-mode@0.1.9

@tanstack/ai-isolate-quickjs@0.1.9

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-code-mode@0.1.9

@tanstack/ai-ollama@0.6.11

Patch Changes

@tanstack/ai-openai@0.8.3

Patch Changes

@tanstack/ai-openrouter@0.8.3

Patch Changes

@tanstack/ai-preact@0.6.21

Patch Changes

@tanstack/ai-react@0.8.1

Patch Changes

@tanstack/ai-react-ui@0.6.3

Patch Changes

  • Updated dependencies [b2d3cc1, 13cceae]:
    • @tanstack/ai-client@0.9.0
    • @tanstack/ai-react@0.8.1

@tanstack/ai-solid@0.7.1

Patch Changes

@tanstack/ai-solid-ui@0.6.3

Patch Changes

  • Updated dependencies [b2d3cc1, 13cceae]:
    • @tanstack/ai-client@0.9.0
    • @tanstack/ai-solid@0.7.1

@tanstack/ai-svelte@0.7.1

Patch Changes

@tanstack/ai-vue@0.7.1

Patch Changes

@tanstack/ai-vue-ui@0.1.32

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-vue@0.7.1

@tanstack/preact-ai-devtools@0.1.30

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.26

@tanstack/react-ai-devtools@0.2.30

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.26

@tanstack/solid-ai-devtools@0.2.30

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.26

ts-svelte-chat@0.1.39

Patch Changes

  • Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:
    • @tanstack/ai@0.15.0
    • @tanstack/ai-anthropic@0.8.4
    • @tanstack/ai-client@0.9.0
    • @tanstack/ai-gemini@0.10.1
    • @tanstack/ai-ollama@0.6.11
    • @tanstack/ai-openai@0.8.3
    • @tanstack/ai-svelte@0.7.1

ts-vue-chat@0.1.39

Patch Changes

  • Updated dependencies [a4e2c55, 82078bd, b2d3cc1, 13cceae]:
    • @tanstack/ai@0.15.0
    • @tanstack/ai-anthropic@0.8.4
    • @tanstack/ai-client@0.9.0
    • @tanstack/ai-gemini@0.10.1
    • @tanstack/ai-ollama@0.6.11
    • @tanstack/ai-openai@0.8.3
    • @tanstack/ai-vue@0.7.1
    • @tanstack/ai-vue-ui@0.1.32

vanilla-chat@0.0.36

Patch Changes

@tanstack/ai-code-mode-models-eval@0.0.13

Patch Changes

  • Updated dependencies [a4e2c55, 82078bd, b2d3cc1]:
    • @tanstack/ai@0.15.0
    • @tanstack/ai-anthropic@0.8.4
    • @tanstack/ai-code-mode@0.1.9
    • @tanstack/ai-gemini@0.10.1
    • @tanstack/ai-grok@0.7.1
    • @tanstack/ai-groq@0.1.9
    • @tanstack/ai-ollama@0.6.11
    • @tanstack/ai-openai@0.8.3
    • @tanstack/ai-isolate-node@0.1.9

@github-actions github-actions Bot force-pushed the changeset-release/main branch 4 times, most recently from 00c9568 to 14d5bc5 on May 8, 2026 04:41
@github-actions github-actions Bot force-pushed the changeset-release/main branch from 14d5bc5 to 86643e3 on May 8, 2026 11:16
@AlemTuzlak AlemTuzlak merged commit 7205937 into main May 8, 2026
@AlemTuzlak AlemTuzlak deleted the changeset-release/main branch May 8, 2026 12:45


Development

Successfully merging this pull request may close these issues.

ai-isolate-cloudflare: unsafe_eval is gated by Cloudflare; port to worker_loader
