fix: keep responses reasoning adjacent to messages #12018

Open

yzlu0917 wants to merge 1 commit into continuedev:main from yzlu0917:codex/issue-11994-responses-order

Conversation

@yzlu0917 yzlu0917 commented Apr 3, 2026

Summary

  • emit assistant message items before function_call items when a Responses API assistant turn contains both text and tool calls
  • preserve the existing function-call behavior for tool-only turns while keeping the message and reasoning items adjacent
  • add a regression test covering a reasoning turn that emits both commentary text and a tool call in the same assistant response

Why

OpenAI Responses reasoning models require each reasoning item to stay adjacent to its assistant output item. Continue currently emits function_call items before the assistant message item when a turn contains both text and tool calls, which can separate the reasoning item from the message it belongs to and trigger:

Item 'msg_...' was provided without its required 'reasoning' item: 'rs_...'.

Reordering these items keeps the reasoning/message pair intact without changing the tool-call payloads themselves.
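The reordering described above can be sketched as follows. This is an illustrative model, not Continue's actual toResponsesInput implementation: the type and function names here (ResponsesItem, AssistantTurn, turnToResponsesItems) are hypothetical stand-ins for the real converter types.

```typescript
// Hypothetical sketch of the ordering fix. When an assistant turn has
// both text and tool calls, the message item is emitted first so that a
// preceding reasoning item stays adjacent to its assistant output item.

type ResponsesItem =
  | { type: "message"; role: "assistant"; content: string }
  | { type: "function_call"; name: string; arguments: string };

interface AssistantTurn {
  content?: string;
  toolCalls?: { name: string; arguments: string }[];
}

function turnToResponsesItems(turn: AssistantTurn): ResponsesItem[] {
  const items: ResponsesItem[] = [];
  // Message first: the Responses API expects the reasoning item to be
  // immediately followed by the assistant message it belongs to.
  if (turn.content) {
    items.push({ type: "message", role: "assistant", content: turn.content });
  }
  // Tool calls afterwards; a tool-only turn (no content) is unchanged.
  for (const call of turn.toolCalls ?? []) {
    items.push({
      type: "function_call",
      name: call.name,
      arguments: call.arguments,
    });
  }
  return items;
}
```

Note that only the ordering changes; the function_call payloads themselves are passed through untouched, which is why tool-only turns behave exactly as before.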

Validation

  • ran npm test -- --runInBand openaiTypeConverters.test.ts in core

Closes #11994


Summary by cubic

Reorders Responses output so assistant messages are emitted before function_call items when a turn has both text and tool calls. This keeps reasoning next to its message and fixes the “Item 'msg_...' was provided without its required 'reasoning' item” error (addresses #11994).

  • Bug Fixes
    • In toResponsesInput, emit the assistant message first and then tool calls; keeps reasoning/message adjacent and leaves tool-only turns unchanged.
    • Added a regression test for a mixed reasoning + tool-call turn to lock the new ordering.

Written for commit feff4df. Summary will update on new commits.

@yzlu0917 yzlu0917 requested a review from a team as a code owner April 3, 2026 06:50
@yzlu0917 yzlu0917 requested review from sestinj and removed request for a team April 3, 2026 06:50
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Apr 3, 2026
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 2 files

@chatgpt-codex-connector

💡 Codex Review

if (options.tools?.length && ollamaMessages.at(-1)?.role === "user") {

P1: Skip tools when Ollama template lacks .Tools support

This change removes the guard that previously suppressed chatOptions.tools when /api/show indicated the model template does not support tools. As a result, any tool-enabled request now sends tool definitions to models that explicitly advertise no tool support, which can cause Ollama chat calls to fail for those models instead of degrading gracefully to normal text responses. This is user-impacting whenever a non-tool-capable Ollama model is configured with Continue features that attach tools.
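The guard the review describes could look roughly like the following. This is a hedged sketch under assumptions, not Continue's actual Ollama code: the names (templateSupportsTools, buildChatOptions, ShowResponse) are illustrative, and the only grounded detail is that Ollama's /api/show returns the model's template, which references tool definitions via the .Tools variable.

```typescript
// Hypothetical sketch: only attach tool definitions when the model's
// template (as reported by Ollama's /api/show) can actually render them.

interface ShowResponse {
  template?: string;
}

function templateSupportsTools(show: ShowResponse): boolean {
  // Ollama chat templates inject the tool list through the .Tools
  // variable; a template that never mentions it cannot present tools.
  return show.template?.includes(".Tools") ?? false;
}

function buildChatOptions(
  tools: object[],
  show: ShowResponse,
): { tools?: object[] } {
  // Degrade gracefully: omit tools for models whose template cannot
  // render them, rather than sending definitions the call may reject.
  return templateSupportsTools(show) ? { tools } : {};
}
```

Restoring a check like this would let tool-enabled requests fall back to plain text responses on non-tool-capable models instead of failing outright.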



Development

Successfully merging this pull request may close these issues.

[Bug] OpenAI Responses API 400 Bad Request: 'Missing reasoning item' with Reasoning Models
