
[codex] fix(gui): respect model reasoning disablement #12021

Open
yzlu0917 wants to merge 1 commit into continuedev:main from yzlu0917:codex/issue-11265-ollama-reasoning

Conversation

@yzlu0917 yzlu0917 commented Apr 3, 2026

Summary

  • stop streamNormalInput from overriding chat models that explicitly set completionOptions.reasoning = false
  • keep the existing session-level reasoning toggle behavior for models that do allow reasoning overrides
  • add a GUI regression test that verifies chat requests do not send a forced reasoning flag for an Ollama model with reasoning disabled

Why

streamNormalInput always wrote completionOptions.reasoning from the session toggle whenever hasReasoningEnabled was set. That worked for most providers, but it broke Ollama chat models that explicitly disable reasoning in config: the GUI still sent reasoning: true, and Ollama rejected the request with errors like "does not support thinking".
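The intended precedence can be sketched as follows. This is an illustrative TypeScript sketch, not the actual Continue source; resolveReasoning and its parameter names are hypothetical, and only the precedence rule (an explicit model-level reasoning: false wins over the session toggle) comes from the PR description:

```typescript
// Hypothetical helper illustrating the fix's precedence rule.
type CompletionOptions = { reasoning?: boolean };

function resolveReasoning(
  modelOptions: CompletionOptions,
  sessionToggle: boolean | undefined,
): boolean | undefined {
  // A model that explicitly disables reasoning wins over the session toggle,
  // so an Ollama model with completionOptions.reasoning: false never gets
  // a forced reasoning: true.
  if (modelOptions.reasoning === false) {
    return false;
  }
  // Otherwise the session-level toggle (when set) applies as before.
  return sessionToggle ?? modelOptions.reasoning;
}
```

With this shape, resolveReasoning({ reasoning: false }, true) yields false, while models without an explicit disablement still follow the toggle.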

Validation

  • ran ./node_modules/.bin/vitest run src/redux/thunks/streamResponse.test.ts in gui
  • ran git diff --check
  • ran ./node_modules/.bin/tsc -p ./ --noEmit in gui and only hit the existing unrelated OPENROUTER_HEADERS export mismatch from ../core/llm/llms/OpenRouter.ts

Closes #11265


Summary by cubic

Respect model-level reasoning disablement in chat requests. Ollama models with completionOptions.reasoning: false no longer receive a forced reasoning flag; the session toggle still applies to models that allow overrides.

  • Bug Fixes
    • Short-circuit reasoning options when the model sets completionOptions.reasoning === false to avoid Ollama “does not support thinking” errors.
    • Add a GUI regression test to verify chat requests omit reasoning for disabled Ollama models.

Written for commit 85f7d1f. Summary will update on new commits.

@yzlu0917 yzlu0917 requested a review from a team as a code owner April 3, 2026 07:58
@yzlu0917 yzlu0917 requested review from sestinj and removed request for a team April 3, 2026 07:58
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Apr 3, 2026
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 2 files

@chatgpt-codex-connector

💡 Codex Review

if (options.tools?.length && ollamaMessages.at(-1)?.role === "user") {

P1: Keep Ollama tool calls gated by template capability

_streamChat now always injects chatOptions.tools whenever tools are present and the last message is from the user, but this commit removed the templateSupportsTools !== false guard that was populated from /api/show. For Ollama models whose template does not include tool support, this sends tool payloads anyway and can make chat requests fail with provider errors instead of falling back to non-tool responses. This is especially likely for heuristically matched/custom models where native tool support is not guaranteed.
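The guard the review describes can be sketched like this. This is an assumption-laden illustration, not Continue's actual code: shouldSendTools and OllamaModelInfo are hypothetical names, and only the templateSupportsTools !== false condition (populated from Ollama's /api/show) comes from the review comment:

```typescript
// Illustrative sketch of the capability gate the review says was removed.
interface OllamaModelInfo {
  // Derived from /api/show; undefined when capability is unknown.
  templateSupportsTools?: boolean;
}

function shouldSendTools(
  model: OllamaModelInfo,
  tools: unknown[] | undefined,
  lastMessageRole: string | undefined,
): boolean {
  // The conditions the diff still checks: tools present, last message from user.
  if (!tools?.length || lastMessageRole !== "user") {
    return false;
  }
  // The removed guard: only skip tools when the template is known NOT to
  // support them; unknown capability still sends tools, preserving the
  // original fallback behavior for natively tool-capable models.
  return model.templateSupportsTools !== false;
}
```

Under this sketch, a model whose template is known to lack tool support gets a plain chat request instead of a tool payload that the provider would reject.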



Labels

size:M This PR changes 30-99 lines, ignoring generated files.

Projects

Status: Todo

Development

Successfully merging this pull request may close these issues.

Unable to disable thinking/reasoning for Ollama models in Continue chat (JetBrains, v1.0.60)

1 participant