
fix(openai-adapter): map max_tokens to max_completion_tokens for DeepSeek reasoner #11889

Merged
RomneyDa merged 1 commit into main from max-tokens-stuff on Mar 26, 2026

Conversation

Contributor

@RomneyDa RomneyDa commented Mar 26, 2026

Based on the work by @BurakBebek1 in #10478, this PR extracts just the max_completion_tokens fix, as requested in the review comments there.

Summary

  • DeepSeek Reasoner models require max_completion_tokens instead of max_tokens (similar to OpenAI's o-series models)
  • Automatically maps the parameter when the API base includes api.deepseek.com or the model name includes deepseek-reasoner
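The mapping described above can be sketched roughly as follows. This is a hedged illustration, not the actual adapter code from the PR: the interface and function names (`ChatBody`, `isDeepseekReasoner`, `mapMaxTokens`) are hypothetical, and only the detection rules stated in this PR (API base contains `api.deepseek.com`, or model name contains `deepseek-reasoner`) are taken from the source.

```typescript
// Hypothetical sketch of the parameter mapping; names are illustrative.
interface ChatBody {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
  [key: string]: unknown;
}

// Detection rule as described in the PR summary.
function isDeepseekReasoner(model: string, apiBase?: string): boolean {
  return (
    (apiBase ?? "").includes("api.deepseek.com") ||
    model.includes("deepseek-reasoner")
  );
}

// Move max_tokens into max_completion_tokens for reasoner requests;
// leave all other request bodies untouched.
function mapMaxTokens(body: ChatBody, apiBase?: string): ChatBody {
  if (body.max_tokens !== undefined && isDeepseekReasoner(body.model, apiBase)) {
    const { max_tokens, ...rest } = body;
    return { ...rest, max_completion_tokens: max_tokens };
  }
  return body;
}
```

The key design point is that the original `max_tokens` key is dropped from the outgoing body, since sending both parameters (or only the unsupported one) is what causes the request failures this PR fixes.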

Test plan

  • Verify DeepSeek Reasoner models no longer fail due to max_tokens being sent
  • Verify no regression for standard OpenAI or other provider logic
  • Verify o-series handling still works as expected

Summary by cubic

Map max_tokens to max_completion_tokens for DeepSeek Reasoner models in the OpenAI adapter to prevent request failures. Applies when the API base includes api.deepseek.com or the model name includes deepseek-reasoner; standard OpenAI and o-series handling remain unchanged.

Written for commit 4f13740. Summary will update on new commits.

…Seek reasoner

DeepSeek Reasoner models require max_completion_tokens instead of max_tokens,
similar to OpenAI's o-series models. This maps the parameter automatically
when the API base is api.deepseek.com or the model name includes
"deepseek-reasoner".

Co-authored-by: Burak Bebek <BurakBebek1@users.noreply.github.com>
@RomneyDa RomneyDa requested a review from a team as a code owner March 26, 2026 18:08
@RomneyDa RomneyDa requested review from Patrick-Erichsen and removed request for a team March 26, 2026 18:08
@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Mar 26, 2026
Contributor

continue bot commented Mar 26, 2026

📝 Documentation Review: No docs update needed.

This PR is an internal fix that automatically maps max_tokens to max_completion_tokens for DeepSeek Reasoner models. The mapping is handled transparently in the OpenAI adapter, so users do not need to configure anything or be aware of it. The fix simply makes DeepSeek Reasoner models work correctly out of the box, which is the expected behavior and does not require documentation.

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 1 file

@github-project-automation github-project-automation bot moved this from Todo to In Progress in Issues and PRs Mar 26, 2026
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Mar 26, 2026
@RomneyDa RomneyDa merged commit 403f9f8 into main Mar 26, 2026
66 of 67 checks passed
@RomneyDa RomneyDa deleted the max-tokens-stuff branch March 26, 2026 18:51
@github-project-automation github-project-automation bot moved this from In Progress to Done in Issues and PRs Mar 26, 2026
@github-actions github-actions bot locked and limited conversation to collaborators Mar 26, 2026

Labels

lgtm: This PR has been approved by a maintainer.
size:S: This PR changes 10-29 lines, ignoring generated files.

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

2 participants