@shanevcantwell shanevcantwell commented Feb 13, 2026

Summary

  • Local/open-weight models respond to the default Apply prompt with prose summaries of changes instead of actual code, because defaultApplyPrompt returns a single-sentence plain string with no code-only constraint
  • Change defaultApplyPrompt to return ChatMessage[] with labeled ORIGINAL CODE / SUGGESTED EDIT sections, explicit "Output ONLY code" instructions, and an assistant prefill opening a code block
  • Returning messages instead of a plain string routes Apply through streamChat (instead of streamComplete), and the assistant prefill forces the model to continue with code rather than prose
  • The existing filterCodeBlockLines filter strips the closing fence from the output
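The new prompt shape can be sketched roughly as follows. This is an illustrative sketch only: the `ChatMessage` interface is a minimal stand-in for Continue's real type, and the exact instruction wording is an assumption, not the merged code.

```typescript
// Minimal stand-in for Continue's ChatMessage type (assumption; the real
// type lives in core and carries more fields).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Hypothetical sketch of the chat-based Apply prompt: labeled
// ORIGINAL CODE / SUGGESTED EDIT sections, an explicit code-only
// constraint, and an assistant prefill that opens a code block so the
// model must continue with code rather than prose.
function defaultApplyPrompt(originalCode: string, newCode: string): ChatMessage[] {
  return [
    {
      role: "user",
      content:
        `ORIGINAL CODE:\n\`\`\`\n${originalCode}\n\`\`\`\n\n` +
        `SUGGESTED EDIT:\n\`\`\`\n${newCode}\n\`\`\`\n\n` +
        `Apply the suggested edit to the original code. ` +
        `Output ONLY the full modified file as code, with no prose or explanations.`,
    },
    // Assistant prefill: the completion starts inside this code block.
    { role: "assistant", content: "```\n" },
  ];
}
```

Because the last message is a partial assistant turn, the model's continuation lands inside the already-open fence, which is what makes prose summaries structurally awkward for it to produce.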

Context

defaultApplyPrompt is the fallback Apply prompt used when no model-specific promptTemplates.apply is configured. It applies when a model calls edit_existing_file and the Apply step merges the suggested code into the original file.
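The fallback relationship can be pictured like this (hypothetical names and shapes for illustration; the real lookup lives in Continue's core and the template string below mirrors the previous prompt quoted next):

```typescript
// Hypothetical shapes, for illustration only.
type ApplyPrompt = (originalCode: string, newCode: string) => string;

interface ModelConfig {
  promptTemplates?: { apply?: ApplyPrompt };
}

// The previous single-string fallback prompt, as quoted in this PR.
const defaultApplyPrompt: ApplyPrompt = (originalCode, newCode) =>
  `${originalCode}\n\nThe following code was suggested as an edit:\n` +
  "```\n" + newCode + "\n```\nPlease apply it to the previous code.";

// A user-supplied promptTemplates.apply always wins; defaultApplyPrompt
// is used only when no model-specific template is configured.
function resolveApplyPrompt(model: ModelConfig): ApplyPrompt {
  return model.promptTemplates?.apply ?? defaultApplyPrompt;
}
```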

The previous prompt was:

{original_code}\n\nThe following code was suggested as an edit:\n```\n{new_code}\n```\nPlease apply it to the previous code.

Local models (tested with gpt-oss-20b via LM Studio) interpreted this as a request to describe changes rather than output code. The new prompt with explicit rules and assistant prefill produces clean code diffs.
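The cleanup step can be sketched as below. This is not Continue's actual filterCodeBlockLines (which operates on a streamed line generator); it is a simplified illustration of the same idea: since the assistant prefill already opened the code block, any fence lines in the completion are noise to drop.

```typescript
// Illustrative sketch: drop Markdown fence lines from a completion so
// only the code survives. With the assistant prefill opening the block,
// the completion is typically code followed by a closing ``` fence.
function filterClosingFence(lines: string[]): string[] {
  return lines.filter((line) => !line.trimStart().startsWith("```"));
}
```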

What this doesn't change

  • Claude Sonnet lazy-apply path (separate code path in core/edit/lazy/)
  • GPT/Claude edit prompts (gptEditPrompt, not defaultApplyPrompt)
  • Users with custom promptTemplates.apply in config (custom overrides defaultApplyPrompt)

Test plan

  • cd core && npm run vitest — 1654 tests pass, 0 failures
  • Tested with gpt-oss-20b (local model via LM Studio) — Apply produces code diffs instead of prose summaries
  • Verify cloud model Apply still works (they typically use model-specific prompts, not defaultApplyPrompt)

🤖 Generated with Claude Code


Summary by cubic

Make the default Apply prompt produce code (not prose) for local/open‑weight models by switching to a chat-based prompt with code-only instructions and an assistant prefill. Apply now routes through streamChat and returns the full modified file.

  • Bug Fixes
    • defaultApplyPrompt now returns ChatMessage[] with ORIGINAL CODE / SUGGESTED EDIT blocks and “output ONLY code” instructions.
    • Assistant prefill opens a code block to force code continuation; existing filter removes the closing fence.
    • No changes to model-specific prompts, lazy-apply path, or user overrides.

Written for commit 25026d7.

Local models respond to the existing single-sentence Apply prompt with
prose summaries instead of code. Return ChatMessage[] with explicit
code-only instructions and assistant prefill to force code output.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@shanevcantwell shanevcantwell requested a review from a team as a code owner February 13, 2026 05:23
@shanevcantwell shanevcantwell requested review from RomneyDa and removed request for a team February 13, 2026 05:23
@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Feb 13, 2026
@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 1 file


