[codex] fix: improve local LM Studio connection errors #12013

Open
yzlu0917 wants to merge 1 commit into continuedev:main from yzlu0917:codex/issue-11818-lmstudio-errors

Conversation


@yzlu0917 yzlu0917 commented Apr 3, 2026

Summary

  • switch the LM Studio default apiBase from localhost to 127.0.0.1 in both core and openai-adapters
  • add a dedicated LM Studio connection error in BaseLLM.fetch() for refused local connections
  • fix VS Code webview error handling so llm/streamChat returns friendly connection errors instead of the generic "Connection error." message
  • add regression coverage for the LM Studio fetch error path, VS Code error forwarding, and openai-compatible provider defaults
  • document the new LM Studio default and the IPv6 loopback rationale
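The error-mapping bullets above can be sketched roughly as follows. This is a hypothetical illustration of the behavior the PR describes, not the actual code in BaseLLM.fetch(); the function and constant names here are made up for the example.

```typescript
// Hypothetical sketch of the dedicated LM Studio connection error
// described in the PR summary; real names in BaseLLM.fetch() may differ.
const LM_STUDIO_DEFAULT_API_BASE = "http://127.0.0.1:1234/";

function isLocalConnectionRefused(err: unknown): boolean {
  // Node's fetch() surfaces refused sockets as ECONNREFUSED, usually
  // on the error's `cause` property.
  const cause = (err as { cause?: { code?: string; address?: string } })
    ?.cause;
  if (!cause) {
    return false;
  }
  return (
    cause.code === "ECONNREFUSED" &&
    (cause.address === "127.0.0.1" || cause.address === "::1")
  );
}

function lmStudioConnectionMessage(apiBase: string): string {
  // Actionable guidance instead of a generic "Connection error."
  return (
    `Unable to connect to the local LM Studio server at ${apiBase}. ` +
    `Make sure LM Studio is running and its local server is started.`
  );
}
```

A caller would check `isLocalConnectionRefused(err)` in the fetch error path and, when it matches an LM Studio provider, throw the friendlier message instead of re-raising the raw error.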

Why

Continue already had provider-specific connection guidance for local Ollama and Lemonade setups, but LM Studio users still often saw a generic "Connection error." message. Two additional issues made this worse: LM Studio still defaulted to localhost, which can hit IPv6 loopback resolution problems on some systems, and the VS Code streamChat path returned before its friendlier error mapping ran.

This change makes local LM Studio setup more reliable by default and surfaces actionable errors when the local server is unavailable.
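The loopback rationale can be illustrated with a small helper. The PR itself just changes the default constant; this `normalizeLoopbackApiBase` function is a hypothetical sketch of why pinning 127.0.0.1 sidesteps the resolver.

```typescript
// Hypothetical illustration: on systems where `localhost` resolves to
// the IPv6 loopback (::1) first, an IPv4-only local server appears
// unreachable. Using the literal 127.0.0.1 avoids DNS resolution
// entirely, so the connection always targets the IPv4 loopback.
function normalizeLoopbackApiBase(apiBase: string): string {
  const url = new URL(apiBase);
  if (url.hostname === "localhost") {
    url.hostname = "127.0.0.1";
  }
  return url.toString();
}
```

Under this sketch, `normalizeLoopbackApiBase("http://localhost:1234/")` yields the new default, `"http://127.0.0.1:1234/"`.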

Validation

  • ran npm run vitest -- llm/index.fetch.vitest.ts llm/llms/OpenAI-compatible.vitest.ts llm/llms/OpenAI-compatible-core.vitest.ts in core
  • ran npm test -- src/test/main.test.ts in packages/openai-adapters
  • ran vitest against extensions/vscode/src/webviewProtocol.vitest.ts using the repository's existing core vitest install

Closes #11818


Summary by cubic

Improves LM Studio local connection reliability and error clarity. Addresses #11818 by defaulting to 127.0.0.1 and surfacing friendly ECONNREFUSED messages in core and the VS Code webview.

  • Bug Fixes
    • Default LM Studio apiBase to http://127.0.0.1:1234/ in core and packages/openai-adapters.
    • Map ECONNREFUSED on 127.0.0.1/localhost to a specific LM Studio message in BaseLLM.fetch().
    • Return friendlier connection errors in VS Code llm/streamChat via centralized message mapping.
    • Add tests for LM Studio fetch errors, VS Code error forwarding, and provider defaults; update docs with IPv6 loopback rationale.

Written for commit 3ecc016. Summary will update on new commits.

@yzlu0917 yzlu0917 requested a review from a team as a code owner April 3, 2026 05:45
@yzlu0917 yzlu0917 requested review from sestinj and removed request for a team April 3, 2026 05:45
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Apr 3, 2026

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 10 files

@chatgpt-codex-connector

💡 Codex Review

if (options.tools?.length && ollamaMessages.at(-1)?.role === "user") {

P1: Restore Ollama template gate before sending tools

This change now attaches tools for every user-turn tool call request, but the same commit removed the earlier /api/show template check that set templateSupportsTools from .Tools. That means models whose Ollama template does not expose tool placeholders will now still receive tool payloads, which can cause runtime request failures or malformed prompting in tool-enabled chats. Reintroduce a runtime capability guard before populating chatOptions.tools.
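The guard the review asks to reintroduce could look roughly like this. It is a sketch under the review's description: `templateSupportsTools` and the `/api/show` response shape are assumptions for illustration, not Continue's actual code.

```typescript
// Hypothetical capability guard along the lines the review suggests.
interface ShowResponse {
  template?: string; // Ollama model template from /api/show
}

function templateSupportsTools(show: ShowResponse): boolean {
  // Templates that can render tool calls reference the `.Tools`
  // placeholder; without it, sending a tools payload can fail at
  // runtime or produce malformed prompting.
  return show.template?.includes(".Tools") ?? false;
}

function buildChatOptions(
  tools: object[] | undefined,
  show: ShowResponse,
): { tools?: object[] } {
  const chatOptions: { tools?: object[] } = {};
  // Gate on both the request having tools and the template supporting them.
  if (tools?.length && templateSupportsTools(show)) {
    chatOptions.tools = tools;
  }
  return chatOptions;
}
```

With this guard, a model whose template lacks `.Tools` simply gets no `tools` field rather than a payload it cannot render.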


if (e.message?.includes("https://proxy-server")) {

P2: Key proxy onboarding prompt off the computed error text

The proxy guidance prompt is now gated by e.message, but getFriendlyErrorMessage can derive proxy-server details from error.cause.message when e.message is generic (for example, "Connection error."). In that case users still get a proxy-related error response but no "Add API Key / Use Local Model" action prompt. Use the computed message (or the helper's output) for this branch so cause-derived proxy errors still trigger onboarding.
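The fix the review proposes can be sketched as follows. The `getFriendlyErrorMessage` body here is a stand-in for the real helper, and `shouldPromptProxyOnboarding` is a hypothetical name for the gating check.

```typescript
// Hypothetical sketch: derive the display message first, then gate the
// onboarding prompt on that computed text rather than the raw e.message.
function getFriendlyErrorMessage(e: Error): string {
  // Fall back to the nested cause when the top-level message is generic.
  const cause = (e as { cause?: { message?: string } }).cause;
  if (e.message === "Connection error." && cause?.message) {
    return cause.message;
  }
  return e.message;
}

function shouldPromptProxyOnboarding(e: Error): boolean {
  // Keyed off the computed message, so cause-derived proxy errors
  // still trigger the "Add API Key / Use Local Model" prompt.
  return getFriendlyErrorMessage(e).includes("https://proxy-server");
}
```

Checking `e.message` directly would miss the case where the proxy URL only appears on `e.cause.message`; the computed message covers both.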

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


Labels

size:L This PR changes 100-499 lines, ignoring generated files.

Projects

Status: Todo

Development

Successfully merging this pull request may close these issues.

Generic "Connection error." across multiple providers (tracking issue)

1 participant