fix: serialize non-standard FunctionResponse dicts in AnthropicLlm#4807
giulio-leone wants to merge 2 commits into google:main
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Hi @giulio-leone, thank you for your contribution! It appears you haven't yet signed the Contributor License Agreement (CLA). Please visit https://cla.developers.google.com/ to complete the signing process. Once the CLA is signed, we'll be able to proceed with the review of your PR. Thank you!
Force-pushed from db815f8 to 936a699
Rebased onto latest main. The upstream branch introduced an improved json.dumps serialization path; both the upstream PDF document tests and this PR's non-standard response test are included.
…-2-preview

The gemini-embedding-2-preview model requires the Vertex AI :embedContent endpoint instead of the legacy :predict endpoint used by older models (text-embedding-004, text-embedding-005). In google-genai <1.64.0, embed_content() unconditionally routed to :predict on Vertex AI, which returns FAILED_PRECONDITION for this model.

v1.64.0 (googleapis/python-genai@af40cc6) introduced model-aware dispatch in embed_content(): models with "gemini" in the name are routed to :embedContent via t_is_vertex_embed_content_model(), while older text-embedding-* models continue to use :predict. This version also enforces a single-content-per-call limit for the embedContent API, which is why FilesRetrieval sets embed_batch_size=1.

Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
PiperOrigin-RevId: 883689438
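The endpoint dispatch described in the commit message above can be sketched as follows. This is an illustrative reimplementation, not the google-genai library's actual code: the helper names `is_vertex_embed_content_model` and `embed_endpoint` are hypothetical stand-ins for the library's internal `t_is_vertex_embed_content_model()` routing.

```python
# Hypothetical sketch of the model-aware endpoint dispatch described above.
# google-genai >=1.64.0 implements this internally; the names here are
# illustrative, not the library's real API surface.

def is_vertex_embed_content_model(model: str) -> bool:
    """Models with 'gemini' in the name use the :embedContent endpoint."""
    return "gemini" in model.lower()

def embed_endpoint(model: str) -> str:
    # Older text-embedding-* models keep the legacy :predict endpoint.
    return ":embedContent" if is_vertex_embed_content_model(model) else ":predict"

print(embed_endpoint("gemini-embedding-2-preview"))  # :embedContent
print(embed_endpoint("text-embedding-004"))          # :predict
```

Under this dispatch, a request for gemini-embedding-2-preview no longer hits :predict, which is what previously produced the FAILED_PRECONDITION error.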
Rebased onto latest main, resolved conflicts with upstream json.dumps improvement. Keeps both upstream's json.dumps serialization for standard results AND the fallback path for non-standard response dicts (e.g. SkillToolset). Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Force-pushed from 936a699 to 31301a3
@rohityan Thanks for the heads-up! I'll get the Google CLA signed. Will follow up once it's done.
Summary
Fixes #4779
`part_to_message_block()` only handled `function_response` dicts containing `"content"` or `"result"` keys. Any other dict structure (e.g. SkillToolset returning `{"skill_name": ..., "instructions": ...}`) fell through to an empty string, causing Claude to never see the tool output.

Changes
- `anthropic_llm.py`: Added a `json.dumps()` fallback after the existing `"content"`/`"result"` checks, so non-standard response dicts are serialized to JSON text instead of being silently dropped.
- `test_anthropic_llm.py`: Added a regression test verifying that a SkillToolset-style response dict is round-tripped correctly through `part_to_message_block()`.

Testing
`test_anthropic_llm.py` tests pass (including the new one)
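The fallback described in the Changes section can be sketched as below. This is a minimal illustration, not the PR's actual diff: `serialize_function_response` is a hypothetical helper, whereas the real change lives inside `part_to_message_block()` in `anthropic_llm.py`.

```python
import json
from typing import Any

def serialize_function_response(response: dict[str, Any]) -> str:
    # Standard keys keep upstream's json.dumps serialization.
    if "content" in response:
        return json.dumps(response["content"])
    if "result" in response:
        return json.dumps(response["result"])
    # The fix: previously any other dict shape fell through to "",
    # so Claude never saw the tool output. Serialize it to JSON instead.
    return json.dumps(response)

# A SkillToolset-style response dict now survives the round trip.
print(serialize_function_response({"skill_name": "s", "instructions": "i"}))
```

The design choice is deliberately conservative: the existing `"content"`/`"result"` paths are untouched, and only the previously silent fall-through branch gains the `json.dumps()` fallback.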