I'm using `AzureOpenAIChatClient` with an Azure OpenAI gpt-5 model (Azure AI Foundry), but I can't get the reasoning output. Does `AzureOpenAIChatClient` not support extracting reasoning content? I adapted this sample code from `OpenAIResponsesClient` to `AzureOpenAIChatClient` / `AzureOpenAIResponsesClient`:

```python
from agent_framework import ChatAgent
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import DefaultAzureCredential

chat_client = AzureOpenAIChatClient(
    deployment_name="gpt-5",
    credential=DefaultAzureCredential(),
)

agent = ChatAgent(
    name="MathHelper",
    chat_client=chat_client,
    instructions="You are a personal math tutor. When asked a math question, "
    "reason over how best to approach the problem and share your thought process.",
    options={
        "temperature": 0.7,
        "reasoning_effort": "high",
    },
)
```
Hi @skaghzz, thanks for the question.

The Chat Completions API (`AzureOpenAIChatClient`) has limited support for reasoning content extraction compared to the Responses API. To get full reasoning output (with effort levels and summaries), you should use `AzureOpenAIResponsesClient` instead.

Here's how to adapt your sample:

```python
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import DefaultAzureCredential

agent = AzureOpenAIResponsesClient(credential=DefaultAzureCredential()).as_agent(
    name="MathHelper",
    instructions="You are a personal math tutor. When asked a math question, "
    "reason over how best to approach the problem and share your thought process.",
    default_options={"reasoning": {"effort": "high", "summary": "detailed"}},
)
```

Then to extract the reasoning content from the response:

```python
response = await agent.run("Solve 3x + 11 = 14")

for msg in response.messages:
    for content in msg.contents:
        if content.type == "text_reasoning":
            print(f"[Reasoning]: {content.text}")
        elif content.type == "text":
            print(f"[Answer]: {content.text}")
```

The notable difference is that `AzureOpenAIResponsesClient` uses the OpenAI Responses API under the hood (which fully supports reasoning configuration such as effort and summary), while `AzureOpenAIChatClient` uses the Chat Completions API, which doesn't surface reasoning content in the same way. For a complete working example, see:
Make sure your Azure OpenAI deployment uses a reasoning-capable model (like gpt-5) and that your deployment supports the Responses API. To get the latest Azure OpenAI Responses API support, I've seen the need to do:

```python
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
)
# Then pass this client into AzureOpenAIResponsesClient
```

See these docs for more info: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle?view=foundry-classic&tabs=python#v1-api
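As a quick sanity check on the endpoint shape, here is a tiny sketch that builds the v1 base URL from a resource name. `v1_base_url` is a hypothetical helper for illustration, not part of any SDK; the `/openai/v1/` path is the one shown in the snippet above.

```python
def v1_base_url(resource_name: str) -> str:
    # The v1 Responses API lives under the /openai/v1/ path of the
    # resource endpoint (trailing slash included).
    return f"https://{resource_name}.openai.azure.com/openai/v1/"


print(v1_base_url("my-resource"))
# https://my-resource.openai.azure.com/openai/v1/
```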