[azure-ai-projects] Evaluation sample sample_agent_evaluation.py has bug #44306
Open
Labels: AI Projects, Service Attention, customer-reported, needs-team-attention, question
Describe the bug
When running this sample, I get the error: `INVALID VALUE: (UserError) Missing inputs for line 1: 'data.response'`. This happens because the data mapping is wrong: `"response": "{{item.response}}"` maps from `item`, but there is no `response` field in `item`. The correct mapping should be `"response": "{{sample.output_text}}"`.

BTW, I have one question about custom code-based evaluators. Is there full documentation on custom evaluators? I would like to know what the parameters should be and how to get the responses of the agent. I ran an evaluation with a custom code-based evaluator targeted at an agent, but I cannot get the responses of the agent. I tried `sample.get("output_text")` and `sample.output_text`. Code of my evaluator:

To Reproduce
Steps to reproduce the behavior:
1.
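The mapping fix described above can be sketched as plain dictionaries; the dict shape is an assumption for illustration, since the real sample builds this mapping inside its evaluation configuration:

```python
# Broken mapping from the sample: dataset items carry no "response" field,
# so the service fails with "Missing inputs for line 1: 'data.response'".
broken_mapping = {"response": "{{item.response}}"}

# Fixed mapping: pull the response from the agent's sampled output instead.
fixed_mapping = {"response": "{{sample.output_text}}"}

print(fixed_mapping["response"])  # {{sample.output_text}}
```

The key point is only which template variable the `response` column binds to: `item.*` refers to fields present in the input dataset rows, while `sample.*` refers to the agent's generated output.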
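For the custom code-based evaluator question, a minimal sketch of what such an evaluator might look like is below. The exact signature the service expects is precisely what the question above asks about, so treat every name here (the `query`/`response` parameters and the return shape) as an assumption, not a confirmed API:

```python
# Hypothetical code-based evaluator: a plain callable whose keyword
# parameters are assumed to be filled from the evaluation's data mapping.
def response_length_evaluator(*, query: str, response: str) -> dict:
    """Toy grader: passes if the agent produced a non-empty response."""
    return {"length": len(response), "pass": len(response) > 0}

# Local smoke test with hard-coded strings, independent of any service.
result = response_length_evaluator(
    query="What is the capital of France?",
    response="Paris.",
)
print(result)  # {'length': 6, 'pass': True}
```

If the data mapping binds `response` to `{{sample.output_text}}` as in the fix above, the agent's output would be what arrives in the `response` parameter here, assuming the service passes mapped columns as keyword arguments.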
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.