Add metric and logging for activator-autoscaler connectivity #16318
base: main
Conversation
Welcome @prashanthjos! It looks like this is your first PR to knative/serving 🎉
Hi @prashanthjos. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: prashanthjos
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
This change adds observability for the websocket connection between the activator and autoscaler components:

- Add `activator_autoscaler_reachable` gauge metric (1 = reachable, 0 = not reachable)
- Log ERROR when the autoscaler is not reachable during stat sending
- Add a periodic connection status monitor (every 5s) to detect connection establishment failures
- Add unit tests for the new AutoscalerConnectionStatusMonitor function

The metric is recorded in two scenarios:
1. When SendRaw fails/succeeds during stat message sending (see the sketch below)
2. When the periodic status check detects that the connection is not established

This helps operators identify connectivity issues between the activator and autoscaler that could impact autoscaling decisions.
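To make the first scenario concrete, here is a minimal, hypothetical sketch of how the gauge could be recorded around the SendRaw call. It assumes a connection exposing `SendRaw([]byte) error` and uses illustrative names throughout; it is not the PR's actual code.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/metric"
	"go.uber.org/zap"
)

// rawSender is an illustrative stand-in for the activator's websocket connection.
type rawSender interface {
	SendRaw(msg []byte) error
}

// sendStat sends a serialized stat message and records reachability (1/0)
// on the gauge, logging an ERROR when the send fails.
func sendStat(ctx context.Context, logger *zap.SugaredLogger, conn rawSender, reachable metric.Int64Gauge, msg []byte) {
	if err := conn.SendRaw(msg); err != nil {
		reachable.Record(ctx, 0)
		logger.Errorw("Autoscaler is not reachable from activator. Stats were not sent.", "error", err)
		return
	}
	reachable.Record(ctx, 1)
}
```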
/ok-to-test
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main   #16318      +/-   ##
==========================================
+ Coverage   80.09%   80.16%   +0.07%
==========================================
  Files         215      216       +1
  Lines       13391    13429      +38
==========================================
+ Hits        10725    10766      +41
  Misses       2304     2304
+ Partials      362      359       -3

☔ View full report in Codecov by Sentry.
/retest
Related docs PR:
linkvt left a comment:
Thanks for your PR! I understand the requirement, but I think we could simplify/change the current implementation a bit; see my comments.
I haven't worked much with this code so far, so a more experienced maintainer might have more thoughts about the changes. I'm not sure if they are still on their winter break, though.
// AutoscalerConnectionStatusMonitor periodically checks if the autoscaler is reachable
// and emits metrics and logs accordingly.
func AutoscalerConnectionStatusMonitor(ctx context.Context, logger *zap.SugaredLogger, conn StatusChecker, mp metric.MeterProvider) {
I don't think we need this monitor as the stats are already reported every second, see
const reportInterval = time.Second
This means errors would be detected there already.
meter := provider.Meter(scopeName)
m.autoscalerReachable, err = meter.Int64Gauge(
I think we should use counters with labels here instead of a gauge. If the connection is flaky, we might always happen to scrape at moments when the gauge is 1.
If we have a counter with result=success or result=error we would:
- not miss any errors anymore
- be able to create an alert based on the success rate, e.g. fire if the success rate over the last 5 minutes drops below 95%
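A rough sketch of what that could look like with the OpenTelemetry Go API; the metric and scope names below are illustrative, not from the PR. An alert could then be written as the ratio of the error series to the total over a 5-minute window.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// newSendCounter creates a hypothetical counter that is incremented once per
// stat send attempt, labeled with the result.
func newSendCounter(mp metric.MeterProvider) (metric.Int64Counter, error) {
	meter := mp.Meter("knative.dev/serving/pkg/activator") // illustrative scope name
	return meter.Int64Counter(
		"kn.activator.autoscaler.send_attempts", // hypothetical metric name
		metric.WithDescription("Stat send attempts from activator to autoscaler, by result"),
	)
}

// recordSend adds one to the counter with result=success or result=error,
// so a flaky connection never hides failures between scrapes.
func recordSend(ctx context.Context, c metric.Int64Counter, err error) {
	result := "success"
	if err != nil {
		result = "error"
	}
	c.Add(ctx, 1, metric.WithAttributes(attribute.String("result", result)))
}
```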
@@ -0,0 +1,53 @@
/*
Would it maybe make sense to add this to the existing metrics in https://github.com/knative/serving/blob/main/pkg/activator/handler/metrics.go?
| } | ||
// Give some time for the goroutine to process the error and log
time.Sleep(100 * time.Millisecond)
Do we actually need this?
I always try not to add sleeps in my tests, as they increase the duration of the whole test suite. Recent Go versions also contain synctest to work around this, but less code is always better 😀
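For reference, one sleep-free alternative is to poll for the expected side effect with a deadline (or to use the testing/synctest package mentioned above). A minimal sketch with illustrative names, not taken from the PR:

```go
package handler_test // illustrative package name

import (
	"testing"
	"time"
)

// waitFor polls cond until it returns true or the deadline passes, failing the
// test otherwise; this replaces a fixed time.Sleep with a bounded wait.
func waitFor(t *testing.T, cond func() bool) {
	t.Helper()
	deadline := time.Now().Add(2 * time.Second)
	for time.Now().Before(deadline) {
		if cond() {
			return
		}
		time.Sleep(5 * time.Millisecond)
	}
	t.Fatal("condition not met before deadline")
}
```

The test would then call something like `waitFor(t, func() bool { return fakeConn.sendCalls() > 0 })` (a hypothetical accessor on the fake connection) instead of sleeping a fixed 100ms.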
Description
This PR adds observability for the websocket connection between the activator and autoscaler components. When the autoscaler is not reachable, operators currently have no easy way to identify this issue, which can lead to autoscaling failures.
Changes
New Metric
- `kn.activator.autoscaler.reachable`: 1 (reachable), 0 (not reachable)

New Logging
- "Autoscaler is not reachable from activator. Stats were not sent." (on send failure)
- "Autoscaler is not reachable from activator." (on connection check failure)

How It Works
The metric is recorded in two scenarios:
- Periodic check (every 5s): see the sketch after this list
- On stat send:
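A minimal sketch of the periodic check, assuming `StatusChecker` exposes a `Status() error` method (as the signature in the diff above suggests); the metric name matches the description, everything else is illustrative rather than the PR's actual code.

```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel/metric"
	"go.uber.org/zap"
)

// StatusChecker is assumed to report the websocket connection status.
type StatusChecker interface {
	Status() error
}

// AutoscalerConnectionStatusMonitor checks the connection every 5s and records
// the reachability gauge, logging an ERROR when the connection is down.
func AutoscalerConnectionStatusMonitor(ctx context.Context, logger *zap.SugaredLogger, conn StatusChecker, mp metric.MeterProvider) {
	meter := mp.Meter("knative.dev/serving/pkg/activator") // illustrative scope name
	reachable, err := meter.Int64Gauge("kn.activator.autoscaler.reachable")
	if err != nil {
		logger.Errorw("Failed to create reachability gauge", "error", err)
		return
	}

	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := conn.Status(); err != nil {
				reachable.Record(ctx, 0)
				logger.Errorw("Autoscaler is not reachable from activator.", "error", err)
			} else {
				reachable.Record(ctx, 1)
			}
		}
	}
}
```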
Testing
go test ./pkg/activator/... -v