## v0.13.0

Changes since `v0.12.0`:

## Urgent Upgrade Notes

### (No, really, you MUST read this before you upgrade)

- Helm:

  - Fixed KueueViz installation when `enableKueueViz=true` is used with the default values for the image-specifying parameters.
  - Split the image-specifying parameters into separate repository and tag parameters, for both the KueueViz backend and frontend.

  If you are using Helm charts and installing KueueViz with custom images,
  you need to specify them via the `kueueViz.backend.image.repository`, `kueueViz.backend.image.tag`,
  `kueueViz.frontend.image.repository`, and `kueueViz.frontend.image.tag` parameters; see the values sketch after this list. (#5400, @mbobrovskyi)
- ProvisioningRequest: Kueue now supports and manages ProvisioningRequests in `v1` rather than `v1beta1`.

  If you are using ProvisioningRequests with the ClusterAutoscaler,
  ensure that your ClusterAutoscaler supports the `v1` API (1.31.1+). (#4444, @kannon92)
- TAS: Drop support for the MostFreeCapacity mode.

  The `TASProfileMostFreeCapacity` feature gate is no longer available.
  If you currently specify it, you must remove it from `.featureGates` in your Kueue Config or from the kueue-controller-manager command-line flag `--feature-gates`; see the configuration sketch after this list. (#5536, @lchrzaszcz)
- The API Priority and Fairness configuration for the visibility endpoint is now installed by default.

  If your cluster runs Kubernetes 1.28 or older, you need to either update Kubernetes (to 1.29+) or remove the FlowSchema and PriorityLevelConfiguration from the Kueue installation manifests. (#5043, @mbobrovskyi)
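
As referenced in the Helm note above, a minimal `values.yaml` sketch for custom KueueViz images. Only the parameter paths come from this release; the repository and tag values are placeholders:

```yaml
# Hypothetical custom images -- only the parameter paths are defined by the chart.
kueueViz:
  backend:
    image:
      repository: registry.example.com/kueueviz-backend
      tag: v0.13.0
  frontend:
    image:
      repository: registry.example.com/kueueviz-frontend
      tag: v0.13.0
```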
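
For the TAS note above, a hedged sketch of where the retired gate may appear; the surrounding Configuration layout is assumed, and the gate entry must be deleted before upgrading:

```yaml
apiVersion: config.kueue.x-k8s.io/v1beta1
kind: Configuration
featureGates:
  TASProfileMostFreeCapacity: true  # no longer available in v0.13; remove this line
# If set via command-line flags instead, drop the gate from:
#   --feature-gates=TASProfileMostFreeCapacity=true
```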

## Upgrading steps

### 1. Backup Cohort Resources (skip if you are not using the Cohorts API):

```bash
kubectl get cohorts.kueue.x-k8s.io -o yaml > cohorts.yaml
```

### 2. Update apiVersion in Backup File (skip if you are not using the Cohorts API):

Replace `v1alpha1` with `v1beta1` in `cohorts.yaml` for all resources, and rename the `parent` field to `parentName`:

```bash
sed -i -e 's/v1alpha1/v1beta1/g' cohorts.yaml
sed -i -e 's/^    parent: \(\S*\)$/    parentName: \1/' cohorts.yaml
```

### 3. Delete the old CRD:

```bash
kubectl delete crd cohorts.kueue.x-k8s.io
```

### 4. Install Kueue v0.13.x:

Follow the instructions [here](https://kueue.sigs.k8s.io/docs/installation/#install-a-released-version) to install.

### 5. Restore Cohorts Resources (skip if you are not using the Cohorts API):

```bash
kubectl apply -f cohorts.yaml
```

## Changes by Kind

### Deprecation

- Promote the Cohort CRD version to v1beta1

  The Cohort CRD `v1alpha1` is no longer supported.
  The `.spec.parent` field in Cohort `v1alpha1` was replaced with `.spec.parentName` in Cohort `v1beta1`. (#5595, @tenzen-y)
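
  A minimal before/after sketch of the rename (the cohort and parent names are illustrative):

  ```yaml
  # v1alpha1 (no longer served):
  apiVersion: kueue.x-k8s.io/v1alpha1
  kind: Cohort
  metadata:
    name: team-a
  spec:
    parent: org-root
  ---
  # v1beta1 equivalent:
  apiVersion: kueue.x-k8s.io/v1beta1
  kind: Cohort
  metadata:
    name: team-a
  spec:
    parentName: org-root
  ```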

### Feature

- AFS: Introduce an "entry penalty" for newly admitted workloads in a LQ.
  This mechanism prevents exploiting a flaw in the previous design, which allowed
  multiple workloads from a single LQ to be submitted and admitted before their usage was
  accounted for by the admission fair sharing mechanism. (#5933, @IrvingMg)
- AFS: Preemption candidates are now ordered within a ClusterQueue with respect to the LQ's usage.
  The ordering of candidates coming from other ClusterQueues is unchanged. (#5632, @PBundyra)
- Add the `pods_ready_to_evicted_time_seconds` metric, which measures the time between a workload's start,
  based on the PodsReady condition, and its eviction. (#5923, @amy)
- Flavor Fungibility: Introduce a new mode that allows preferring preemption over borrowing when choosing a flavor.
  In this mode the preference is decided based on the FlavorFungibility strategy. This behavior is behind the
  `FlavorFungibilityImplicitPreferenceDefault` Alpha feature gate (disabled by default). (#6132, @pajakd)
- Graduate ManagedJobsNamespaceSelector to GA (#5987, @kannon92)
- Helm: Allow setting the controller-manager's Pod `PriorityClassName` (#5631, @kaisoz)
- Helm: Introduce new parameters to configure the KueueViz installation:
  - `kueueViz.backend.ingress` and `kueueViz.frontend.ingress` to configure ingress
  - `kueueViz.imagePullSecrets` and `kueueViz.priorityClassName` (#5815, @btwseeu78)
- Helm: Support specifying `nodeSelector` and `tolerations` for all Kueue components (#5820, @zmalik)
- Introduce the ManagedJobsNamespaceSelectorAlwaysRespected feature, which ensures that Kueue only manages Jobs in namespaces matched by the selector: even if a Job carries the queue-name label, it is ignored when its namespace is not managed by Kueue (#5638, @PannagaRao)
- KueueViz: Add View YAML support (#5992, @samzong)
- Add the `kueue_controller_version` Prometheus metric, which reports the Git commit ID used to compile the Kueue controller (#5846, @rsevilla87)
- MultiKueue: Introduce the Dispatcher API, which allows providing an external dispatcher that nominates
  a subset of worker clusters for workload admission, instead of all clusters; see the configuration sketch after this list.

  The name of the dispatcher, either internal or external, is specified in the global config map under the
  `multikueue.dispatcherName` field. The following internal dispatchers are supported:
  - `kueue.x-k8s.io/multikueue-dispatcher-all-at-once` - nominates all clusters at once (default, used if the name is not specified)
  - `kueue.x-k8s.io/multikueue-dispatcher-incremental` - nominates clusters incrementally at constant time intervals

  **Important**: the current implementation requires external dispatchers to use
  `kueue-admission` as the field manager when patching the `.status.nominatedClusterNames` field. (#5782, @mszadkow)
- Promoted ObjectRetentionPolicies to Beta. (#6209, @mykysha)
- Support Elastic (dynamically sized) Jobs in Alpha, as designed in [KEP-77](https://github.com/kubernetes-sigs/kueue/tree/main/keps/77-dynamically-sized-jobs).
  The implementation supports resizing (scaling up and down) of batch/v1 Jobs and is behind the Alpha
  `ElasticJobsViaWorkloadSlices` feature gate. Jobs that are subject to resizing need to have the
  `kueue.x-k8s.io/elastic-job` annotation added at creation time; see the Job sketch after this list. (#5510, @ichekrygin)
- Support for Kubernetes 1.33 (#5123, @mbobrovskyi)
- TAS: Add the FailFast node-failure handling mode (#5861, @PBundyra)
- TAS: Co-locate the leader and workers of a single replica in LeaderWorkerSet (#5845, @lchrzaszcz)
- TAS: Increase the maximum number of topology levels (`.spec.levels`) from 8 to 16. (#5635, @sohankunkerkar)
- TAS: Introduce a mode that triggers node replacement as soon as the workload's Pods are terminating
  on a node that is not ready. This behavior is behind the `ReplaceNodeOnPodTermination` Alpha feature gate
  (disabled by default). (#5931, @pajakd)
- TAS: Introduce two-level scheduling (#5353, @lchrzaszcz)
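
For the MultiKueue Dispatcher item above, a hedged configuration sketch; the field path and the dispatcher names come from the release note, while the `multiKueue` block spelling and surrounding layout are assumptions:

```yaml
apiVersion: config.kueue.x-k8s.io/v1beta1
kind: Configuration
multiKueue:
  # Nominate worker clusters incrementally; omit this field to get the
  # default all-at-once dispatcher.
  dispatcherName: kueue.x-k8s.io/multikueue-dispatcher-incremental
```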
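
And for the Elastic Jobs item, a sketch of a resizable Job. Only the annotation key comes from the release note; the annotation value, names, and image are illustrative, and the `ElasticJobsViaWorkloadSlices` gate must be enabled:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: elastic-sample
  annotations:
    kueue.x-k8s.io/elastic-job: "true"  # must be set at creation time; value assumed
  labels:
    kueue.x-k8s.io/queue-name: user-queue  # illustrative LocalQueue
spec:
  parallelism: 2  # resizing scales this up or down
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example.com/worker:latest  # placeholder image
```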

### Bug or Regression

- Emit a Workload event indicating eviction when the LocalQueue is stopped (#5984, @amy)
- Fix a bug that would allow a user to bypass localQueueDefaulting. (#5451, @dgrove-oss)
- Fix a bug where the GroupKindConcurrency in the Kueue Config was not propagated to the controllers (#5818, @tenzen-y)
- Fix incorrect workload admission after a CQ is deleted in a cohort, reducing the amount of available quota. The culprit was that the cached amount of quota was not updated on CQ deletion. (#5985, @amy)
- Fix a bug where Kueue, upon startup, would incorrectly admit and then immediately deactivate
  already deactivated Workloads.

  This bug also prevented the ObjectRetentionPolicies feature from deleting Workloads
  that were deactivated by Kueue before the feature was enabled. (#5625, @mbobrovskyi)
- Fix a bug where the webhook certificate setting under `controllerManager.webhook.certDir` was ignored by the internal cert manager, effectively always defaulting to `/tmp/k8s-webhook-server/serving-certs`. (#5432, @ichekrygin)
- Fixed a bug that prevented Kueue from admitting a Workload after the queue-name label was set. (#5047, @mbobrovskyi)
- HC: Add a Cohort Go client library (#5597, @tenzen-y)
- Helm: Fix a templating bug when configuring managedJobsNamespaceSelector. (#5393, @mtparet)
- MultiKueue: Fix a bug where a batch/v1 Job's final state was not synced from the worker cluster to the management cluster when the `MultiKueueBatchJobWithManagedBy` feature gate is disabled. (#5615, @ichekrygin)
- MultiKueue: Fix a bug where a Job deleted on the manager cluster did not trigger deletion of its pods on the worker cluster. (#5484, @ichekrygin)
- RBAC permissions allowing admins to read and update the Cohort API are now created out of the box. (#5431, @vladikkuzn)
- TAS: Fix a bug where the NodeFailureController name was incompatible with Prometheus (#5819, @tenzen-y)
- TAS: Fix a bug where Kueue unintentionally gave up scheduling a workload in LeastFreeCapacity mode if there was at least one unmatched domain. (#5803, @PBundyra)
- TAS: Fix a bug where the LeastFreeCapacity algorithm did not respect level ordering (#5464, @tenzen-y)
- TAS: Fix a bug where the tas-node-failure-controller was unexpectedly started in HA mode even though the replica was not the leader. (#5848, @tenzen-y)
- TAS: Fix a bug which prevented admitting any workloads if the first resource flavor is a reservation and the fallback uses ProvisioningRequest. (#5426, @mimowo)
- TAS: Fix a bug where Kueue crashed if the preemption target, due to quota, was using a node which had already been deleted. (#5833, @mimowo)
- TAS: Fix a bug which would trigger unnecessary second-pass scheduling for nodeToReplace
  in the following scenarios:
  1. the workload is finished
  2. the workload is evicted
  3. the node to replace is not present in the workload's TopologyAssignment domains (#5585, @mimowo)
- TAS: Fix a scenario where a deleted workload still lived in the cache. (#5587, @mimowo)
- Use simulation of preemption for more accurate flavor assignment.
  In particular, in certain scenarios when preemption while borrowing is enabled,
  the previous heuristic would wrongly state that preemption was possible. (#5529, @pajakd)
- Use simulation of preemption for more accurate flavor assignment.
  In particular, the previous heuristic would wrongly state that preemption
  in a flavor was possible even if no preemption candidates could be found.

  Additionally, in scenarios where preemption while borrowing is enabled,
  a flavor in which reclaim is possible is preferred over a flavor where
  priority-based preemption is required. This is consistent with how flavors
  are prioritized when preemption without borrowing is used. (#5698, @gabesaba)

### Other (Cleanup or Flake)

- KueueViz: Reduce the image size from 1.14 GB to 267 MB, resulting in faster pulls and shorter startup time. (#5860, @mbobrovskyi)