What were you trying to accomplish?
Delete the cluster. The envelope encryption key had previously been deleted (in my case, probably during an incomplete or failed deletion attempt of the surrounding infrastructure).
What happened?
Deleting the cluster fails:
eksctl delete cluster -f eksctl-ClusterConfig.yaml --disable-nodegroup-eviction
2025-12-01 10:53:59 [ℹ] deleting EKS cluster "xxx"
2025-12-01 10:54:00 [ℹ] deleted 0 Fargate profile(s)
2025-12-01 10:54:00 [✔] kubeconfig has been updated
2025-12-01 10:54:00 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
Error: cannot delete Kubernetes Ingress default/xxx: Internal error occurred: failed to decrypt DEK, error: rpc error: code = Unknown desc = failed to decrypt operation error KMS: Decrypt, https response error StatusCode: 400, RequestID: xxx, KMSInvalidStateException: arn:aws:kms:eu-central-1:xxx:key/xxxx is pending deletion.
With --force, it just gets stuck at "cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress".
How to reproduce it?
Create a cluster with secrets encryption enabled.
secretsEncryption:
keyARN: arn:aws:kms:eu-central-1:xxx:key/xxxx
Then delete that KMS key first, and then delete the cluster.
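For reference, a minimal ClusterConfig that sets up envelope encryption looks roughly like this (cluster name, region, and key ARN are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: xxx
  region: eu-central-1

# Envelope encryption of Kubernetes secrets with the given KMS key.
# Scheduling this key for deletion is what triggers the bug on
# `eksctl delete cluster`.
secretsEncryption:
  keyARN: arn:aws:kms:eu-central-1:xxx:key/xxxx
```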
Anything else we need to know?
A good solution would be to wrap the "cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress" step in a timeout and, when --force is given, print a warning that load balancers may be left behind but continue deleting the cluster.
Versions