Labels
kind/bug, service/elasticache
Description
Describe the bug
Updating spec.userIDs on an existing UserGroup resource does not result in any change in AWS ElastiCache, even though the ACK controller detects drift and reports a successful update.
Steps to reproduce
1. Create a `UserGroup` manifest with a single user (e.g., `user-1`).
2. Apply the manifest (`kubectl apply`). The resource is created successfully in AWS.
3. Edit the manifest to add a second user to the `userIDs` list (e.g., `[user-1, user-2]`).
4. Apply the updated manifest (a scripted version of these steps is sketched below).
5. Observe the controller logs: drift is detected (A vs B) and "updated resource" is reported.
6. Describe the AWS resource (`aws elasticache describe-user-groups`): the user group still contains only `user-1`.
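For reference, here is a scripted version of the steps above — a minimal sketch, assuming the ACK `elasticache.services.k8s.aws/v1alpha1` API, a hypothetical user group ID `test-user-group`, and pre-existing ElastiCache users `user-1` and `user-2`:

```sh
# Steps 1-2: create the user group with a single user (all names hypothetical).
kubectl apply -f - <<'EOF'
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: UserGroup
metadata:
  name: test-user-group
spec:
  engine: redis
  userGroupID: test-user-group
  userIDs:
    - user-1
EOF

# Steps 3-4: re-apply with a second user added to spec.userIDs.
kubectl apply -f - <<'EOF'
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: UserGroup
metadata:
  name: test-user-group
spec:
  engine: redis
  userGroupID: test-user-group
  userIDs:
    - user-1
    - user-2
EOF

# Step 6: inspect the actual state in AWS.
aws elasticache describe-user-groups --user-group-id test-user-group \
  --query 'UserGroups[0].UserIds'
```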
Logs (redacted)
Drift is detected, but the update is ineffective.
{"level":"info","msg":"desired resource state has changed","kind":"UserGroup","diff":[{"Path":{"Parts":["Spec","UserIDs"]},"A":["user-1","user-2"],"B":["user-1"]}]}
{"level":"info","msg":"updated resource","kind":"UserGroup","generation":2}
Expected outcome
The UserGroup should be updated to include both users. Instead, the newly added user is never applied, despite the controller reporting a successful update.
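To see the mismatch directly, the desired spec in the cluster can be compared with what AWS reports (same hypothetical names as in the sketch above):

```sh
# Desired state in the cluster: spec.userIDs lists both users.
kubectl get usergroup test-user-group -o jsonpath='{.spec.userIDs}'; echo

# Actual state in AWS: UserIds still lists only user-1.
aws elasticache describe-user-groups --user-group-id test-user-group \
  --query 'UserGroups[0].UserIds'
```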
Environment
- Kubernetes version: v1.33.5
- Using EKS: yes, v1.33
- AWS service targeted: ElastiCache
- ACK elasticache-controller version: 1.3.2