✨ (kustomize/v2): Add NamespaceTransformer to support multi-namespace… #5299
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: AlirezaPourchali

The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Welcome @AlirezaPourchali!
Hi @AlirezaPourchali. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
✨ (kustomize/v2): Add NamespaceTransformer to support multi-namespace RBAC

Replaces the hardcoded namespace field in `config/default/kustomization.yaml` with a NamespaceTransformer using `unsetOnly: true`. This preserves namespaces from RBAC markers while adding the default namespace to other resources.
Force-pushed from f346fbd to e22404a
```diff
   control-plane: controller-manager
   name: project-controller-manager-metrics-monitor
-  namespace: {{ .Release.Namespace }}
+  namespace: system
```
We cannot do those changes in the Helm Charts.
I've noted the Helm chart issue in the PR description and will handle it separately if needed. The core Kustomize fix is independent of Helm concerns. I can handle it via the v2-alpha plugin, like the v1-alpha plugin does.

Thanks again for your guidance! 🙏
Could you please help me understand why these changes were required?
From what I can see, we can’t change this safely — if we merge it as-is, we’ll introduce a bug. The namespace needs to come from .Release.Namespace, since that value is defined at install time (e.g., when running helm install --namespace=<value>).
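To illustrate the install-time binding, here is a minimal standalone Go sketch (not Helm itself) that mimics how `{{ .Release.Namespace }}` is resolved from the value passed to `helm install --namespace`:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// A fragment like the one in the generated chart; Helm renders it with
	// the namespace chosen at install time (e.g. --namespace prod-ns).
	manifest := template.Must(template.New("svc").Parse(
		"metadata:\n  namespace: {{ .Release.Namespace }}\n"))

	// "prod-ns" stands in for the value of `helm install --namespace prod-ns`.
	values := map[string]map[string]string{"Release": {"Namespace": "prod-ns"}}

	// Prints:
	// metadata:
	//   namespace: prod-ns
	if err := manifest.Execute(os.Stdout, values); err != nil {
		panic(err)
	}
}
```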
To be clear: this Helm side effect is a bug that needs to be fixed. The changes in the Helm charts are unintended consequences of the NamespaceTransformer approach. The core Kustomize fix is correct, but it exposed an existing inconsistency in how we handle the metrics service namespace.
Why This Worked Before But Breaks Now

Before (with hardcoded `namespace:` field):

- `metrics_service.yaml` is generated with `namespace: system`
- Kustomize's hardcoded `namespace: project-v4-system` forcefully overrode ALL namespaces
- Final output: `namespace: project-v4-system` (not `system`)
- The Helm plugin saw `project-v4-system`, matched the pattern, replaced it with `{{ .Release.Namespace }}` ✅

Now (with NamespaceTransformer + `unsetOnly: true`):

- `metrics_service.yaml` is generated with `namespace: system`
- The NamespaceTransformer sees the namespace is already set, so it preserves it (because `unsetOnly: true`)
- Final output: `namespace: system` (unchanged)
- The Helm plugin sees `system`, which doesn't match the `project-v4-system` pattern, so no replacement happens ❌
The aggressive namespace override was masking an existing inconsistency. The metrics_service.go template should have been using the same pattern as other resources all along, but nobody noticed because the hardcoded namespace: field was forcing everything to the correct value.
Now that we use unsetOnly: true (which correctly preserves explicit namespaces for multi-namespace RBAC), the inconsistency is exposed.
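To make the `unsetOnly` semantics concrete, here is a toy Go sketch of the rule (my illustration, not the actual Kustomize transformer code):

```go
package main

import "fmt"

// applyDefaultNamespace models the unsetOnly rule: the default namespace is
// applied only when a resource does not already declare one.
func applyDefaultNamespace(existing, defaultNS string, unsetOnly bool) string {
	if unsetOnly && existing != "" {
		return existing // explicit namespace (e.g. from an RBAC marker) wins
	}
	return defaultNS
}

func main() {
	fmt.Println(applyDefaultNamespace("production", "project-v4-system", true)) // production
	fmt.Println(applyDefaultNamespace("", "project-v4-system", true))           // project-v4-system
}
```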
The Root Cause
The issue is in how the `metrics_service.go` template is defined. Currently:

File: `pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/kdefault/metrics_service.go` (line 52)

```yaml
metadata:
  name: controller-manager-metrics-service
  namespace: system # ← Hardcoded "system"
```

The Helm v2-alpha plugin's `substituteNamespace()` function expects the pattern `<projectname>-system` (e.g., `project-v4-system`):
File: `pkg/plugins/optional/helm/v2alpha/scaffolds/internal/kustomize/helm_templater.go` (line 114)

```go
func (t *HelmTemplater) substituteNamespace(yamlContent string, resource *unstructured.Unstructured) string {
	hardcodedNamespace := t.projectName + "-system" // ← Expects "project-v4-system"
	namespaceTemplate := "{{ .Release.Namespace }}"
	yamlContent = strings.ReplaceAll(yamlContent, hardcodedNamespace, namespaceTemplate)
	return yamlContent
}
```

When it encounters just `system`, it doesn't match the expected pattern, so it doesn't get replaced with `{{ .Release.Namespace }}`.
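A quick standalone repro of that mismatch (hypothetical `main` package; the real logic lives in `HelmTemplater.substituteNamespace`):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	hardcoded := "project-v4" + "-system" // pattern the plugin searches for
	tmpl := "{{ .Release.Namespace }}"

	// The metrics-service namespace is plain "system", so ReplaceAll finds
	// nothing to replace and the line passes through unchanged.
	fmt.Println(strings.ReplaceAll("namespace: project-v4-system", hardcoded, tmpl))
	// -> namespace: {{ .Release.Namespace }}
	fmt.Println(strings.ReplaceAll("namespace: system", hardcoded, tmpl))
	// -> namespace: system (unchanged)
}
```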
Two Options to Fix
Option 1: Fix metrics_service.go Template (Recommended)
Change the hardcoded `system` to use the project name pattern:

```go
const metricsServiceTemplate = `apiVersion: v1
kind: Service
metadata:
  labels:
    control-plane: controller-manager
    app.kubernetes.io/name: {{ .ProjectName }}
    app.kubernetes.io/managed-by: kustomize
  name: controller-manager-metrics-service
  namespace: {{ .ProjectName }}-system # ← Use project name pattern
spec:
  ...
```

This way:

- Kustomize with the NamespaceTransformer will preserve `{{ .ProjectName }}-system` (it's a template variable, not a literal namespace)
- The Helm v2-alpha plugin will match the pattern and convert it to `{{ .Release.Namespace }}`
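To see the two stages end to end, a small standalone sketch (my illustration of Option 1, not kubebuilder code; the name `project-v4` is from the thread's examples):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	line := "namespace: {{ .ProjectName }}-system"

	// Stage 1: kubebuilder scaffolding renders the project name.
	scaffolded := strings.ReplaceAll(line, "{{ .ProjectName }}", "project-v4")
	fmt.Println(scaffolded) // namespace: project-v4-system

	// Stage 2: the Helm v2-alpha plugin now matches the expected pattern.
	helmified := strings.ReplaceAll(scaffolded, "project-v4-system", "{{ .Release.Namespace }}")
	fmt.Println(helmified) // namespace: {{ .Release.Namespace }}
}
```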
Option 2: Update Helm v2-alpha Plugin
Add a fallback in `substituteNamespace()` to also handle plain `system`:

```go
func (t *HelmTemplater) substituteNamespace(yamlContent string, resource *unstructured.Unstructured) string {
	hardcodedNamespace := t.projectName + "-system"
	namespaceTemplate := "{{ .Release.Namespace }}"
	yamlContent = strings.ReplaceAll(yamlContent, hardcodedNamespace, namespaceTemplate)
	// Fallback: also replace plain "system" (for the metrics service)
	yamlContent = strings.ReplaceAll(yamlContent, "namespace: system", "namespace: "+namespaceTemplate)
	return yamlContent
}
```

My Recommendation
I think Option 1 is cleaner - the metrics_service.go template should follow the same pattern as other resources. This would make the Helm plugin's logic simpler and more consistent.
This needs to be fixed before merging this PR. The core NamespaceTransformer changes are sound, but we can't introduce this Helm regression.
Should I:

1. Update `metrics_service.go` to use the `{{ .ProjectName }}-system` pattern and regenerate testdata? (My preference)
2. Update the Helm v2-alpha plugin with the fallback for plain `system`?
3. Revert to the hardcoded `namespace:` field approach until we can coordinate both fixes?
I'm ready to implement whichever solution you prefer. Let me know and I'll get it done! 🙏
camilamacedo86 left a comment
Hi @AlirezaPourchali 👋
Thank you for raising this and for the work on it — much appreciated.
I think the first step should be opening a PR against controller-tools.
See this comment for context:
#5148 (comment)
Once we have the fix in controller-tools, we can then evaluate what is needed in kubebuilder. Having the fix in place will allow us to properly test and make sure the solution really works.
With the controller-tools fix, I don't think we need all of these changes in kubebuilder.
It seems we would only need one additional file:
config/default/namespace-transformer.yaml
(as shown in my example comment).
Also, instead of adding this to the default scaffold, it might be better to document it. This is probably not needed for ~90% of users, and adding a new default file may not help most people.
Possible alternatives:
- Add documentation in the FAQ: https://book.kubebuilder.io/faq
- Or create a small reference/tutorial explaining this use case
We could include a concrete example (similar to the CronJob references), with:
- a small mock project
- e2e tests
- and Helm validation
to ensure the approach works end-to-end.
What do you think? Does this approach make sense?
Thanks again for the contribution 🙏
Hi @camilamacedo86 👋

Thank you for the feedback! I wanted to clarify the situation with controller-tools after testing extensively.

Controller-gen Already Works Correctly ✅

I've verified that controller-gen from controller-tools is NOT the issue - it already correctly generates namespace fields in RBAC manifests. Here's the proof:

Test Setup

I have a controller with namespace-specific RBAC markers:

```go
// +kubebuilder:rbac:groups=apps,namespace=infrastructure,resources=deployments,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups="",namespace=production,resources=secrets,verbs=get
// +kubebuilder:rbac:groups=coordination.k8s.io,namespace=production,resources=leases,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",namespace=production,resources=events,verbs=get;list;watch;create;update;patch;delete
```

Test Results

Step 1: Fresh generation with controller-gen v0.19.0

```sh
rm config/rbac/role.yaml
make manifests # Uses controller-gen
cat config/rbac/role.yaml
```

Output - controller-gen DOES generate namespace fields:

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: infrastructure # ← Generated by controller-gen
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: production # ← Generated by controller-gen
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
# ... more rules
```

Step 2: Testing with Kustomize's hardcoded `namespace:` field

```yaml
# config/default/kustomization.yaml
namespace: test-override
namePrefix: osiris-
resources:
- ../rbac
```

```sh
kustomize build config/default
```

Result: The hardcoded `namespace: test-override` overrides ALL namespaces, including the explicit ones from the RBAC markers ❌

Step 3: Testing with NamespaceTransformer + `unsetOnly: true`

```yaml
# config/default/namespace-transformer.yaml
apiVersion: builtin
kind: NamespaceTransformer
metadata:
  name: namespace-transformer
  namespace: test-override
setRoleBindingSubjects: none
unsetOnly: true # Only transform if namespace not already set
fieldSpecs:
- path: metadata/namespace
  create: true
```

```yaml
# config/default/kustomization.yaml
namePrefix: osiris-
resources:
- ../rbac
transformers:
- namespace-transformer.yaml
```

```sh
kustomize build config/default
```

Result: SUCCESS ✅ - Explicit namespaces are preserved:

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: osiris-manager-role
  namespace: infrastructure # ← Preserved
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: osiris-manager-role
  namespace: production # ← Preserved
```

Conclusion

Controller-gen already emits the namespace fields correctly; the override happens later, in the kustomize layer that Kubebuilder scaffolds.

Regarding Your Suggestion

You mentioned:

> I think the first step should be opening a PR against controller-tools.

After this testing, I believe controller-tools doesn't need changes. The namespace field is already in the generated YAML.

You're right that only one file is needed per project. However, this PR modifies how Kubebuilder scaffolds new projects - it generates that file automatically during `kubebuilder init`.

Documentation vs Default Scaffolding

I understand your concern about this being needed for <10% of users. I'm happy to pivot to either approach:

- Option A: Documentation only (your preference)
- Option B: Default scaffolding (current PR)

Which approach would you prefer? I'm happy to adjust the PR accordingly.

About the Helm Side Effect

I've noted the Helm chart issue in the PR description and will handle it separately if needed. The core Kustomize fix is independent of Helm concerns.

Thanks again for your guidance! 🙏
1 similar comment
This is really helpful. Thank you so much for the thorough explanations and for sharing your tests and research. I'd like a bit of time to review everything properly, and I'll reply back here once I've gone through it.
Sure, it's totally fine!
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Description
This PR fixes namespace conflicts when using namespace-scoped RBAC markers in multi-namespace scenarios.
Problem
When using RBAC markers with explicit namespaces (see the example below), `controller-gen` correctly generates separate Role resources for each namespace. However, during `make deploy`, Kustomize's global `namespace:` field in `config/default/kustomization.yaml` overrides ALL namespaces, causing both Roles to end up in the same namespace with identical names → ID conflict error.
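The markers in question, copied from the test comment above:

```go
// +kubebuilder:rbac:groups=apps,namespace=infrastructure,resources=deployments,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups="",namespace=production,resources=secrets,verbs=get
```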
Solution

Replaces the hardcoded `namespace:` field with a `NamespaceTransformer` using `unsetOnly: true`. This approach preserves explicit namespaces from RBAC markers and adds the default namespace only to resources that don't set one (replacing the global `namespace:` field).
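For reference, a sketch of the scaffolded transformer, mirroring the config exercised in the test comment above; the namespace value here is an assumption (the scaffold would use the project's default namespace rather than the test value):

```yaml
# config/default/namespace-transformer.yaml
apiVersion: builtin
kind: NamespaceTransformer
metadata:
  name: namespace-transformer
  namespace: project-v4-system # assumption: the project's default namespace
setRoleBindingSubjects: none
unsetOnly: true # only set a namespace where none is present
fieldSpecs:
- path: metadata/namespace
  create: true
```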
Changes

- Created: `pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/kdefault/namespace_transformer.go` (scaffolds `namespace-transformer.yaml` with `unsetOnly: true`)
- Modified: `pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/kdefault/kustomization.go` (removed the hardcoded `namespace:` field; added a `transformers:` section referencing `namespace-transformer.yaml`)
- Modified: `pkg/plugins/common/kustomize/v2/scaffolds/init.go` (registered the `NamespaceTransformer` in the templates slice)

The Helm chart generation was affected by this change: some generated Helm templates changed from `namespace: {{ .Release.Namespace }}` to `namespace: system` (see the review diff above).
Root cause: The `metrics_service.yaml` template has a hardcoded `namespace: system` field. With `unsetOnly: true`, the NamespaceTransformer preserves this existing namespace. The Helm v2-alpha plugin's `substituteNamespace()` function in `helm_templater.go` expects the format `<projectname>-system` (e.g., `project-v4-system`) but encounters just `system`, so it doesn't convert it to `{{ .Release.Namespace }}`.

Question for maintainers: Should the Helm v2-alpha plugin be updated to also replace the `namespace: system` pattern? I can make that change if needed, or if there's a preferred alternative solution. The v1-alpha plugin handles this via `strings.ReplaceAll(contentStr, "namespace: system", "namespace: {{ .Release.Namespace }}")` in `edit.go:442`.
make generate- All testdata and docs regenerated successfullymake lint-fix- No linting issuesmake test-unit- All unit tests passnamespace-transformer.yamlin test projectsBackward Compatibility
Existing projects are unaffected. New projects created with
kubebuilder initwill use the NamespaceTransformer approach. Users who prefer the old behavior can:namespace: {{ .ProjectName }}-systeminconfig/default/kustomization.yamltransformers:sectionnamespace-transformer.yamlFixes #5148