Conversation

@AlirezaPourchali

Description

This PR fixes namespace conflicts when using namespace-scoped RBAC markers in multi-namespace scenarios.

Problem

When using RBAC markers with explicit namespaces:

// +kubebuilder:rbac:groups=apps,namespace=infrastructure,resources=deployments,verbs=get
// +kubebuilder:rbac:groups="",namespace=users,resources=secrets,verbs=get

controller-gen correctly generates separate Role resources for each namespace. During make deploy, however, Kustomize's global namespace: field in config/default/kustomization.yaml overrides all namespaces, so both Roles end up in the same namespace with identical names, producing an ID conflict error.

Solution

Replaces the hardcoded namespace: field with a NamespaceTransformer using unsetOnly: true. This approach:

  • ✅ Preserves explicit namespaces from RBAC markers
  • ✅ Adds default namespace to resources that don't specify one
  • ✅ Maintains backward compatibility (users can revert by uncommenting the old namespace: field)
  • ✅ Follows Kustomize best practices per NamespaceTransformer docs
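For reference, a minimal sketch of what the scaffolded namespace-transformer.yaml looks like (the default namespace value below is a placeholder; the actual scaffold derives it from the project name — compare the test configuration later in this thread):

```yaml
apiVersion: builtin
kind: NamespaceTransformer
metadata:
  name: namespace-transformer
  namespace: project-system   # placeholder: the default namespace to apply
setRoleBindingSubjects: none
unsetOnly: true               # only set a namespace where none is present
fieldSpecs:
- path: metadata/namespace
  create: true
```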

Changes

  1. Created: pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/kdefault/namespace_transformer.go

    • Scaffolds namespace-transformer.yaml with unsetOnly: true
  2. Modified: pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/kdefault/kustomization.go

    • Commented out hardcoded namespace: field
    • Added transformers: section referencing namespace-transformer.yaml
    • Included documentation comments explaining the change and fallback option
  3. Modified: pkg/plugins/common/kustomize/v2/scaffolds/init.go

    • Registered NamespaceTransformer in templates slice

⚠️ Known Side Effect: Helm Charts

The Helm chart generation was affected by this change. Some generated Helm templates changed from:

namespace: {{ .Release.Namespace }}

to:

namespace: system

Root cause: The metrics_service.yaml template has a hardcoded namespace: system field. With unsetOnly: true, the NamespaceTransformer preserves this existing namespace. The Helm v2-alpha plugin's substituteNamespace() function in helm_templater.go expects the format <projectname>-system (e.g., project-v4-system) but encounters just system, so it doesn't convert it to {{ .Release.Namespace }}.

Question for maintainers: Should the Helm v2-alpha plugin be updated to also replace the namespace: system pattern? I can make that change if needed, or if there's a preferred alternative solution. The v1-alpha plugin handles this via strings.ReplaceAll(contentStr, "namespace: system", "namespace: {{ .Release.Namespace }}") in edit.go:442.
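The mismatch can be reproduced with a short standalone Go sketch (the function name and inputs here are illustrative, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// substitute mimics the substitution step described above: only the
// "<projectname>-system" pattern is rewritten to the Helm template.
func substitute(yamlContent, projectName string) string {
	return strings.ReplaceAll(yamlContent, projectName+"-system", "{{ .Release.Namespace }}")
}

func main() {
	// Matches the expected pattern, so it is rewritten.
	fmt.Println(substitute("namespace: project-v4-system", "project-v4"))
	// Plain "system" does not match the pattern, so it is left as-is.
	fmt.Println(substitute("namespace: system", "project-v4"))
}
```

Running this prints `namespace: {{ .Release.Namespace }}` for the first case and an unchanged `namespace: system` for the second, which is exactly the regression seen in the generated charts.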

Testing

  • make generate - All testdata and docs regenerated successfully
  • make lint-fix - No linting issues
  • make test-unit - All unit tests pass
  • Manual verification of generated namespace-transformer.yaml in test projects

Backward Compatibility

Existing projects are unaffected. New projects created with kubebuilder init will use the NamespaceTransformer approach. Users who prefer the old behavior can:

  1. Uncomment namespace: {{ .ProjectName }}-system in config/default/kustomization.yaml
  2. Remove the transformers: section
  3. Delete namespace-transformer.yaml

Fixes #5148

@k8s-ci-robot k8s-ci-robot added the do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. label Dec 27, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: AlirezaPourchali
Once this PR has been reviewed and has the lgtm label, please assign varshaprasad96 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@linux-foundation-easycla

linux-foundation-easycla bot commented Dec 27, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: AlirezaPourchali / name: Alireza (e22404a)

@k8s-ci-robot
Contributor

Welcome @AlirezaPourchali!

It looks like this is your first PR to kubernetes-sigs/kubebuilder 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/kubebuilder has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Dec 27, 2025
@k8s-ci-robot
Contributor

Hi @AlirezaPourchali. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/XL (Denotes a PR that changes 500-999 lines, ignoring generated files.) and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.) labels, and removed the cncf-cla: no label Dec 27, 2025
… RBAC

Replaces hardcoded namespace field in config/default/kustomization.yaml
with a NamespaceTransformer using unsetOnly: true. This preserves
namespaces from RBAC markers while adding default namespace to other
resources.
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. label Dec 27, 2025
Review comment on the generated Helm template metadata, where the namespace changed:

control-plane: controller-manager
name: project-controller-manager-metrics-monitor
-namespace: {{ .Release.Namespace }}
+namespace: system
Member

We cannot do those changes in the Helm Charts.

Author

I've noted the Helm chart issue in the PR description and will handle it separately if needed. The core Kustomize fix is independent of Helm concerns.
I can handle it in the v2-alpha plugin the same way the v1-alpha plugin does.

Thanks again for your guidance! 🙏

Member

Could you please help me understand why these changes were required?

From what I can see, we can’t change this safely — if we merge it as-is, we’ll introduce a bug. The namespace needs to come from .Release.Namespace, since that value is defined at install time (e.g., when running helm install --namespace=<value>).

Author

To be clear: this Helm side effect is a bug that needs to be fixed. The changes in the Helm charts are unintended consequences of the NamespaceTransformer approach. The core Kustomize fix is correct, but it exposed an existing inconsistency in how we handle the metrics service namespace.

Why This Worked Before But Breaks Now

Before (with hardcoded namespace: field):

  1. metrics_service.yaml generated with namespace: system
  2. Kustomize's hardcoded namespace: project-v4-system forcefully overrode ALL namespaces
  3. Final output: namespace: project-v4-system (not system)
  4. Helm plugin saw project-v4-system, matched the pattern, replaced with {{ .Release.Namespace }}

Now (with NamespaceTransformer + unsetOnly: true):

  1. metrics_service.yaml generated with namespace: system
  2. NamespaceTransformer sees namespace already set, so preserves it (because unsetOnly: true)
  3. Final output: namespace: system (unchanged)
  4. Helm plugin sees system, doesn't match project-v4-system pattern, no replacement ❌

The aggressive namespace override was masking an existing inconsistency. The metrics_service.go template should have been using the same pattern as other resources all along, but nobody noticed because the hardcoded namespace: field was forcing everything to the correct value.

Now that we use unsetOnly: true (which correctly preserves explicit namespaces for multi-namespace RBAC), the inconsistency is exposed.

The Root Cause

The issue is in how metrics_service.go template is defined. Currently:

File: pkg/plugins/common/kustomize/v2/scaffolds/internal/templates/config/kdefault/metrics_service.go (line 52)

metadata:
  name: controller-manager-metrics-service
  namespace: system    # ← Hardcoded "system"

The Helm v2-alpha plugin's substituteNamespace() function expects the pattern <projectname>-system (e.g., project-v4-system):

File: pkg/plugins/optional/helm/v2alpha/scaffolds/internal/kustomize/helm_templater.go (line 114)

func (t *HelmTemplater) substituteNamespace(yamlContent string, resource *unstructured.Unstructured) string {
	hardcodedNamespace := t.projectName + "-system"  // ← Expects "project-v4-system"
	namespaceTemplate := "{{ .Release.Namespace }}"
	
	yamlContent = strings.ReplaceAll(yamlContent, hardcodedNamespace, namespaceTemplate)
	return yamlContent
}

When it encounters just system, it doesn't match the expected pattern, so it doesn't get replaced with {{ .Release.Namespace }}.

Two Options to Fix

Option 1: Fix metrics_service.go Template (Recommended)

Change the hardcoded system to use the project name pattern:

const metricsServiceTemplate = `apiVersion: v1
kind: Service
metadata:
  labels:
    control-plane: controller-manager
    app.kubernetes.io/name: {{ .ProjectName }}
    app.kubernetes.io/managed-by: kustomize
  name: controller-manager-metrics-service
  namespace: {{ .ProjectName }}-system    # ← Use project name pattern
spec:
  ...

This way:

  • Kustomize with NamespaceTransformer will preserve {{ .ProjectName }}-system (it's a template variable, not a literal namespace)
  • Helm v2-alpha plugin will match the pattern and convert to {{ .Release.Namespace }}

Option 2: Update Helm v2-alpha Plugin

Add a fallback in substituteNamespace() to also handle plain system:

func (t *HelmTemplater) substituteNamespace(yamlContent string, resource *unstructured.Unstructured) string {
	hardcodedNamespace := t.projectName + "-system"
	namespaceTemplate := "{{ .Release.Namespace }}"
	
	yamlContent = strings.ReplaceAll(yamlContent, hardcodedNamespace, namespaceTemplate)
	
	// Fallback: also replace plain "system" (for metrics service)
	yamlContent = strings.ReplaceAll(yamlContent, "namespace: system", "namespace: "+namespaceTemplate)
	
	return yamlContent
}
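As a sanity check, the Option 2 fallback can be exercised in a standalone sketch (illustrative code, not the plugin itself; a real implementation would probably want stricter matching than a bare substring replace, since "namespace: system" could appear inside longer names):

```go
package main

import (
	"fmt"
	"strings"
)

// fallbackSubstitute mirrors the Option 2 sketch above: rewrite the
// project-prefixed pattern first, then fall back to plain "namespace: system".
func fallbackSubstitute(yamlContent, projectName string) string {
	tmpl := "{{ .Release.Namespace }}"
	yamlContent = strings.ReplaceAll(yamlContent, projectName+"-system", tmpl)
	yamlContent = strings.ReplaceAll(yamlContent, "namespace: system", "namespace: "+tmpl)
	return yamlContent
}

func main() {
	// Both forms now end up templated on .Release.Namespace.
	fmt.Println(fallbackSubstitute("namespace: project-v4-system", "project-v4"))
	fmt.Println(fallbackSubstitute("namespace: system", "project-v4"))
}
```

Both calls print `namespace: {{ .Release.Namespace }}`, which is the behavior the v1-alpha plugin already has via its strings.ReplaceAll in edit.go.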

My Recommendation

I think Option 1 is cleaner - the metrics_service.go template should follow the same pattern as other resources. This would make the Helm plugin's logic simpler and more consistent.

This needs to be fixed before merging this PR. The core NamespaceTransformer changes are sound, but we can't introduce this Helm regression.

Should I:

  1. Update metrics_service.go to use {{ .ProjectName }}-system pattern and regenerate testdata? (My preference)
  2. Update the Helm v2-alpha plugin with the fallback for plain system?
  3. Revert to the hardcoded namespace: field approach until we can coordinate both fixes?

I'm ready to implement whichever solution you prefer. Let me know and I'll get it done! 🙏

Member

@camilamacedo86 camilamacedo86 left a comment

Hi @AlirezaPourchali 👋

Thank you for raising this and for the work on it — much appreciated.

I think the first step should be opening a PR against controller-tools.
See this comment for context:
#5148 (comment)

Once we have the fix in controller-tools, we can then evaluate what is needed in kubebuilder. Having the fix in place will allow us to properly test and make sure the solution really works.

With the controller-tools fix, I don't think we need all of these changes in kubebuilder.
It seems we would only need one additional file:
config/default/namespace-transformer.yaml
(as shown in my example comment).

Also, instead of adding this to the default scaffold, it might be better to document it. This is probably not needed for ~90% of users, and adding a new default file may not help most people.

Possible alternatives:

We could include a concrete example (similar to the CronJob references), with:

  • a small mock project
  • e2e tests
  • Helm validation

to ensure the approach works end-to-end.

What do you think? Does this approach make sense?

Thanks again for the contribution 🙏

@AlirezaPourchali
Author

Hi @camilamacedo86 👋

Thank you for the feedback! I wanted to clarify the situation with controller-tools after testing extensively.

Controller-gen Already Works Correctly ✅

I've verified that controller-gen from controller-tools is NOT the issue - it already correctly generates namespace fields in RBAC manifests. Here's the proof:

Test Setup

I have a controller with namespace-specific RBAC markers:

// +kubebuilder:rbac:groups=apps,namespace=infrastructure,resources=deployments,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups="",namespace=production,resources=secrets,verbs=get
// +kubebuilder:rbac:groups=coordination.k8s.io,namespace=production,resources=leases,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",namespace=production,resources=events,verbs=get;list;watch;create;update;patch;delete

Test Results

Step 1: Fresh generation with controller-gen v0.19.0

rm config/rbac/role.yaml
make manifests  # Uses controller-gen
cat config/rbac/role.yaml

Output - controller-gen DOES generate namespace fields:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: infrastructure    # ← Generated by controller-gen
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: production    # ← Generated by controller-gen
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
# ... more rules

Step 2: Testing with Kustomize's hardcoded namespace: field

# config/default/kustomization.yaml
namespace: test-override
namePrefix: osiris-
resources:
- ../rbac

kustomize build config/default

Result:

Error: namespace transformation produces ID conflict: 
[{"kind":"Role","metadata":{"name":"manager-role","namespace":"test-override"}...},
 {"kind":"Role","metadata":{"name":"manager-role","namespace":"test-override"}...}]

The hardcoded namespace: field overrides both infrastructure and production to test-override, causing both Roles to have identical names in the same namespace → ID conflict.

Step 3: Testing with NamespaceTransformer + unsetOnly: true

# config/default/namespace-transformer.yaml
apiVersion: builtin
kind: NamespaceTransformer
metadata:
  name: namespace-transformer
  namespace: test-override
setRoleBindingSubjects: none
unsetOnly: true    # Only transform if namespace not already set
fieldSpecs:
- path: metadata/namespace
  create: true

# config/default/kustomization.yaml
namePrefix: osiris-
resources:
- ../rbac
transformers:
- namespace-transformer.yaml

kustomize build config/default

Result: SUCCESS ✅ - Explicit namespaces are preserved:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: osiris-manager-role
  namespace: infrastructure    # ← Preserved
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: osiris-manager-role
  namespace: production    # ← Preserved

Conclusion

  1. Controller-tools works correctly - no PR needed there
  2. The issue is in Kubebuilder's Kustomize scaffolding - the hardcoded namespace: field breaks multi-namespace RBAC
  3. This PR fixes the Kubebuilder scaffolding issue by using NamespaceTransformer with unsetOnly: true

Regarding Your Suggestion

You mentioned:

I think the first step should be opening a PR against controller-tools

After this testing, I believe controller-tools doesn't need changes. The namespace field is already in the generated YAML.

It seems we would only need one additional file: config/default/namespace-transformer.yaml

You're right that only one file is needed per project. However, this PR modifies how Kubebuilder scaffolds new projects - it generates that file automatically during kubebuilder init.

Documentation vs Default Scaffolding

I understand your concern about this being needed for <10% of users. I'm happy to pivot to either approach:

Option A: Documentation only (your preference)

  • Add FAQ entry showing how to manually create namespace-transformer.yaml
  • Include example in tutorial/reference
  • Don't change default scaffolding

Option B: Default scaffolding (current PR)

  • All new projects get namespace-transformer.yaml by default
  • Backward compatible (users can revert to hardcoded namespace:)
  • Follows Kustomize best practices

Which approach would you prefer? I'm happy to adjust the PR accordingly.

About the Helm Side Effect

I've noted the Helm chart issue in the PR description and will handle it separately if needed. The core Kustomize fix is independent of Helm concerns.

Thanks again for your guidance! 🙏

1 similar comment

@camilamacedo86
Member

Hi @AlirezaPourchali,

This is really helpful—thank you so much for the thorough explanations and for sharing your tests and research. I’d like a bit of time to review everything properly, and I’ll reply back here once I’ve gone through it.

@AlirezaPourchali
Author

Sure, it's totally fine!
Thanks for the feedback, let me know when to implement the changes.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 6, 2026
@k8s-ci-robot
Contributor

PR needs rebase.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


Labels

cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
needs-ok-to-test - Indicates a PR that requires an org member to verify it is safe to test.
needs-rebase - Indicates a PR cannot be merged because it has merge conflicts with HEAD.
size/XL - Denotes a PR that changes 500-999 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

namespace override while running make deploy

3 participants