Kubernetes access control becomes much easier to understand when you connect three ideas together:
- how a user authenticates to the Kubernetes API server
- how the Kubernetes API is structured into paths, groups, versions, and subresources
- how RBAC grants permissions to users, groups, and service accounts
In this article, we will create two users, alice and bob, using X.509 client certificates. Then we will configure kubectl contexts for both users and assign permissions using Kubernetes RBAC.
The examples are suitable for a local learning cluster such as Minikube. In a production environment, user authentication is often handled by an external identity provider such as OIDC, but client certificates are still very useful for understanding how Kubernetes authentication and authorization work under the hood.
The scenario
We will create two Kubernetes identities:
| User | Certificate subject | Group | Intended access |
|---|---|---|---|
| alice | CN=alice,O=admins | admins | Admin-style pod management across namespaces |
| bob | CN=bob,O=dev | dev | Read-only access to pods and ConfigMaps in the dev namespace |
Kubernetes does not have a normal built-in User object that you create with kubectl apply. Instead, a user identity is established by the authentication method. With X.509 client certificates, Kubernetes reads the username from the certificate CN field and group memberships from the certificate O fields.
Step 1: Generate private keys
First, create private keys for both users:
openssl genrsa -out alice.key 2048
openssl genrsa -out bob.key 2048
These files are sensitive. They are the private keys that alice and bob will use to authenticate to the Kubernetes API server.
Do not commit private keys to Git.
Step 2: Create certificate signing requests
Next, create certificate signing requests with the correct subject information.
openssl req -new \
-key alice.key \
-out alice.csr \
-subj "/CN=alice/O=admins"
openssl req -new \
-key bob.key \
-out bob.csr \
-subj "/CN=bob/O=dev"
The important part is the -subj value:
| Subject part | Meaning in Kubernetes |
|---|---|
| CN=alice | Username becomes alice |
| O=admins | User belongs to the admins group |
| CN=bob | Username becomes bob |
| O=dev | User belongs to the dev group |
Group names are case-sensitive. If the certificate contains O=admins, the RBAC binding must also reference admins.
Step 3: Create Kubernetes CertificateSigningRequest objects
Kubernetes can sign client certificates through the certificates.k8s.io API. To create a CertificateSigningRequest, the raw CSR must be base64 encoded and placed in spec.request.
Create base64 strings for both CSRs:
ALICE_CSR=$(base64 < alice.csr | tr -d '\n')
BOB_CSR=$(base64 < bob.csr | tr -d '\n')
Now create a file called csr.yaml:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  request: <paste-alice-base64-csr-here>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
---
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: bob
spec:
  request: <paste-bob-base64-csr-here>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
Replace the placeholders with the values from ALICE_CSR and BOB_CSR.
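Pasting long base64 strings by hand is error-prone. As an alternative sketch, a shell heredoc can substitute the encoded CSRs into the manifest directly; the snippet repeats the key and CSR generation from Steps 1 and 2 so it runs on its own:

```shell
# Generate keys and CSRs (same commands as Steps 1 and 2)
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=admins"
openssl genrsa -out bob.key 2048
openssl req -new -key bob.key -out bob.csr -subj "/CN=bob/O=dev"

# Base64-encode the raw CSRs for spec.request
ALICE_CSR=$(base64 < alice.csr | tr -d '\n')
BOB_CSR=$(base64 < bob.csr | tr -d '\n')

# Write csr.yaml with the encoded requests substituted in
cat > csr.yaml <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  request: ${ALICE_CSR}
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
---
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: bob
spec:
  request: ${BOB_CSR}
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
EOF
```

The heredoc expands the two variables at write time, so the resulting csr.yaml is ready to apply without any manual editing.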
The important field is:
signerName: kubernetes.io/kube-apiserver-client
This signer is intended for client certificates that authenticate to the Kubernetes API server. These CSRs are not automatically approved in the normal case, which is good from a security perspective. A certificate can authenticate as the identity requested in the CSR, so approvals should be deliberate.
Apply the file:
kubectl apply -f csr.yaml
kubectl get csr
You should see both requests in a Pending state.
Step 4: Approve the certificate requests
Approve both CSRs:
kubectl certificate approve alice bob
kubectl get csr
After approval and signing, the requests should show something like:
NAME AGE SIGNERNAME REQUESTOR CONDITION
alice 10s kubernetes.io/kube-apiserver-client ... Approved,Issued
bob 10s kubernetes.io/kube-apiserver-client ... Approved,Issued
Step 5: Extract the signed certificates
The signed certificate is stored in status.certificate.
kubectl get csr alice \
-o jsonpath='{.status.certificate}' \
| base64 --decode > alice.crt
kubectl get csr bob \
-o jsonpath='{.status.certificate}' \
| base64 --decode > bob.crt
On macOS, if base64 --decode does not work, use:
base64 -D
Verify the certificate subject:
openssl x509 -in alice.crt -noout -subject -issuer
openssl x509 -in bob.crt -noout -subject -issuer
You should see that alice has CN=alice and O=admins, while bob has CN=bob and O=dev.
Step 6: Configure kubeconfig users and contexts
Now configure kubectl so that it knows how to authenticate as alice and bob.
The default kubeconfig file is usually located at:
~/.kube/config
You can also use another file by setting the KUBECONFIG environment variable:
export KUBECONFIG=/path/to/another/config
For this lab, we will keep using the current kubeconfig file.
Create contexts for both users. This example assumes the cluster name is minikube:
kubectl config set-context alice \
--cluster=minikube \
--user=alice
kubectl config set-context bob \
--cluster=minikube \
--user=bob
Then set the credentials:
kubectl config set-credentials alice \
--client-certificate="$(pwd)/alice.crt" \
--client-key="$(pwd)/alice.key"
kubectl config set-credentials bob \
--client-certificate="$(pwd)/bob.crt" \
--client-key="$(pwd)/bob.key"
You can inspect the kubeconfig:
kubectl config view
You can also embed the certificate and key data directly into the kubeconfig file:
kubectl config set-credentials alice \
--client-certificate="$(pwd)/alice.crt" \
--client-key="$(pwd)/alice.key" \
--embed-certs=true
Embedding certificates can be useful when sharing a self-contained kubeconfig, but referencing files keeps the kubeconfig smaller and makes key rotation easier to manage.
Step 7: Test authentication before authorization
Switch to the alice context:
kubectl config use-context alice
kubectl config current-context
Now try to list pods:
kubectl get pods
You should receive a forbidden error similar to this:
Error from server (Forbidden): pods is forbidden:
User "alice" cannot list resource "pods" in API group "" in the namespace "default"
This is expected.
Authentication worked because Kubernetes recognized the user as alice. Authorization failed because we have not granted alice any RBAC permissions yet.
This distinction is critical:
| Step | Question | Result |
|---|---|---|
| Authentication | Who are you? | alice |
| Authorization | What are you allowed to do? | Nothing yet |
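You can observe the two steps separately from the command line. On kubectl v1.27 or newer, auth whoami prints the authenticated identity, and auth can-i asks the authorization layer a yes/no question without actually performing the operation. A sketch, assuming the alice context is active and a cluster is reachable:

```shell
# Step 1, authentication: who does the API server think I am?
# (requires kubectl v1.27 or newer)
kubectl auth whoami

# Step 2, authorization: would listing pods be allowed?
# Prints "yes" or "no" without listing anything.
kubectl auth can-i list pods -n default
```

At this point in the lab, whoami should report the username alice with the admins group, while can-i should answer no.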
Understanding the Kubernetes API
Before writing RBAC rules, it helps to understand how Kubernetes organizes its API.
Every kubectl command eventually becomes an HTTP request to the Kubernetes API server. For example:
kubectl get pods -n default --v=8
The --v=8 flag prints verbose debug output, including the underlying HTTP request.
Kubernetes API resources are grouped by API path, API group, version, resource type, namespace, and name.
Core API group vs named API groups
Kubernetes has two broad API path patterns.
| Type | API path pattern | Manifest apiVersion example | Resources |
|---|---|---|---|
| Core API group | /api/v1 | v1 | Pods, Services, ConfigMaps, Secrets, Namespaces |
| Named API groups | /apis/<group>/<version> | apps/v1, rbac.authorization.k8s.io/v1 | Deployments, ReplicaSets, Roles, ClusterRoles |
A Pod uses the core API group:
apiVersion: v1
kind: Pod
A Deployment uses the apps API group:
apiVersion: apps/v1
kind: Deployment
A Role uses the rbac.authorization.k8s.io API group:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
This is why RBAC rules need an apiGroups field. For core resources such as Pods and ConfigMaps, the API group is an empty string:
apiGroups: [""]
For RBAC resources, the API group is:
apiGroups: ["rbac.authorization.k8s.io"]
API call examples
Here are some common kubectl commands and the API paths they map to conceptually.
| kubectl command | API request |
|---|---|
| kubectl get pods -n default | GET /api/v1/namespaces/default/pods |
| kubectl get pods -A | GET /api/v1/pods |
| kubectl get secrets -n default | GET /api/v1/namespaces/default/secrets |
| kubectl get deployments -A | GET /apis/apps/v1/deployments |
| kubectl get deployments -n default | GET /apis/apps/v1/namespaces/default/deployments |
| kubectl get roles -n default | GET /apis/rbac.authorization.k8s.io/v1/namespaces/default/roles |
The namespace in the path matters. Permission to list pods in dev does not automatically mean permission to list pods in prod, and permission to list pods in one namespace does not automatically mean permission to list pods across all namespaces.
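You can confirm these path mappings yourself: kubectl get --raw sends a GET request to an exact API path and prints the raw JSON response, bypassing kubectl's usual resource handling. A sketch, assuming a cluster is reachable:

```shell
# The same request kubectl get pods -n default performs under the hood
kubectl get --raw /api/v1/namespaces/default/pods

# Deployments live under the apps group path
kubectl get --raw /apis/apps/v1/namespaces/default/deployments
```

Because --raw uses the literal path, it is a convenient way to check exactly which endpoint an RBAC rule needs to cover.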
Subresources matter for RBAC
Some Kubernetes resources have subresources. A subresource is a nested API path under the main resource.
For Pods, common subresources include:
| Subresource | API endpoint |
|---|---|
| pods/log | /api/v1/namespaces/{namespace}/pods/{name}/log |
| pods/exec | /api/v1/namespaces/{namespace}/pods/{name}/exec |
| pods/attach | /api/v1/namespaces/{namespace}/pods/{name}/attach |
For Deployments, common subresources include:
| Subresource | API endpoint |
|---|---|
| deployments/scale | /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale |
| deployments/status | /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status |
This is why granting access to pods does not automatically grant access to pods/log or pods/exec.
For example, a user might be allowed to list Pods but still be blocked from reading logs:
User "alice" cannot get resource "pods/log" in API group "" in the namespace "dev"
Or from executing a shell inside a container:
User "alice" cannot create resource "pods/exec" in API group "" in the namespace "dev"
Subresources give you finer control. This is especially useful when logs may contain sensitive information or when exec access would effectively allow direct access inside running workloads.
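kubectl auth can-i understands subresources through its --subresource flag, so you can check these permissions before granting or while debugging them. A sketch, assuming a cluster is reachable:

```shell
# Can the current user read pod logs in dev?
kubectl auth can-i get pods --subresource=log -n dev

# Can the current user open an exec session in dev?
# Starting exec is an HTTP POST, so the RBAC verb is create.
kubectl auth can-i create pods --subresource=exec -n dev
```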
Exploring API resources with kubectl
You can list the resources available in your cluster:
kubectl api-resources
Show more information, including verbs:
kubectl api-resources -o wide
Show only namespaced resources:
kubectl api-resources --namespaced=true
Show only non-namespaced resources:
kubectl api-resources --namespaced=false
Filter by API group:
kubectl api-resources --api-group=storage.k8s.io
Filter by supported verbs:
kubectl api-resources --verbs=list,get
This command is very useful when you are writing RBAC policies because it shows the exact resource names, API groups, whether the resource is namespaced, and which verbs are supported.
Creating test namespaces and Pods
Switch back to an admin context before creating namespaces and RBAC objects. In Minikube, this is usually the minikube context:
kubectl config use-context minikube
Create namespaces.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
Apply it:
kubectl apply -f namespaces.yaml
Create one NGINX Pod in each namespace:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.27.0
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: prod
spec:
  containers:
  - name: nginx
    image: nginx:1.27.0
    ports:
    - containerPort: 80
Save this as pods.yaml and apply it:
kubectl apply -f pods.yaml
kubectl get pods -A | grep nginx
Grant Bob read-only access in the dev namespace
Now create a Role that allows reading Pods and ConfigMaps only in the dev namespace.
Create dev-reader-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources:
  - pods
  - configmaps
  verbs:
  - get
  - list
Apply it:
kubectl apply -f dev-reader-role.yaml
A Role only defines permissions. It does not assign those permissions to anyone. For that, we need a RoleBinding.
Create dev-reader-rolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-reader
  namespace: dev
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-reader
  apiGroup: rbac.authorization.k8s.io
Apply it:
kubectl apply -f dev-reader-rolebinding.yaml
Now test as Bob:
kubectl config use-context bob
kubectl get pods -n dev
This should work.
Try the same in prod:
kubectl get pods -n prod
This should fail because Bob only has access in the dev namespace.
Try listing Pods across all namespaces:
kubectl get pods -A
This should also fail because Bob does not have cluster-wide list permission.
Try creating a Pod:
kubectl run test-busybox \
--image=busybox \
-n dev \
-- sleep 3600
This should fail because Bob only has get and list, not create.
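Instead of switching contexts for every check, an admin can also dry-run Bob's permissions with impersonation, using kubectl auth can-i --as from the admin context. A sketch, assuming the admin context has impersonation rights (cluster-admin does):

```shell
# Should print "yes": dev-reader grants get and list on pods in dev
kubectl auth can-i list pods -n dev --as bob

# Should print "no": no binding exists for bob in prod
kubectl auth can-i list pods -n prod --as bob

# Should print "no": dev-reader does not include the create verb
kubectl auth can-i create pods -n dev --as bob
```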
Role vs ClusterRole
A Role is namespaced. It grants permissions inside one namespace.
A ClusterRole is cluster-scoped. It can be used to grant permissions across all namespaces or to grant permissions for cluster-scoped resources such as Nodes and Namespaces.
| Object | Scope | Typical use |
|---|---|---|
| Role | Namespace | Read Pods in dev |
| RoleBinding | Namespace | Assign a Role to a user, group, or service account in that namespace |
| ClusterRole | Cluster | Define reusable or cluster-wide permissions |
| ClusterRoleBinding | Cluster | Assign a ClusterRole across the whole cluster |
A RoleBinding can also reference a ClusterRole, but the permissions are still limited to the RoleBinding namespace.
A ClusterRoleBinding grants the referenced ClusterRole across the cluster.
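For example, this sketch binds the built-in view ClusterRole to bob, but only inside the dev namespace, because the binding itself is namespaced. The binding name bob-view-dev is just an illustrative choice:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bob-view-dev        # hypothetical name for this example
  namespace: dev            # permissions apply only in this namespace
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole         # reusing a cluster-scoped role definition
  name: view                # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

This pattern lets you define a permission set once as a ClusterRole and hand it out namespace by namespace.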
Grant Alice admin-style pod access across namespaces
Alice belongs to the admins group because her certificate subject includes:
O=admins
We can bind a ClusterRole to that group.
Create pod-admin-clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-admin
rules:
- apiGroups: [""]
  resources:
  - pods
  - pods/log
  - pods/exec
  - pods/attach
  verbs:
  - "*"
This grants all verbs on Pods and selected Pod subresources.
For a learning lab, verbs: ["*"] is convenient. In production, prefer the smallest set of verbs required by the job. For example, logs usually need get, while exec and attach commonly need create.
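A least-privilege variant of the rules might look like this sketch, splitting the pod resource from its subresources so each gets only the verbs it needs:

```yaml
rules:
# Full lifecycle on pods themselves
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get", "list", "watch", "create", "delete"]
# Reading logs is an HTTP GET on the log subresource
- apiGroups: [""]
  resources:
  - pods/log
  verbs: ["get"]
# exec and attach sessions are started with POST, so they need create
- apiGroups: [""]
  resources:
  - pods/exec
  - pods/attach
  verbs: ["create"]
```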
Create pod-admin-clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-admin
subjects:
- kind: Group
  name: admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-admin
  apiGroup: rbac.authorization.k8s.io
Apply both files as an admin user:
kubectl config use-context minikube
kubectl apply -f pod-admin-clusterrole.yaml
kubectl apply -f pod-admin-clusterrolebinding.yaml
Test as Alice:
kubectl config use-context alice
kubectl get pods -n dev
kubectl get pods -n prod
kubectl delete pod nginx -n dev
kubectl apply -f pods.yaml
Alice should now be able to manage Pods across namespaces because her admins group is bound through a ClusterRoleBinding.
Testing Pod subresources
Now test Pod logs:
kubectl logs nginx -n dev
If pods/log was not included in the ClusterRole, this command would fail even though Alice has access to pods.
You can inspect the underlying API request:
kubectl logs nginx -n dev --v=8
You will see a request path that ends with /log.
Now test exec:
kubectl exec -n dev -it nginx -- sh
If pods/exec and pods/attach were not included, this operation could fail with a forbidden error. The important lesson is that subresources must be explicitly included in RBAC rules when you want to allow access to those nested API operations.
Reading the Kubernetes API reference
When you are unsure which apiVersion, kind, fields, or operations a resource supports, check the Kubernetes API reference.
For example, for a Deployment, the API reference shows:
- the API group and version, such as apps/v1
- the kind, such as Deployment
- required and optional fields under spec
- supported operations and HTTP endpoints
- subresources such as scale and status
This is useful when writing manifests and even more useful when writing RBAC policies.
Common troubleshooting
| Problem | Likely cause | Fix |
|---|---|---|
| User "alice" cannot list resource "pods" in API group "" | Authentication worked, but RBAC does not allow the operation | Create a Role/ClusterRole and a binding |
| CSR stays Pending | It was created but not approved | Run kubectl certificate approve <name> as an admin |
| CSR is approved but certificate is empty | Signing controller has not issued the certificate or the signer is wrong | Check signerName, controller manager, and CSR events |
| kubectl logs is forbidden | User has pods access but not pods/log | Add pods/log to RBAC resources |
| kubectl exec is forbidden | User lacks pods/exec and possibly pods/attach | Add the required subresources |
| kubectl get pods -A is forbidden | User has namespaced access only | Use a ClusterRoleBinding if cluster-wide access is intended |
| Bob can list Pods in dev but not prod | The RoleBinding exists only in dev | Create another RoleBinding in prod or use a cluster-wide binding |
Security notes
Client certificates are powerful because they directly authenticate as a Kubernetes identity. Keep these points in mind:
- Keep private keys secure.
- Use short certificate lifetimes where possible.
- Be careful when approving CSRs because the certificate can authenticate as the requested identity.
- Avoid wildcard permissions in production unless there is a strong reason.
- Prefer external identity providers and centralized access management for real user access in production clusters.
- Use service accounts for workloads running inside the cluster.
Conclusion
Creating users with client certificates is a great way to understand how Kubernetes separates authentication from authorization.
The certificate tells Kubernetes who the user is:
CN=alice,O=admins
RBAC tells Kubernetes what that identity can do:
subjects:
- kind: Group
  name: admins
The Kubernetes API structure explains why RBAC rules need apiGroups, resources, verbs, and sometimes subresources such as pods/log and pods/exec.
Once you understand these pieces together, RBAC becomes much less mysterious. You can read an error message like this:
User "bob" cannot list resource "pods" in API group "" in the namespace "prod"
And immediately know what Kubernetes is telling you:
- user: bob
- action: list
- resource: pods
- API group: core API group, represented by ""
- namespace: prod
- result: no matching RBAC permission
That is the foundation for building safe, precise, and maintainable Kubernetes access policies.
Sources and further reading
- Kubernetes API concepts - kubernetes.io/docs/reference/using-api/api-concepts/
- The Kubernetes API - kubernetes.io/docs/concepts/overview/kubernetes-api/
- Authenticating with X.509 client certificates - kubernetes.io/docs/reference/access-authn-authz/authentication/
- Certificates and CertificateSigningRequests - kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
- Using RBAC Authorization - kubernetes.io/docs/reference/access-authn-authz/rbac/
- kubectl config set-credentials - kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/
- kubectl api-resources - kubernetes.io/docs/reference/kubectl/generated/kubectl_api-resources/
- kubectl certificate approve - kubernetes.io/docs/reference/kubectl/generated/kubectl_certificate/kubectl_certificate_approve/