When a human interacts with a Kubernetes cluster, they usually authenticate with kubectl and a kubeconfig file, a client certificate, an OIDC login, or another authentication mechanism managed outside the Kubernetes API itself.
Pods are different.
A process running inside a pod can also talk to the Kubernetes API. For example, an application may need to read ConfigMaps, watch Pods, inspect Jobs, update custom resources, or coordinate work with other workloads. But Kubernetes does not allow that automatically just because the process is already running inside the cluster.
The pod still needs an identity.
That identity is normally provided by a ServiceAccount.
In this article, we will look at the difference between human users and service accounts, how service accounts are attached to pods, how the service account token is mounted inside the container, and how RBAC is used to grant only the permissions that the workload needs.
## Human Users vs Service Accounts
Kubernetes distinguishes between two important identity types:
| Identity type | Intended for | Managed as Kubernetes API object? | Scope |
|---|---|---|---|
| User account | Humans, administrators, developers, CI users authenticated as external users | No built-in User object | Global cluster identity |
| ServiceAccount | Pods, workloads, controllers, automation, in-cluster processes | Yes, ServiceAccount object | Namespaced |
This distinction is important.
Kubernetes recognizes authenticated users, but it does not provide a built-in user store. Human identities usually come from external authentication systems such as client certificates, OpenID Connect, authenticating proxies, or cloud-provider integrations.
Service accounts are different. A service account is a Kubernetes API object. You can create it, delete it, bind permissions to it, and assign it to pods.
A useful mental model is:
- Use user accounts when humans interact with the cluster.
- Use service accounts when applications, controllers, jobs, or processes running in pods interact with the cluster.
## Why Pods Need Service Accounts
A pod does not automatically get permission to do everything in the cluster.
For example, imagine a container tries to call the Kubernetes API and list pods in the dev namespace. Kubernetes will evaluate the request in two major steps:
- Authentication: Who is making the request?
- Authorization: Is that identity allowed to perform this action?
A service account answers the first question. RBAC usually answers the second question.
If the pod authenticates as system:serviceaccount:dev:pod-inspector, Kubernetes now knows the identity. But that identity still needs permissions. Without a Role, ClusterRole, RoleBinding, or ClusterRoleBinding, the request will be rejected.
That is a strong security boundary. Running inside the cluster does not mean a workload can freely read Secrets, list Pods, or scale Deployments.
## Default Service Accounts
Every namespace gets a service account named default.
You can see it with:

```bash
kubectl get serviceaccounts -n default
kubectl get serviceaccounts -n dev
kubectl get serviceaccounts --all-namespaces
```
If you create a pod and do not specify spec.serviceAccountName, Kubernetes assigns the default service account from the pod’s namespace.
You can verify this with:

```bash
kubectl get pod <pod-name> -n dev -o yaml | grep serviceAccountName
```

The output will usually show:

```yaml
serviceAccountName: default
```
This does not mean the pod has useful application permissions. In RBAC-enabled clusters, the default service account has no special application-level permissions by default, apart from basic API discovery permissions available to authenticated identities.
For production workloads, it is usually better to create a dedicated service account instead of relying on the namespace’s default service account.
## Service Account Names Are Namespaced
Human user names are treated as global cluster identities. If a user named bob authenticates to the cluster, that identity is the same regardless of the namespace where a request is made.
Service accounts are namespaced. That means these two identities are different:

```
system:serviceaccount:dev:pod-inspector
system:serviceaccount:prod:pod-inspector
```
They have the same service account name, but they belong to different namespaces. Because of that, they can have completely different permissions.
This is useful because you can follow the principle of least privilege. A pod in dev can be allowed to list pods in dev, while a similarly named service account in prod can have a stricter or completely different set of permissions.
## Creating a Service Account
Let’s create a service account called pod-inspector in the dev namespace.
First, create the namespace if it does not exist:

```bash
kubectl create namespace dev
```
Now create the service account manifest.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-inspector
  namespace: dev
```

Save it as service-account/pod-inspector.yaml and apply it:

```bash
kubectl apply -f service-account/pod-inspector.yaml
```
Check that it exists:

```bash
kubectl get serviceaccounts -n dev
```

You should see something similar to:

```
NAME            SECRETS   AGE
default         0         10m
pod-inspector   0         10s
```
Do not be surprised if SECRETS is 0. Modern Kubernetes versions do not automatically create long-lived Secret-based tokens for every service account. Instead, pods usually receive short-lived service account tokens through projected volumes.
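These short-lived tokens come from the TokenRequest API and are delivered through a projected volume. A pod can also request its own projected token explicitly; the sketch below is illustrative (the pod name, mount path, audience, and expiry are assumptions, not values from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo          # illustrative name
  namespace: dev
spec:
  serviceAccountName: pod-inspector
  automountServiceAccountToken: false   # opt out of the default projection
  containers:
  - name: app
    image: curlimages/curl:8.10.1
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600       # short-lived; the kubelet rotates it
          audience: https://kubernetes.default.svc
```

With this setup, the application reads /var/run/secrets/tokens/token instead of the default mount path, and the token expires and rotates automatically.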
## Attaching a Service Account to a Pod
A service account does nothing by itself until a workload uses it.
Here is a simple pod that uses the pod-inspector service account:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-curl
  namespace: dev
spec:
  serviceAccountName: pod-inspector
  containers:
  - name: curl
    image: curlimages/curl:8.10.1
    command: ["sleep", "3600"]
```
Apply it:

```bash
kubectl apply -f pod.yaml
```

Then confirm the service account assignment:

```bash
kubectl describe pod alpine-curl -n dev
```

You should see:

```
Service Account:  pod-inspector
```
At this point, the pod has an identity, but it still does not have permission to list pods.
## Accessing the Service Account Token Inside the Pod
Kubernetes mounts service account credentials into the container filesystem. The common path is:

```
/var/run/secrets/kubernetes.io/serviceaccount/
```
Inside that directory, you typically find:
| File | Purpose |
|---|---|
| `token` | Bearer token used to authenticate to the Kubernetes API |
| `ca.crt` | CA certificate bundle used to verify the API server certificate |
| `namespace` | Namespace where the pod is running |
Open a shell inside the pod:

```bash
kubectl exec -it alpine-curl -n dev -- sh
```

Inside the container, inspect the mounted files:

```bash
ls /var/run/secrets/kubernetes.io/serviceaccount/
```

You can read the namespace:

```bash
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
```

And you can store the credentials in variables:

```bash
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
APISERVER=https://kubernetes.default.svc
```

Now try to call the Kubernetes API:

```bash
curl --cacert ${CACERT} \
  --header "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods
```
If no RBAC permissions have been granted yet, the response should be forbidden. It will look conceptually like this:

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:dev:pod-inspector\" cannot list resource \"pods\" in API group \"\" in the namespace \"dev\"",
  "reason": "Forbidden",
  "code": 403
}
```
This is expected. The service account can authenticate, but it is not authorized to list pods.
## Granting Read-Only Pod Access with RBAC
Now we can create a Role that allows getting, listing, and watching pods in the dev namespace.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

Save this as roles/pod-reader-role.yaml and apply it:

```bash
kubectl apply -f roles/pod-reader-role.yaml
```
The Role defines the permissions, but it does not grant them to anyone yet. To grant those permissions to the service account, create a RoleBinding.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: dev
subjects:
- kind: ServiceAccount
  name: pod-inspector
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Save it as roles/pod-reader-rolebinding.yaml and apply it:

```bash
kubectl apply -f roles/pod-reader-rolebinding.yaml
```

Now describe the RoleBinding:

```bash
kubectl describe rolebinding pod-reader -n dev
```
You should see the service account listed as a subject.
## Testing the Permission
Before going back into the pod, you can test the permission with kubectl auth can-i.

```bash
kubectl auth can-i list pods \
  --as=system:serviceaccount:dev:pod-inspector \
  -n dev
```

Expected output:

```
yes
```

You can also test something that should still be denied:

```bash
kubectl auth can-i delete pods \
  --as=system:serviceaccount:dev:pod-inspector \
  -n dev
```

Expected output:

```
no
```
This is exactly what we want. The service account can read pods, but it cannot delete them.
Now execute the same curl command inside the pod again:

```bash
curl --cacert ${CACERT} \
  --header "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods
```
This time, the API should return a JSON response containing the pods in the namespace.
## RoleBinding vs ClusterRoleBinding
The difference between RoleBinding and ClusterRoleBinding is crucial.
| Binding type | Permission scope | Typical use case |
|---|---|---|
| RoleBinding | Grants permissions within a namespace | A pod in dev can list pods in dev |
| ClusterRoleBinding | Grants permissions cluster-wide | A controller needs to watch resources across all namespaces |
A RoleBinding can bind either a Role or a ClusterRole, but the permissions are still limited to the namespace of the RoleBinding.
A ClusterRoleBinding grants the referenced ClusterRole across the cluster. Use it carefully because it can easily create overly broad permissions.
For application pods, start with a namespaced Role and RoleBinding. Move to ClusterRole or ClusterRoleBinding only when the workload truly needs cluster-wide access.
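One common middle ground is a RoleBinding that references a ClusterRole: the rules are defined once at cluster scope, but the grant still applies only in the binding's namespace. A sketch using the built-in read-only `view` ClusterRole (the binding name is illustrative, not from this article):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-inspector-view   # illustrative name
  namespace: dev
subjects:
- kind: ServiceAccount
  name: pod-inspector
  namespace: dev
roleRef:
  kind: ClusterRole
  name: view                 # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

Because the subject and binding live in dev, the service account gets read access in dev only, even though `view` is a cluster-scoped object.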
## Service Accounts and Kubernetes Controllers
Service accounts are not only for your own applications.
Many Kubernetes controllers and add-ons also need identities. For example, a controller that manages Deployments, ReplicaSets, Jobs, or custom resources needs permissions to watch and update those resources.
That is why you often see service accounts in system namespaces such as kube-system, combined with ClusterRoles and ClusterRoleBindings.
The pattern is the same:
- A controller runs in a pod.
- The pod uses a service account.
- The service account authenticates to the API server.
- RBAC authorizes the controller to perform specific actions.
The only difference is that controllers often need broader permissions than a simple application pod.
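As a sketch, a cluster-wide grant for such a controller typically pairs a ClusterRole with a ClusterRoleBinding. The names and the kube-system namespace below are illustrative assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-watcher          # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-watcher
subjects:
- kind: ServiceAccount
  name: pod-watcher          # illustrative controller service account
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: pod-watcher
  apiGroup: rbac.authorization.k8s.io
```

The shape is identical to the namespaced example earlier; only the scope of the binding changes.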
## Disabling Automatic Token Mounting
Not every pod needs to talk to the Kubernetes API.
If a workload does not need API access, you can disable automatic service account token mounting. This reduces unnecessary credential exposure inside containers.
You can disable it on the service account:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access
  namespace: dev
automountServiceAccountToken: false
```
Or directly on the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  automountServiceAccountToken: false
  containers:
  - name: nginx
    image: nginx:1.27
```
This is a good default for workloads that never call the Kubernetes API.
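The pod-level field takes precedence over the service account setting, so if most workloads use a token-free service account, a single pod that does need API access can opt back in. This pod definition is an illustrative sketch (the pod name is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: needs-api            # illustrative name
  namespace: dev
spec:
  serviceAccountName: no-api-access
  automountServiceAccountToken: true   # pod-level setting overrides the SA default
  containers:
  - name: app
    image: curlimages/curl:8.10.1
    command: ["sleep", "3600"]
```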
## Manual Tokens and `kubectl create token`
Sometimes you may need to create a service account token manually, for example for an external automation system that needs to authenticate to the Kubernetes API.
Modern Kubernetes supports:

```bash
kubectl create token pod-inspector -n dev
```

You can also request a custom duration:

```bash
kubectl create token pod-inspector -n dev --duration=10m
```
This is preferable to relying on old, long-lived Secret-based service account tokens. Long-lived tokens increase risk because they do not rotate automatically and may remain valid for too long.
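A token produced this way can be handed to the external system in a kubeconfig. The fragment below is a sketch: the cluster name, server URL, CA data, and user name are placeholders you would replace with your cluster's values:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: demo                          # placeholder cluster name
  cluster:
    server: https://<api-server-url>  # placeholder: your API server endpoint
    certificate-authority-data: <base64-encoded-ca>  # placeholder: cluster CA bundle
users:
- name: pod-inspector
  user:
    token: <output of kubectl create token>  # placeholder: the short-lived token
contexts:
- name: demo
  context:
    cluster: demo
    user: pod-inspector
current-context: demo
```

Because the token expires, the external system needs a way to refresh it, which is exactly the property that makes this safer than a long-lived Secret-based token.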
## Security Best Practices
Service accounts are powerful because they give workloads a real API identity. That also means they must be treated carefully.
Use these practices:
- Create dedicated service accounts per workload.
- Avoid using the namespace `default` service account for real applications.
- Grant the minimum permissions required.
- Prefer `Role` and `RoleBinding` before using cluster-wide permissions.
- Avoid granting broad verbs such as `*`.
- Avoid granting broad resources such as `*`.
- Disable `automountServiceAccountToken` for pods that do not need Kubernetes API access.
- Be careful with Secret read permissions because Secrets can contain credentials.
- Prefer short-lived tokens over manually created long-lived token Secrets.
- Regularly review RoleBindings and ClusterRoleBindings.
A simple rule works well: if the application only needs to list pods in one namespace, do not give it access to deployments, secrets, or all namespaces.
## Complete Example
Here is the full minimal setup.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-inspector
  namespace: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-curl
  namespace: dev
spec:
  serviceAccountName: pod-inspector
  containers:
  - name: curl
    image: curlimages/curl:8.10.1
    command: ["sleep", "3600"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: dev
subjects:
- kind: ServiceAccount
  name: pod-inspector
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
Apply it:

```bash
kubectl apply -f service-account-demo.yaml
```

Test it:

```bash
kubectl auth can-i list pods \
  --as=system:serviceaccount:dev:pod-inspector \
  -n dev
```

Open a shell:

```bash
kubectl exec -it alpine-curl -n dev -- sh
```

Call the API from inside the container:

```bash
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
APISERVER=https://kubernetes.default.svc

curl --cacert ${CACERT} \
  --header "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods
```
You should now receive pod data from the Kubernetes API.
## Conclusion
Service accounts are the Kubernetes-native way to give workloads an identity.
Human users and service accounts are not the same thing. Users are intended for people and are usually managed outside Kubernetes. Service accounts are Kubernetes API objects, scoped to namespaces, and designed for pods, controllers, jobs, and automation.
The most important point is that authentication is not the same as authorization. Mounting a service account token into a pod only gives the pod an identity. RBAC decides what that identity can actually do.
For secure clusters, create dedicated service accounts, bind only the required permissions, and avoid unnecessary token mounting. This keeps your workloads useful without making them more privileged than they need to be.
## References
- [Kubernetes: Service Accounts](https://kubernetes.io/docs/concepts/security/service-accounts/)
- [Kubernetes: Managing Service Accounts](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/)
- [Kubernetes: Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
- [Kubernetes: Accessing the Kubernetes API from a Pod](https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/)
- [Kubernetes: Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
- [Kubernetes: Authenticating](https://kubernetes.io/docs/reference/access-authn-authz/authentication/)
- [Kubernetes: kubectl create token](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/)