Kubernetes makes pod-to-pod communication easy by default. When you deploy a few pods into a cluster, they can usually communicate with each other immediately, even if they run in different namespaces. That default behavior is convenient while learning Kubernetes, but it is not always safe.
In a production cluster, you usually do not want every pod to be able to connect to every other pod. A frontend should not be able to connect directly to a database unless that is explicitly required. A random debug pod should not be able to call internal APIs. A compromised workload should not get unrestricted access to the rest of the cluster network.
This is where Kubernetes NetworkPolicy becomes useful.
A NetworkPolicy lets you define allowed network traffic for selected pods. You can control inbound traffic, outbound traffic, namespace-based access, pod-label-based access, ports, and external IP ranges. In practice, network policies help you build a more explicit and secure communication model inside your cluster.
In this article, we will explore network policies locally with Minikube and Calico.
What We Will Cover
We will build the lab step by step:
- Create a new Minikube cluster with Calico.
- Deploy a simple color-api application.
- Expose the application with a ClusterIP service.
- Create curl pods to test connectivity from inside the cluster.
- Apply a default-deny ingress policy.
- Allow ingress only from selected pods.
- Explore how selectors behave with OR and AND logic.
- Use namespaceSelector to allow traffic from another namespace.
- Understand the difference between podSelector alone and namespaceSelector plus podSelector.
- Apply default-deny egress.
- Allow egress only to selected pods.
- Fix DNS resolution after egress is restricted.
- Understand why standard Kubernetes network policies are namespace-scoped.
- Clean up the lab resources.
By the end, you should understand not only what YAML to write, but also why a policy behaves the way it does.
Why We Need a CNI That Supports NetworkPolicy
A very important point: Kubernetes NetworkPolicy objects are not enforced by Kubernetes alone.
Kubernetes stores the policy object in the API server, but actual traffic enforcement is handled by the cluster networking implementation, usually called the CNI plugin. CNI stands for Container Network Interface.
If the cluster uses a network plugin that does not support NetworkPolicy, then you can still create network policy objects, but they will not affect traffic.
That is why this lab uses Calico. Calico is a CNI plugin that supports Kubernetes NetworkPolicy and is commonly used for network security use cases.
Minikube can run with different network configurations. For this lab, we explicitly start Minikube with Calico enabled.
Create a Fresh Minikube Cluster with Calico
You can either stop your existing Minikube cluster or delete it completely. For a clean lab, I prefer using a separate Minikube profile.
Delete an old profile if it already exists:
minikube delete -p network-policies
Now create a new cluster with Calico:
minikube start -p network-policies --cni=calico --cpus=4 --memory=8192
The important part is this option:
--cni=calico
Calico requires more resources than the simplest Minikube networking setup. If you see pods restarting or containers being killed during startup, check whether your machine has enough available CPU and memory.
After the cluster starts, check the system pods:
kubectl get pods -n kube-system
You can also watch the pods until everything becomes stable:
kubectl get pods -n kube-system -w
During startup, some Calico-related pods can briefly show ContainerCreating, Error, or restarts. That is not necessarily a problem during bootstrapping. What matters is that the pods eventually become Running.
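If you prefer not to watch manually, you can wait for the Calico node agents to become ready. This is a convenience command that assumes the standard Calico label k8s-app=calico-node on those pods:

kubectl wait -n kube-system --for=condition=Ready pod -l k8s-app=calico-node --timeout=180s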
You should see system pods such as:
calico-kube-controllers
calico-node
coredns
kube-apiserver
kube-controller-manager
kube-proxy
kube-scheduler
Once the system pods are running, the cluster is ready.
Create the Color API Deployment and Service
For this lab we will use a small application called color-api.
Create a file named color-api.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: color-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: color-api
  template:
    metadata:
      labels:
        app: color-api
    spec:
      containers:
        - name: color-api
          image: nikolaysm/color-api:1.2.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: color-svc
spec:
  selector:
    app: color-api
  ports:
    - port: 80
      targetPort: 80
Apply it:
kubectl apply -f color-api.yaml
Check the deployment and service:
kubectl get deploy,svc
You should see something like this:
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/color-api 1/1 1 1 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/color-svc ClusterIP 10.96.123.45 <none> 80/TCP 30s
The service is a ClusterIP service. That means it is reachable from inside the cluster, but not exposed externally.
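To confirm that the service is actually backed by the color-api pod, you can inspect its endpoints. You should see the pod IP listed with port 80:

kubectl get endpoints color-svc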
Create a Curl Pod for Testing
Now create a pod that we can use as a client.
Create a file named curl.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: curl
  labels:
    app: curl
spec:
  containers:
    - name: curl
      image: nikolaysm/alpine-curl:1.0.0
      command: ["sh", "-c", "sleep infinity"]
Apply it:
kubectl apply -f curl.yaml
Check that the pod is running:
kubectl get pods
Now open a shell inside the pod:
kubectl exec -it curl -- sh
From inside the pod, call the API through the service:
curl http://color-svc/api
The request should succeed.
Exit the pod:
exit
At this point, no network policies are restricting traffic. The curl pod can talk to the color-api pod through the color-svc service.
This is the default Kubernetes behavior.
Default Kubernetes Network Behavior
By default, pods are non-isolated.
That means:
- Pods can receive traffic from other pods.
- Pods can send traffic to other pods.
- Pods can usually reach services in other namespaces.
- Pods can usually reach external destinations if the cluster allows it.
A pod becomes isolated for ingress only when at least one NetworkPolicy selects it and has Ingress in policyTypes.
A pod becomes isolated for egress only when at least one NetworkPolicy selects it and has Egress in policyTypes.
This distinction is important. Ingress and egress are evaluated independently.
For a connection to work under strict policies, both sides matter:
- The source pod must be allowed to send traffic.
- The destination pod must be allowed to receive traffic.
If egress is unrestricted but ingress is blocked, the request still fails. If ingress is allowed but egress is blocked, the request also fails.
Create a Default-Deny Ingress Policy
Let us now block all inbound traffic to all pods in the current namespace.
Create a directory for policies:
mkdir -p policies
Create policies/deny-all-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
Apply it:
kubectl apply -f policies/deny-all-ingress.yaml
Check the policy:
kubectl get networkpolicy
Describe it:
kubectl describe networkpolicy deny-all-ingress
The important part is:
podSelector: {}
An empty podSelector selects all pods in the namespace where the policy exists.
Because this policy declares Ingress but does not define any ingress rules, it allows no ingress traffic.
Now test the API again:
kubectl exec -it curl -- sh
curl http://color-svc/api
This request should hang and eventually time out.
Exit the shell:
exit
The curl pod is still able to send traffic. However, the color-api pod is no longer allowed to receive ingress traffic.
This is a very common pattern:
- Apply default deny.
- Add explicit allow rules.
Allow Ingress from the Curl Pod
Now we will allow only the curl pod to call the color-api pod.
Create policies/allow-curl-to-color-api.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-to-color-api
spec:
  podSelector:
    matchLabels:
      app: color-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
      ports:
        - protocol: TCP
          port: 80
Apply it:
kubectl apply -f policies/allow-curl-to-color-api.yaml
Test again:
kubectl exec -it curl -- sh
curl http://color-svc/api
exit
This time the request should succeed.
The policy says:
- Select destination pods with app: color-api.
- Allow ingress traffic to those pods.
- The source must be a pod with app: curl.
- The destination port must be TCP 80.
Why the Service Is Not Mentioned in the Policy
Notice that the policy does not target the service color-svc.
That is intentional.
Kubernetes network policies select pods, not services. A service provides a stable virtual IP and DNS name, but the traffic eventually reaches one or more pods. The policy is enforced at the pod traffic level.
So even when you call:
curl http://color-svc/api
the destination selected by the policy is still the color-api pod.
That is why the policy targets:
podSelector:
  matchLabels:
    app: color-api
not the service name.
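You can verify this by calling the pod IP directly from the allowed curl pod. The same allow rule applies whether the request goes through the service or straight to the pod. Replace <pod-ip> with the IP shown by the first command:

kubectl get pod -l app=color-api -o wide
kubectl exec -it curl -- curl http://<pod-ip>/api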
Create a Second Curl Pod That Should Be Blocked
Let us confirm that the allow rule is specific.
Create curl-two.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: curl-two
  labels:
    app: curl-two
spec:
  containers:
    - name: curl
      image: nikolaysm/alpine-curl:1.0.0
      command: ["sh", "-c", "sleep infinity"]
Apply it:
kubectl apply -f curl-two.yaml
Now test from curl-two:
kubectl exec -it curl-two -- sh
curl http://color-svc/api
This request should be blocked.
Exit:
exit
The reason is simple: the policy allows traffic only from pods with this label:
app: curl
The second pod has this label:
app: curl-two
Therefore, it does not match the allow rule.
Network Policies Are Additive
Network policies are additive. They do not work like ordered firewall rules where the first matching rule wins.
If multiple policies select the same pod, the allowed traffic is the union of all rules across those policies.
That means:
- A policy cannot deny traffic that another Kubernetes NetworkPolicy allows.
- Standard Kubernetes NetworkPolicy has allow rules, not explicit deny rules.
- To deny traffic, you usually isolate the pod first and then allow only what is required.
Calico has its own policy resources with more advanced behavior, including explicit deny and ordering, but that is separate from standard Kubernetes NetworkPolicy.
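For illustration only, here is a rough sketch of what a Calico-native policy with an explicit deny and an order field can look like. This uses the projectcalico.org/v3 API, which is applied with calicoctl or the Calico API server rather than plain kubectl, and the selectors are just examples borrowed from this lab:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: deny-curl-two-to-color-api
  namespace: default
spec:
  order: 50
  selector: app == "color-api"
  types:
    - Ingress
  ingress:
    - action: Deny
      source:
        selector: app == "curl-two"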
Selector Logic: OR vs AND
Selector structure is one of the most important parts of network policies.
A tiny YAML change can completely change the policy behavior.
Multiple from Entries Mean OR
Example:
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: curl
      - podSelector:
          matchLabels:
            tier: backend
This allows traffic from:
- Pods with app: curl
- OR pods with tier: backend
Each item under from is a separate allowed peer.
Multiple Labels in One matchLabels Block Mean AND
Example:
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: curl
            tier: backend
This allows traffic only from pods that have both labels:
app: curl
tier: backend
A pod with only app: curl does not match.
A pod with only tier: backend does not match.
The pod must satisfy both label requirements.
Test Selector Logic with Labels
Update curl.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: curl
  labels:
    app: curl
    tier: frontend
spec:
  containers:
    - name: curl
      image: nikolaysm/alpine-curl:1.0.0
      command: ["sh", "-c", "sleep infinity"]
Update curl-two.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: curl-two
  labels:
    app: curl-two
    tier: backend
spec:
  containers:
    - name: curl
      image: nikolaysm/alpine-curl:1.0.0
      command: ["sh", "-c", "sleep infinity"]
Now update the allow policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-to-color-api
spec:
  podSelector:
    matchLabels:
      app: color-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 80
This should allow both pods:
- curl matches app: curl.
- curl-two matches tier: backend.
Apply the policy and recreate the pods:
kubectl apply -f policies/allow-curl-to-color-api.yaml
kubectl delete pod curl curl-two --ignore-not-found
kubectl apply -f curl.yaml
kubectl apply -f curl-two.yaml
Test both:
kubectl exec -it curl -- sh
curl http://color-svc/api
exit
kubectl exec -it curl-two -- sh
curl http://color-svc/api
exit
Both should work.
Now change the policy to require both labels in the same selector:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-to-color-api
spec:
  podSelector:
    matchLabels:
      app: color-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
              tier: backend
      ports:
        - protocol: TCP
          port: 80
Apply it:
kubectl apply -f policies/allow-curl-to-color-api.yaml
Now neither pod should be allowed:
- curl has app: curl, but tier: frontend.
- curl-two has tier: backend, but app: curl-two.
To match this policy, a pod would need:
app: curl
tier: backend
Selector Summary Table
| YAML shape | Meaning |
|---|---|
| podSelector: {} in spec | Selects all pods in the policy namespace |
| Multiple items under from | OR logic |
| Multiple items under to | OR logic |
| Multiple labels under one matchLabels | AND logic |
| namespaceSelector and podSelector in the same list item | AND logic |
| namespaceSelector and podSelector as separate list items | OR logic |
| No ingress rules with policyTypes: [Ingress] | No ingress is allowed |
| No egress rules with policyTypes: [Egress] | No egress is allowed |
This table is worth remembering because many network policy bugs come from misunderstanding selector structure.
Using matchExpressions
You are not limited to matchLabels. You can also use matchExpressions.
For example, this allows pods where the app label is either curl or curl-two:
from:
  - podSelector:
      matchExpressions:
        - key: app
          operator: In
          values:
            - curl
            - curl-two
Other useful operators include:
- In
- NotIn
- Exists
- DoesNotExist
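For example, this sketch would allow ingress from any pod that carries a tier label, whatever its value:

from:
  - podSelector:
      matchExpressions:
        - key: tier
          operator: Exists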
For simple policies, matchLabels is usually easier to read. For broader matching rules, matchExpressions can be more flexible.
Allow Traffic from Another Namespace
So far, our policies have worked inside the default namespace.
Now let us create a new namespace called dev:
kubectl create namespace dev
Check its labels:
kubectl get namespace dev --show-labels
Kubernetes automatically adds a stable namespace label:
kubernetes.io/metadata.name=dev
You can use this label in a namespaceSelector.
Create curl-dev.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: curl-dev
  namespace: dev
  labels:
    app: curl
spec:
  containers:
    - name: curl
      image: nikolaysm/alpine-curl:1.0.0
      command: ["sh", "-c", "sleep infinity"]
Apply it:
kubectl apply -f curl-dev.yaml
Because this pod runs in the dev namespace and the service runs in the default namespace, use the fully qualified service name:
kubectl exec -it curl-dev -n dev -- sh
curl http://color-svc.default.svc.cluster.local/api
exit
With the policies currently in place, this request should be blocked. A podSelector in an ingress rule matches pods only in the policy's own namespace, so the curl-dev pod in dev is not covered by the existing allow rule.
Now update policies/allow-curl-to-color-api.yaml to allow traffic from app: curl pods in the dev namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-to-color-api
spec:
  podSelector:
    matchLabels:
      app: color-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: dev
          podSelector:
            matchLabels:
              app: curl
      ports:
        - protocol: TCP
          port: 80
Apply it:
kubectl apply -f policies/allow-curl-to-color-api.yaml
Test again:
kubectl exec -it curl-dev -n dev -- sh
curl http://color-svc.default.svc.cluster.local/api
exit
The request should now work.
Same List Item vs Separate List Item
This is a critical detail.
This policy peer:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: dev
  podSelector:
    matchLabels:
      app: curl
means:
Allow pods with app: curl in namespaces labeled kubernetes.io/metadata.name=dev.
The namespace and pod requirements are combined.
But this policy peer list:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: dev
- podSelector:
    matchLabels:
      app: curl
means:
Allow all pods from the dev namespace, OR allow pods with app: curl from the policy namespace.
That second version is much broader.
This is one of the most common mistakes when writing network policies. The extra dash changes the logic.
Why Namespace Selectors Matter
Imagine you have these namespaces:
apps-dev
mongodb-dev
apps-prod
mongodb-prod
You may want:
- Pods in apps-dev to talk only to MongoDB in mongodb-dev.
- Pods in apps-prod to talk only to MongoDB in mongodb-prod.
- Dev workloads should not talk to production databases.
- Production workloads should not talk to development databases.
If the same app labels exist in both environments, a podSelector alone is not enough. You need namespace-level selection as well.
For example:
from:
  - namespaceSelector:
      matchLabels:
        environment: prod
    podSelector:
      matchLabels:
        app: backend
This means:
Allow only backend pods from production namespaces.
That is much safer than allowing every pod with app: backend from any namespace.
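Note that only the kubernetes.io/metadata.name label is added automatically. A custom label such as environment: prod has to be applied by you, for example (the namespace names here are just the hypothetical ones from above):

kubectl label namespace apps-prod environment=prod
kubectl label namespace mongodb-prod environment=prod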
Default-Deny Egress
So far, we focused on ingress. Now let us restrict outbound traffic.
Create policies/deny-all-egress.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
Apply it:
kubectl apply -f policies/deny-all-egress.yaml
Now test external connectivity from the curl pod:
kubectl exec -it curl -- sh
curl https://google.com
This should fail because the pod is no longer allowed to send outbound traffic.
Exit:
exit
With default-deny egress, every outbound connection must be explicitly allowed.
Allow Egress from Curl to Color API
Create policies/allow-curl-egress-to-color-api.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-egress-to-color-api
spec:
  podSelector:
    matchLabels:
      app: curl
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: color-api
      ports:
        - protocol: TCP
          port: 80
Apply it:
kubectl apply -f policies/allow-curl-egress-to-color-api.yaml
This policy allows pods with app: curl to send traffic to pods with app: color-api on TCP port 80.
But there is a gotcha.
If you call the color-api pod IP directly, it may work. If you call the service name, it may still fail:
kubectl exec -it curl -- sh
curl http://color-svc/api
Why?
Because color-svc is a DNS name. Before the pod can connect to the API, it must resolve the service name through CoreDNS.
If egress is denied and DNS is not explicitly allowed, service name resolution fails.
Find the Color API Pod IP
To see the difference, get the pod IP:
kubectl get pod -l app=color-api -o wide
You may see something like:
NAME READY STATUS RESTARTS AGE IP
color-api-7b9d7c9c7b-lx8tm 1/1 Running 0 10m 10.244.0.12
Now call that IP from the curl pod:
kubectl exec -it curl -- sh
curl http://10.244.0.12/api
exit
This can work because the destination matches the allowed pod selector.
But this call may fail:
kubectl exec -it curl -- sh
curl http://color-svc/api
exit
That failure is not caused by the allowed API traffic itself. It happens because DNS traffic is blocked.
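You can confirm that name resolution is the failing step by testing DNS directly from the pod:

kubectl exec -it curl -- nslookup color-svc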
Allow Egress to CoreDNS
In most Kubernetes clusters, DNS is provided by CoreDNS in the kube-system namespace.
Check the labels on the namespace:
kubectl get namespace kube-system --show-labels
You should see:
kubernetes.io/metadata.name=kube-system
Now check CoreDNS labels:
kubectl get pods -n kube-system --show-labels
In many clusters, CoreDNS pods use:
k8s-app=kube-dns
Create policies/allow-egress-dns.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Apply it:
kubectl apply -f policies/allow-egress-dns.yaml
Now try the service name again:
kubectl exec -it curl -- sh
curl http://color-svc/api
exit
The request should now work.
Why DNS Should Often Be a Separate Policy
You could put DNS egress rules into every application policy, but that quickly becomes repetitive.
It is usually cleaner to create a separate DNS policy, for example:
metadata:
  name: allow-egress-dns
Then application-specific policies can focus on application traffic:
metadata:
  name: allow-curl-egress-to-color-api
This keeps responsibilities separate:
- DNS policy: allows name resolution.
- Application policy: allows the actual application communication.
That separation becomes much easier to maintain as the cluster grows.
Egress to External IPs
Network policies can also allow traffic to external IP ranges using ipBlock.
For example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-example-cidr
spec:
  podSelector:
    matchLabels:
      app: curl
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443
This allows the selected pods to connect on TCP port 443 to addresses in that CIDR range.
Be careful with broad ranges such as:
cidr: 0.0.0.0/0
That effectively allows access to the entire IPv4 internet unless you also use exceptions or additional controls. For sensitive workloads, prefer narrow ranges.
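If you genuinely need broad internet egress, ipBlock also supports an except list. A sketch, assuming you want to keep the private RFC 1918 ranges blocked:

egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 10.0.0.0/8
            - 172.16.0.0/12
            - 192.168.0.0/16
    ports:
      - protocol: TCP
        port: 443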
Network Policies Are Namespace-Scoped
Standard Kubernetes NetworkPolicy is a namespaced resource.
If you create this policy in the default namespace:
metadata:
  name: deny-all-egress
it only applies to pods selected in the default namespace.
It does not automatically apply to pods in dev, staging, or prod.
Test this with the curl-dev pod:
kubectl exec -it curl-dev -n dev -- sh
curl https://google.com
exit
If no egress policy exists in dev, the request may still work, even though egress is denied in default.
This is not a bug. It is how standard Kubernetes network policies work.
If you want default deny in every namespace, you need to create a policy in every namespace, or use a higher-level mechanism supported by your platform or CNI.
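Because the deny policies in this lab do not hard-code a namespace in their metadata, one simple approach is to apply them into each namespace explicitly. The namespace list here is just an example:

for ns in default dev staging prod; do kubectl apply -n "$ns" -f policies/deny-all-ingress.yaml -f policies/deny-all-egress.yaml; done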
Calico also supports Calico-specific policy resources, including global policies, but those are not the same as portable Kubernetes NetworkPolicy objects.
A Full Example Policy Set
Here is a compact set of policies for the lab.
Default Deny Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
Default Deny Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
Allow Curl to Reach Color API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-to-color-api
spec:
  podSelector:
    matchLabels:
      app: color-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
      ports:
        - protocol: TCP
          port: 80
Allow Curl Egress to Color API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-egress-to-color-api
spec:
  podSelector:
    matchLabels:
      app: curl
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: color-api
      ports:
        - protocol: TCP
          port: 80
Allow DNS Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Useful Commands
List network policies:
kubectl get networkpolicy
List network policies in all namespaces:
kubectl get networkpolicy -A
Describe a policy:
kubectl describe networkpolicy deny-all-ingress
Show pod labels:
kubectl get pods --show-labels
Show namespace labels:
kubectl get namespaces --show-labels
Show pod IPs:
kubectl get pods -o wide
Run a request from a pod:
kubectl exec -it curl -- curl http://color-svc/api
Open a shell in a pod:
kubectl exec -it curl -- sh
Check CoreDNS pods:
kubectl get pods -n kube-system -l k8s-app=kube-dns
Troubleshooting
Traffic Is Still Allowed
Check whether a policy actually selects the destination pod for ingress or the source pod for egress.
kubectl get pods --show-labels
kubectl describe networkpolicy <policy-name>
Also verify that your CNI supports NetworkPolicy. Without a supporting CNI, policies may exist but not be enforced.
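For the Calico setup used in this lab, you can check that the Calico node agents are present and running. This assumes the standard k8s-app=calico-node label:

kubectl get pods -n kube-system -l k8s-app=calico-node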
Traffic Is Blocked Unexpectedly
Check for default-deny policies:
kubectl get networkpolicy
kubectl describe networkpolicy <policy-name>
Remember that if any ingress policy selects a pod, the pod becomes isolated for ingress. If no allow rule matches the traffic, it is blocked.
The same applies to egress.
Service Name Does Not Resolve
If egress is restricted, DNS may be blocked.
Test DNS:
kubectl exec -it curl -- nslookup color-svc
If DNS fails, allow egress to CoreDNS on UDP and TCP port 53.
Cross-Namespace Traffic Fails
Use the fully qualified service name:
curl http://color-svc.default.svc.cluster.local/api
Then check whether your policy uses a namespaceSelector and whether the namespace labels match.
Selector Looks Correct but Still Fails
Check whether you accidentally created OR logic instead of AND logic.
This is AND:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: dev
  podSelector:
    matchLabels:
      app: curl
This is OR:
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: dev
- podSelector:
    matchLabels:
      app: curl
The second version is not equivalent.
Best Practices
Start with a default-deny mindset for sensitive namespaces. This forces you to document required communication paths explicitly.
Use stable labels. A network policy is only as reliable as the labels it depends on.
Separate DNS egress from application egress. It makes policies easier to read and reuse.
Avoid broad ipBlock ranges unless required. Prefer narrow CIDR ranges for external dependencies.
Test policies with temporary client pods. A simple curl pod is often enough to validate behavior.
Document expected traffic flows. Network policies are easier to maintain when there is a clear map of which workload should talk to which workload.
Be careful with YAML list structure. In network policies, a single dash can change your rule from restrictive to permissive.
Apply policies namespace by namespace. Standard Kubernetes network policies do not automatically protect every namespace.
Cleanup
Delete all policies:
kubectl delete -f policies/ --ignore-not-found
Delete the pods and application:
kubectl delete -f curl-dev.yaml --ignore-not-found
kubectl delete -f curl-two.yaml --ignore-not-found
kubectl delete -f curl.yaml --ignore-not-found
kubectl delete -f color-api.yaml --ignore-not-found
Delete the namespace:
kubectl delete namespace dev --ignore-not-found
Delete the Minikube profile:
minikube delete -p network-policies
Conclusion
Network policies are one of the most practical Kubernetes security controls. They let you move from an open cluster network to an explicit communication model.
The main lessons are:
- Kubernetes allows pod communication by default.
- NetworkPolicy enforcement requires a supporting CNI plugin.
- Calico is a good choice for testing network policies with Minikube.
- A default-deny policy isolates selected pods.
- Ingress and egress are evaluated independently.
- Services are not selected by network policies; pods are.
- Multiple policies are additive.
- Selector structure matters.
- DNS must be allowed when default-deny egress is enabled.
- Standard Kubernetes NetworkPolicy objects are namespace-scoped.
Once you understand these rules, network policies become much easier to reason about. They are not just YAML files. They are a way to describe and enforce how applications are allowed to communicate inside your Kubernetes cluster.
References
- Kubernetes: Network Policies - kubernetes[.]io/docs/concepts/services-networking/network-policies/
- Kubernetes: Use Calico for NetworkPolicy - kubernetes[.]io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/
- Minikube: Network Policy - minikube[.]sigs[.]k8s[.]io/docs/handbook/network_policy/
- Calico: Quickstart for Calico on Minikube - docs[.]tigera[.]io/calico/latest/getting-started/kubernetes/minikube
- Calico: Get Started with Kubernetes Network Policy - docs[.]tigera[.]io/calico/latest/network-policy/get-started/kubernetes-policy/kubernetes-network-policy
- Calico: Enable a Default Deny Policy - docs[.]tigera[.]io/calico/latest/network-policy/get-started/kubernetes-default-deny