Kubernetes Network Policies: Controlling Pod Ingress and Egress Traffic


Kubernetes networking is open by default. If no network policies apply, pods can usually communicate with other pods in the same namespace, pods in other namespaces, and external network endpoints. That default is convenient while learning Kubernetes, but it is too permissive for many real applications.

NetworkPolicy gives us a Kubernetes-native way to define which network connections are allowed for selected pods. We can regulate incoming traffic with ingress rules, outgoing traffic with egress rules, or both.

Network policies are especially useful when we want to reduce the blast radius of a compromised pod. For example, if a frontend container is compromised, it should not automatically be able to talk to every database, cache, internal API, and administrative endpoint in the cluster.

What a NetworkPolicy Controls

A Kubernetes NetworkPolicy controls traffic at the IP address and port level. In practice, this means Layer 3 and Layer 4 traffic such as TCP, UDP, and SCTP.

It can define traffic rules based on:

  • Which pods the policy applies to.
  • Which pods are allowed as sources or destinations.
  • Which namespaces are allowed as sources or destinations.
  • Which external CIDR blocks are allowed.
  • Which ports and protocols are allowed.

A native Kubernetes NetworkPolicy does not inspect HTTP paths, headers, users, JWT claims, TLS properties, or application-specific authorization rules. For that, you normally need a service mesh, an ingress controller, an API gateway, or an extended CNI-specific policy system.

The Important Prerequisite: CNI Support

Creating a NetworkPolicy object is not enough. Kubernetes stores the object, but the enforcement is done by the network plugin. Your cluster must use a CNI plugin that supports NetworkPolicy enforcement.

Common options include:

CNI or policy engine        | Notes
Calico                      | Widely used for Kubernetes networking and network policy. It supports native Kubernetes NetworkPolicy and also has its own richer Calico policy API.
Cilium                      | eBPF-based CNI with support for Kubernetes NetworkPolicy and extended Cilium policy resources.
Cloud-provider integrations | Managed Kubernetes services often expose network policy support through their own configuration options.

For local testing with Minikube, Calico is a common choice:

minikube start --cni=calico

After the cluster starts, verify that the CNI pods are running:

kubectl get pods -n kube-system

If your CNI does not enforce network policies, the YAML will still be accepted by Kubernetes, but it will not restrict traffic.
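To narrow the check to Calico's node agents specifically, you can filter by the label the calico-node DaemonSet normally carries (the label and namespace below are assumptions; confirm them on your install):

kubectl get pods -n kube-system -l k8s-app=calico-node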

The Three Main Parts of a NetworkPolicy

A practical NetworkPolicy has three important parts.

Part             | Field                           | Required?   | Purpose
Target pods      | spec.podSelector                | Yes         | Selects the pods that the policy applies to.
Policy direction | spec.policyTypes                | Recommended | Defines whether the policy controls Ingress, Egress, or both.
Allow rules      | spec.ingress and/or spec.egress | Depends     | Defines what traffic is allowed for the selected pods.

The top-level podSelector always selects the pods that the policy protects; it does not select the traffic sources (source pods are selected inside the ingress rules themselves). This is one of the most important details to understand.

For example, this selector means: “this policy applies to pods with app: mongodb.”

podSelector:
  matchLabels:
    app: mongodb

An empty selector means: “this policy applies to all pods in this namespace.”

podSelector: {}

Default Behavior: Allow First, Isolate Later

By default, pods are non-isolated. That means:

  • All ingress traffic is allowed if no ingress policy applies to the pod.
  • All egress traffic is allowed if no egress policy applies to the pod.

A pod becomes isolated for ingress when at least one NetworkPolicy selects that pod and includes Ingress in policyTypes.

A pod becomes isolated for egress when at least one NetworkPolicy selects that pod and includes Egress in policyTypes.

Once a pod is isolated in a direction, only traffic explicitly allowed by matching policies is allowed in that direction.

This creates an allow-list model. Native Kubernetes network policies do not work like ordered firewall rules with explicit deny statements. Policies are additive. If multiple policies apply to the same pod, the allowed traffic is the union of all matching policies.
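As a minimal sketch of this additive behavior (the labels here are hypothetical), the two policies below both select app: api pods. Applied together, api pods accept ingress from frontend pods or monitoring pods, because the allowed traffic is the union of both policies:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-to-api
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: monitoring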

Both Sides Matter: Source Egress and Destination Ingress

For pod-to-pod traffic, two things may need to be true:

  • The source pod’s egress policy must allow the connection.
  • The destination pod’s ingress policy must allow the connection.

If the backend pod is restricted by an egress policy and the database pod is restricted by an ingress policy, both policies must allow the backend to connect to the database.

Think of it like this:

backend pod  ---- egress allowed? ----->  mongodb pod
backend pod  ---- ingress allowed? ---->  mongodb pod
backend pod  <--- reply traffic --------  mongodb pod

Reply traffic for an allowed connection is allowed implicitly. You normally do not need a separate reverse rule for the response packets.

Example Scenario

Assume we have three workloads in a namespace called production:

Component | Label         | Expected communication
Frontend  | app: frontend | Can call the backend API.
Backend   | app: backend  | Can call MongoDB.
MongoDB   | app: mongodb  | Should only accept traffic from the backend.

The goal is simple:

  • MongoDB should not be reachable from every pod.
  • Only backend pods should connect to MongoDB.
  • The connection should be limited to TCP port 27017.

Ingress Policy: Allow Backend to Reach MongoDB

This policy selects MongoDB pods and allows incoming traffic only from backend pods on TCP port 27017.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-mongodb
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: mongodb
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 27017

Important details:

  • podSelector under spec selects the protected pods: MongoDB.
  • podSelector under from selects allowed source pods: backend.
  • Because there is no namespaceSelector, the backend pods must be in the same namespace as the policy.
  • Traffic from frontend pods, debug pods, and unrelated workloads is not allowed into MongoDB unless another policy allows it.
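
To try the policy, apply it like any other manifest and confirm it was registered (the file name here is just an assumption):

kubectl apply -f allow-backend-to-mongodb.yaml
kubectl get networkpolicy -n production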

Egress Policy: Allow Backend to Connect to MongoDB

Now let us protect backend egress. This policy selects backend pods and allows outgoing traffic to MongoDB pods on TCP port 27017.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-egress-to-mongodb
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb
      ports:
        - protocol: TCP
          port: 27017

This is the egress side of the same relationship. The backend pods can send traffic to MongoDB pods, but only on the allowed port.

There is one practical catch: if the backend connects to MongoDB through a Kubernetes Service name such as mongodb.production.svc.cluster.local, the backend also needs DNS access. A strict egress policy can accidentally block DNS.

Allow DNS When Egress Is Restricted

Most pods need DNS to resolve service names. If you add a default-deny egress policy or strict egress rules, explicitly allow access to the cluster DNS service.

A common policy looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-dns
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Some clusters label DNS pods differently. Always check first:

kubectl get pods -n kube-system --show-labels

Selector Behavior: AND vs OR

Selectors are powerful, but YAML structure matters a lot.

One List Item Means AND

This example allows ingress only from pods with app: frontend inside namespaces with environment: staging.

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            environment: staging
        podSelector:
          matchLabels:
            app: frontend

Because namespaceSelector and podSelector are inside the same list item, both conditions must match.

In plain English:

Allow traffic from pods where:
namespace has environment=staging
AND
pod has app=frontend

Two List Items Mean OR

This example is different:

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            environment: staging
      - podSelector:
          matchLabels:
            app: frontend

Now there are two separate items under from.

In plain English:

Allow traffic from:
any pod in namespaces with environment=staging
OR
any local-namespace pod with app=frontend

This difference is subtle, but it can completely change the security behavior of the policy.

Selecting a Namespace by Name

NetworkPolicy does not have a direct field like this:

namespaceName: staging

Instead, use labels. Kubernetes automatically sets the immutable label kubernetes.io/metadata.name on namespaces, where the value is the namespace name.
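You can confirm the label is present on a namespace before relying on it in a selector (staging is just an example name):

kubectl get namespace staging --show-labels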

Example: allow traffic from frontend pods in the staging namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-staging-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: staging
          podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

This policy applies to app: backend pods in production, and it allows incoming traffic from app: frontend pods in the staging namespace.

Using IP Blocks

You can also allow traffic from or to CIDR ranges with ipBlock.

Example: allow egress from selected pods to an external API range.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443

Use ipBlock mostly for external IP ranges. Pod IPs are ephemeral and unpredictable, so labels are usually the better abstraction for pod-to-pod communication.

Also be careful with Services, load balancers, NAT, and cloud networking. Source or destination IPs can be rewritten before or after policy processing depending on the plugin, cloud provider, and Service implementation.
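ipBlock also accepts an except list that carves ranges out of the allowed CIDR. A common sketch is "internet only" egress that excludes the RFC 1918 private ranges (adjust the ranges to your environment; this is an illustrative fragment, not a complete policy):

egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 10.0.0.0/8
            - 172.16.0.0/12
            - 192.168.0.0/16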

Default Deny Patterns

A strong production setup often starts with default-deny policies and then adds explicit allow rules.

Default Deny Ingress

This policy selects all pods in the namespace and allows no incoming traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress

After applying this, pods in the namespace will not accept ingress traffic unless another policy allows it.

Default Deny Egress

This policy selects all pods in the namespace and allows no outgoing traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress

Be careful with this one. It blocks DNS and external traffic unless you explicitly allow them.
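
A common companion policy is therefore a namespace-wide DNS allowance, essentially the earlier DNS policy with an empty podSelector (again, verify the DNS pod labels in your cluster first):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-for-all-pods
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53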

Default Deny Ingress and Egress

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

This is a good security baseline, but only when you are ready to define all required communication paths.

Native Kubernetes NetworkPolicy vs Calico NetworkPolicy

In this article, the main focus is the native Kubernetes API:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy

Calico also provides its own policy resources:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy

And cluster-wide policies:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy

The difference matters.

Feature                     | Kubernetes NetworkPolicy   | Calico NetworkPolicy
API group                   | networking.k8s.io/v1       | projectcalico.org/v3
Port and IP based rules     | Yes                        | Yes
Pod and namespace selectors | Yes                        | Yes
Explicit deny rules         | No                         | Yes
Policy ordering or priority | No                         | Yes
Global cluster-wide policy  | No native standard policy  | Yes, with GlobalNetworkPolicy
Host endpoint policy        | No                         | Yes
Portability across CNIs     | High                       | Calico-specific
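
To make the difference concrete, here is a rough sketch of a Calico-specific policy that uses both ordering and an explicit deny rule. Treat it as an illustration of the projectcalico.org/v3 API rather than a tested manifest:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: mongodb-explicit-rules
  namespace: production
spec:
  # Lower order values are evaluated before higher ones.
  order: 100
  selector: app == 'mongodb'
  types:
    - Ingress
  ingress:
    # Explicitly deny frontend pods, something native NetworkPolicy cannot express.
    - action: Deny
      source:
        selector: app == 'frontend'
    # Allow backend pods to reach the MongoDB port.
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'backend'
      destination:
        ports:
          - 27017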

A good learning path is to understand native Kubernetes NetworkPolicy first. Once the fundamentals are clear, Calico, Cilium, or another extended policy model becomes much easier to reason about.

What Native NetworkPolicy Cannot Do

Native Kubernetes NetworkPolicy is useful, but it is intentionally limited.

It cannot directly:

  • Apply explicit deny rules.
  • Define rule ordering or priority.
  • Target Kubernetes Services by name.
  • Inspect HTTP paths, methods, headers, or TLS identities.
  • Log allowed or blocked connections by itself.
  • Force all internal traffic through a central gateway.
  • Provide a global default policy across every namespace.

Some of these gaps can be addressed with CNI-specific features, service mesh policies, admission controllers, observability tools, or cloud-provider controls.
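
For example, L7-aware rules are possible with CNI-specific resources such as CiliumNetworkPolicy. The sketch below (labels, port, and path are hypothetical) would restrict frontend pods to GET requests against the backend API:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-get-only
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          # HTTP-level filtering is an extension beyond native NetworkPolicy.
          rules:
            http:
              - method: GET
                path: "/api/.*"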

Practical Debugging Commands

List policies in a namespace:

kubectl get networkpolicy -n production

Describe how Kubernetes interpreted a policy:

kubectl describe networkpolicy allow-backend-to-mongodb -n production

Check pod labels:

kubectl get pods -n production --show-labels

Check namespace labels:

kubectl get namespaces --show-labels

Test connectivity from a temporary pod:

kubectl run tmp-shell \
  --rm -it \
  --image=curlimages/curl \
  -n production \
  -- sh

Then test a service or pod endpoint:

curl -v http://backend:8080
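
curl can also do a rough TCP-level check against a non-HTTP port such as MongoDB's by using the telnet scheme; if the connect stage succeeds, the policy allows the traffic, and you can interrupt with Ctrl+C (the Service name is an assumption):

curl -v telnet://mongodb:27017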

For DNS troubleshooting:

nslookup mongodb.production.svc.cluster.local

Production Checklist

Before rolling out network policies in a real cluster, verify the following:

  • Your CNI plugin supports and enforces Kubernetes NetworkPolicy.
  • Workloads have stable and meaningful labels.
  • Namespaces have labels that can be used safely in selectors.
  • DNS traffic is explicitly allowed when egress is restricted.
  • Policies are tested in a non-production namespace first.
  • You understand which traffic paths are required by the application.
  • You have a rollback plan if a policy blocks critical traffic.
  • You use plugin-specific observability when available, such as Calico tooling or Cilium Hubble.

A Recommended Rollout Strategy

Do not start by blocking everything in the whole cluster.

A safer rollout looks like this:

  1. Pick one namespace or one application.
  2. Map real communication paths.
  3. Add labels to pods and namespaces (a quick labeling sketch follows this list).
  4. Add ingress policies first for the most sensitive workloads.
  5. Add egress policies carefully, starting with obvious destinations.
  6. Explicitly allow DNS where needed.
  7. Test with real application flows.
  8. Move toward default-deny once required allow rules are known.

This gives you segmentation without breaking the cluster unexpectedly.
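
For step 3, pod labels normally belong in the workload's pod template, while namespace labels can be added directly with kubectl; for example (the namespace names and label values are placeholders):

kubectl label namespace staging environment=staging
kubectl label namespace production environment=production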

Conclusion

Network policies are one of the most important Kubernetes-native tools for workload isolation. They let us move away from the default open network model and define explicit communication boundaries between pods, namespaces, and external endpoints.

The key ideas are straightforward:

  • podSelector chooses the pods protected by the policy.
  • Ingress controls incoming traffic.
  • Egress controls outgoing traffic.
  • Policies are additive allow rules, not ordered deny rules.
  • Both source egress and destination ingress can matter.
  • CNI support is mandatory for enforcement.

Start with native Kubernetes NetworkPolicy, become comfortable with selectors and default-deny patterns, and then explore advanced features in Calico, Cilium, or your cloud provider when you need more control.
