Introduction
In Kubernetes, a NodePort Service is a Service type that exposes a set of Pods to traffic from outside the cluster by opening a specific port on every Node (VM, bare-metal machine, or container host). This allows users to access the application using the Node’s IP address and the NodePort (a port number in the range 30000-32767 by default).
The NodePort Service acts as a bridge between external traffic and Pods running inside the cluster by listening on a specific port on each Node and forwarding traffic to the desired Pod. This is particularly useful in scenarios where you want to expose your application to the external world without using a LoadBalancer or Ingress Controller.
How Does NodePort Work?
When you create a NodePort Service, the following happens:
- Kubernetes assigns a port (between 30000-32767) on each Node in the cluster. This port is known as the NodePort.
- The NodePort Service listens on this port across all Nodes, even if your application is running only on one Pod in one Node.
- Any traffic that hits <NodeIP>:<NodePort> is automatically forwarded to one of the Pods selected by the Service based on the label selector.
- Kubernetes internally uses iptables or IPVS (IP Virtual Server) to handle the traffic routing from the NodePort to the corresponding Pod.
In simple terms, the NodePort acts as a public-facing endpoint that can be accessed externally.
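If you want to see this plumbing on a Node, you can inspect the rules kube-proxy programs. This is only a rough troubleshooting sketch; it assumes root access to a Node, and the exact chain layout varies with the kube-proxy version and mode.
# iptables mode: NodePort rules live in the KUBE-NODEPORTS chain of the nat table
sudo iptables -t nat -L KUBE-NODEPORTS -n
# IPVS mode: list the virtual servers and their backend Pod IPs instead
sudo ipvsadm -Ln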
How Traffic Flow Works in NodePort Service
The traffic flow for a NodePort Service can be explained as follows:
- Client Request: A client sends a request to any Node’s IP address on the specified NodePort, i.e. http://<NodeIP>:<NodePort>.
- NodePort Handling: The Node receives the request on the NodePort (e.g., 30008). It then uses iptables or IPVS rules to forward the request to one of the Pods matching the Service’s label selector.
- Pod Response: The selected Pod processes the request and sends the response back through the NodePort to the client.
- Load Balancing: If multiple Pods are running, Kubernetes will automatically distribute traffic among them based on its internal load-balancing mechanism.
- External Access: You can access your application using http://<NodeIP>:<NodePort>.
Even if your Pod is destroyed or recreated, the NodePort will remain the same as long as the Service is not deleted.
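As a quick check of this flow from a machine outside the cluster (assuming it can reach the Node’s IP and the port is not blocked by a firewall), you can send a request with curl; substitute your own values:
curl -v http://<NodeIP>:<NodePort>/
The verbose output shows the TCP connection being made to the Node, while the HTTP response itself comes from whichever Pod the Service forwarded the request to.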
Advantages of NodePort Service
- Exposes Pods Externally: NodePort Service allows external users to access your application without additional configuration.
- Simple Setup: You don’t need a LoadBalancer or Ingress Controller; just a NodePort is enough to expose your application.
- High Availability: The NodePort is open on every Node, so the application can still be reached through another Node’s IP even if a particular Node or Pod goes down.
Limitations of NodePort Service
- Fixed Port Range: NodePort uses a port range of 30000-32767, which means you are limited to these ports for external exposure.
- Node IP Dependency: Clients need to know the Node IP to access the application. In dynamic environments, Node IPs might change frequently.
- Security Risks: Opening a NodePort directly exposes the application to the internet, making it vulnerable to external threats. Proper firewall and security settings must be applied.
- No Automatic Load Balancing Across Nodes: If a Pod exists only on one Node, other Nodes will simply forward traffic to that Node, creating unnecessary network hops.
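One way to avoid the extra hop mentioned in the last point is to set externalTrafficPolicy: Local on the Service, which tells kube-proxy to forward NodePort traffic only to Pods on the receiving Node (and also preserves the client source IP). A minimal sketch, reusing the web-service example defined later in this article:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  externalTrafficPolicy: Local  # route only to Pods on the Node that received the traffic
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008
The trade-off is that a Node without a matching Pod will drop the traffic instead of forwarding it, so clients (or an external load balancer with health checks) must target only Nodes that actually run the application.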
Example: Creating a NodePort Service in Kubernetes
Let’s create a NodePort Service that exposes a simple web application running in a Pod.
Step 1: Create a Deployment
First, create a Deployment for a sample web application.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web-container
          image: nginx
          ports:
            - containerPort: 80
Apply the deployment:
kubectl apply -f deployment.yaml
Verify the Pods are running:
kubectl get pods
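With replicas: 3 you should see three Pods in the Running state. The names and timings below are illustrative; the hash suffixes will differ in your cluster:
NAME                       READY   STATUS    RESTARTS   AGE
web-app-7c9d6bd8d9-4xk2p   1/1     Running   0          30s
web-app-7c9d6bd8d9-9qzlm   1/1     Running   0          30s
web-app-7c9d6bd8d9-tr8vw   1/1     Running   0          30s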
Step 2: Create a NodePort Service
Now, create a NodePort Service to expose the web application.
nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80        # Port exposed by the Service inside the cluster
      targetPort: 80  # Port the container listens on
      nodePort: 30008 # External port accessible from outside the cluster
Apply the Service:
kubectl apply -f nodeport-service.yaml
Verify the Service is created:
kubectl get svc
Output:
NAME          TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
web-service   NodePort   10.96.12.20   <none>        80:30008/TCP   10s
Step 3: Access the Application Externally
Find the Node IP by running:
kubectl get nodes -o wide
Output:
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP
node1   Ready    master   5d    v1.29.1   192.168.1.100
node2   Ready    worker   5d    v1.29.1   192.168.1.101
Now you can access your application using:
http://192.168.1.100:30008
or
http://192.168.1.101:30008
Kubernetes will automatically route the traffic to one of the healthy Pods.
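As a quick sanity check (assuming the stock nginx image, which serves its default welcome page), you can send a few requests from outside the cluster and confirm they all succeed:
for i in 1 2 3; do curl -s http://192.168.1.100:30008 | grep "<title>"; done
Each request should print <title>Welcome to nginx!</title>; since every replica serves the same page, this confirms reachability rather than which Pod answered.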
How Load Balancing Works in NodePort
Although you are accessing the Node directly, Kubernetes will load balance traffic between Pods.
- If 3 Pods are running, Kubernetes will distribute traffic among them (randomly in iptables mode, round-robin by default in IPVS mode).
- If a Pod goes down, Kubernetes will automatically remove it from the Service and continue serving traffic from the remaining Pods.
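You can see the Pod IPs the Service is currently balancing across by listing its Endpoints:
kubectl get endpoints web-service
The output contains one <PodIP>:80 entry per ready Pod (the addresses below are illustrative):
NAME          ENDPOINTS                                   AGE
web-service   10.244.1.5:80,10.244.2.7:80,10.244.2.8:80   2m
When a Pod is deleted or fails its readiness probe, its address disappears from this list and kube-proxy stops routing traffic to it.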
NodePort Without External IP (Cloud Providers)
If you deploy your cluster on a cloud provider (such as AWS, GCP, or Azure), the Nodes’ internal IPs are usually not reachable from the internet, so a NodePort alone will not give you public access unless the Nodes have public IPs and the provider’s firewall or security group allows traffic to the NodePort range.
In such cases:
- Use LoadBalancer Service or Ingress Controller for external access.
- Avoid exposing NodePorts directly to the internet.
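For comparison, switching to a LoadBalancer Service on a supported cloud provider is usually a one-line change; the provider then provisions an external IP or hostname for you. A sketch, reusing the same selector and ports as web-service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer  # the cloud provider provisions an external load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80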