Kubernetes Core Concepts - Services

What is a Kubernetes Service?

A Kubernetes Service is a REST object in the API that we define in a manifest file and POST to the API server.

It is a crucial component in Kubernetes: it lets you expose an application running in Pods so that it is reachable from outside your cluster.

As per the official documentation, it is an abstract way to expose an application running on a set of Pods as a network service.

It enables communication between various components within and outside of the application, and it provides a single, stable IP address and DNS name through which we can access the underlying application Pods.


Understand it with an example

Let's assume we have an application where one group of Pods serves front-end load to users, another group runs back-end processes, and a third group connects to an external data source.

The Service enables connectivity between these groups of Pods: it makes the front-end application available to end users, allows communication between the back-end and front-end Pods, and helps establish connectivity to the external data source.

Thus Services enable loose coupling between the microservices in our application. Let's take a look at one such diagram.


Why Service is required?

Pods are nonpermanent resources. Whenever Pods fail, they get replaced by new ones with new IPs. Scaling up also introduces new Pods with new IP addresses, while scaling down removes Pods. Rolling updates, too, replace existing Pods with new ones with new IPs.

All these different operations create churn around the IPs and show why one should never connect directly to any particular Pod.

With a Service in place, the Pods can scale up and down, be updated, be rolled back, or fail, and end users will continue to access them without interruption, because the Service absorbs all these changes and updates its list of endpoints automatically. A Service never changes its stable IP, DNS name, or port.

Labels and loose coupling

Kubernetes Services are loosely coupled with Pods via labels and selectors. This is similar to the way Deployments are loosely coupled with Pods.

[Figure: a Service selecting Pods via labels]

The above figure shows three Pods labelled type=prod and version=v1, and a Service whose selector matches those labels.

In this scenario you get stable connectivity to all the Pods: end users send requests to the Service, and the Service forwards them to the underlying Pods.

For a Service to send traffic to a Pod, the Pod must carry all the labels the Service is selecting on, though it can also have additional labels the Service isn't looking for.

Here is an example where none of the Pods matches the Service, because the Service is looking for Pods with both labels assigned.
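As a minimal sketch of this non-matching situation (the object names here are illustrative), the Service below selects on both type=prod and version=v1, while the Pod carries only one of those labels, so it is not selected:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service          # illustrative name
spec:
  selector:
    type: prod
    version: v1               # the Service requires BOTH labels
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # illustrative name
  labels:
    type: prod                # matches
                              # version=v1 is missing, so no match
spec:
  containers:
    - name: nginx
      image: nginx
```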


The example below shows Pods that have all the labels the Service is selecting on; it makes no difference that they also carry additional labels.
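Sketching the matching case with the same illustrative names, the Pod now carries both selected labels plus an extra one, and the Service will route traffic to it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # illustrative name
  labels:
    type: prod                # matches the selector
    version: v1               # matches the selector
    env: dev                  # extra label; ignored by the selector
spec:
  containers:
    - name: nginx
      image: nginx
```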


Services and Endpoint Objects

Every time you create a Service object, Kubernetes automatically creates an associated Endpoints object.

An Endpoints object stores a dynamic list of IP addresses of the individual healthy Pods assigned to it. The Service references this object so that it has a record of the Pods' internal IPs and can forward traffic to them.

Methods to create a Service object

Like all other Kubernetes objects a Service can be defined in YAML or JSON manifest files.

Define a service without a selector

Let us create a Service object named lco-demo-nginx-service exposing port 80 in the service while targeting port 8080 in the Pods.

root@kube-master:~/services# cat service_without_selector.yml
apiVersion: v1
kind: Service
metadata:
  name: lco-demo-nginx-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoints object manually like below.

root@kube-master:~/services# cat endpoint.yml
apiVersion: v1
kind: Endpoints
metadata:
  name: lco-demo-nginx-service
subsets:
  - addresses:
      - ip:
    ports:
      - port: 8080

Make sure the names of the Service and the Endpoints object are an exact match.

Use cases where we need to create a Service object without selectors:

  • When we want to have an external database cluster in our environment.
  • When you want to point your Service to a Service in a different Namespace or on another cluster.
  • When you are migrating a workload to Kubernetes.

Now let us create both the objects.

# kubectl apply -f service_without_selector.yml

# kubectl apply -f endpoint.yml

# kubectl get svc lco-demo-nginx-service

# kubectl get ep lco-demo-nginx-service

# kubectl describe svc lco-demo-nginx-service


Define a service with selector

Let us create a Service object named lco-demo-nginx-service-2 exposing port 80 on the Service while targeting port 8080 in Pods matching the selector label tier=front-end.

root@kube-master:~/services# cat service_with_selector.yml
apiVersion: v1
kind: Service
metadata:
  name: lco-demo-nginx-service-2
spec:
  selector:
    tier: front-end
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Here we have defined a selector; it is now the controller's job to continuously watch for Pods matching the label tier: front-end.

Let us create the Service object here.

# kubectl apply -f service_with_selector.yml

# kubectl describe svc lco-demo-nginx-service-2


There are no endpoints listed yet because no Pod in our cluster carries the label tier: front-end.

Now let us create a Pod with the label tier: front-end and see whether the controller automatically picks it up.

# kubectl run service-demo-pod --image=nginx --port=8080 --labels="tier=front-end,env=prod"

# kubectl get pod service-demo-pod -o wide

# kubectl describe svc lco-demo-nginx-service-2


You can see above that the controller has automatically populated the Endpoints object with the Pod matching the selector label.

I will create another Pod with the same matching labels; let's see what changes happen to our Service's Endpoints object.

# kubectl run service-demo-pod-2 --image=nginx --port=8080 --labels="tier=front-end,env=prod"

# kubectl get pod service-demo-pod-2 -o wide  

# kubectl describe svc lco-demo-nginx-service-2



Here we have another IP added to our Endpoint object list.

If an Endpoints resource has more than 1,000 endpoints, a Kubernetes v1.21 cluster annotates it with endpoints.kubernetes.io/over-capacity: warning. This annotation indicates that the affected Endpoints object is over capacity.

Types of Kubernetes services

There are four types of Services in Kubernetes, which can be accessed from inside or outside the cluster depending on their type.

Accessing Services from inside the cluster

A ClusterIP Service has a stable virtual IP address that is only accessible from inside the cluster. This restricts the Service to the cluster; if you want to expose a ClusterIP Service externally, you need to implement port forwarding or a proxy.

All Pods in the cluster are pre-programmed to use the cluster's DNS service, meaning every Pod can resolve Service names to ClusterIPs. These are all guaranteed to be stable and long-lived.


All Services are of type ClusterIP by default unless another type is specified.
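A minimal ClusterIP Service manifest (names here are illustrative) can state the type explicitly, although omitting the type field produces the same result:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clusterip-demo        # illustrative name
spec:
  type: ClusterIP             # optional; ClusterIP is the default type
  selector:
    app: my-webapp
  ports:
    - port: 80
      targetPort: 8080
```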


Accessing Services from outside the cluster

For requests coming from outside the cluster, Kubernetes has two types of Services:

  • NodePort
  • LoadBalancer

Let us understand these two services in more detail.


A NodePort Service allows external access to Kubernetes resources via a dedicated static port on every cluster node, known as the NodePort. A ClusterIP Service, to which the NodePort Service routes, is created automatically.

One can contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.

[Figure: NodePort Service]

Let us take an example where we will create a NodePort type service object.

root@kube-master:~/services# cat nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service-example
spec:
  type: NodePort
  selector:
    app: my-webapp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008

By default and for convenience, the targetPort is set to the same value as the port field.

The Kubernetes control plane will allocate the nodePort from a range (default: 30000-32767).

Pods in the cluster can access this Service by the name nodeport-service-example on port 80, and clients connecting from outside the cluster can send traffic to any cluster node on port 30008.

If you want to specify particular IP(s) to proxy the port, you can set the --nodeport-addresses flag for kube-proxy or the equivalent nodePortAddresses field of the kube-proxy configuration file to particular IP block(s).
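As a sketch, a kube-proxy configuration file restricting NodePorts to one IP block might contain the following (the CIDR here is illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# NodePort traffic is proxied only on node IPs within these blocks
nodePortAddresses:
  - 192.0.2.0/24              # illustrative CIDR
```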

Let me create the service object now.

root@kube-master:~/services# kubectl apply -f nodeport.yaml
service/nodeport-service-example created

root@kube-master:~/services# kubectl get svc nodeport-service-example
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nodeport-service-example   NodePort   <none>        80:30008/TCP   45s

root@kube-master:~/services# kubectl describe svc nodeport-service-example


You can see that the Service object has been created with all the ports assigned. The Endpoints field doesn't have any Pod associated with it because the Service did not find any Pod matching the label selector app=my-webapp.

Now I will create a Deployment with matching labels, and then we will verify whether the Service endpoints get updated.

root@kube-master:~/services# cat nodeport-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeport-demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
root@kube-master:~/services# kubectl apply -f nodeport-deployment.yaml
deployment.apps/nodeport-demo-deployment created


We have three new endpoints (pods) mapped to our NodePort Service object.

Now let us try accessing our nginx application on NodePort 30008 using any of the cluster node IPs.



A LoadBalancer Service makes external access easier by integrating with an internet-facing load balancer on your underlying cloud platform, such as AWS, Azure, or GCP.

On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service.

Once applied, Kubernetes automatically creates the load balancer, implements all the required firewall rules, and exposes the Service on the external IP address assigned by your cloud provider.


We will not be doing a demo here, as LoadBalancer Services only work on clouds that support them.

Here is an example LoadBalancer Service yaml manifest file:

root@kube-master:~/services# cat loadbalancer-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-service-demo
spec:
  selector:
    app: my-webapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip:

Traffic from the external load balancer is directed at the backend Pods; the cloud provider decides how it is load balanced.


Instead of mapping a Service to Pods via label selectors such as app: my-webapp, the ExternalName Service type maps a Service to a DNS name. You specify these Services with the spec.externalName parameter.

This Service definition, for example, maps the my-webapp Service in the testing namespace to test.db.example.com:

apiVersion: v1
kind: Service
metadata:
  name: my-webapp
  namespace: testing
spec:
  type: ExternalName
  externalName: test.db.example.com


The front end of a Kubernetes Service provides a stable IP address, DNS name, and port that are guaranteed not to change for the entire life of the Service. The back end of a Service uses labels and selectors to load-balance traffic across a potentially dynamic set of application Pods running across the cluster.

This is all about Kubernetes Services.

Hope you like the tutorial. Stay tuned and don't forget to provide your feedback in the response section.

Happy Learning!
