1. Overview

Kubernetes is currently the most widely used container orchestration tool. It has become the de facto standard for the deployment of microservices. Often, each microservice is deployed as a Kubernetes Pod, and we need a stable network endpoint to access that microservice. In Kubernetes, we can achieve this using the Service object.

In this short tutorial, we’ll discuss how to expose a network application running inside a minikube cluster.

2. Setting up an Example

In this section, we’ll create a few Kubernetes objects to use for our examples. Let’s start with creating a new Namespace.

2.1. Creating a Kubernetes Namespace

Minikube comes with a few namespaces out of the box, and one such namespace is default. We can deploy applications inside this namespace. However, it’s a best practice to create a new namespace for the application deployment.

So, let’s create a new namespace with the name service-demo:

$ kubectl create ns service-demo
namespace/service-demo created
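Alternatively, for a declarative workflow, the same namespace can be created from a manifest and applied with kubectl apply -f. This is just an equivalent sketch; the imperative command above is all this tutorial needs:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: service-demo
```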

2.2. Creating a Kubernetes Deployment

Next, let’s create an NGINX deployment inside the service-demo namespace:

$ kubectl create deploy nginx --image=nginx:alpine -n service-demo
deployment.apps/nginx created

Finally, let’s verify that the NGINX pod is running:

$ kubectl get pods -n service-demo
NAME                    READY   STATUS    RESTARTS   AGE
nginx-b4ccb96c6-qzvmm   1/1     Running   0          2s

Now, the required setup is ready. In the following sections, we’ll discuss how to expose the nginx deployment using the various service types.

3. Exposing a Port Using port-forward

In the previous section, we deployed an NGINX pod. Kubernetes assigns an IP address to each pod that is routable within the cluster. However, using the pod’s IP address isn’t the most reliable method because it might change when the pod restarts or gets scheduled on another node. We can address this issue using a Kubernetes Service.

3.1. Creating a ClusterIP Service

First, let’s use the expose command to create a ClusterIP service and verify that the service has been created:

$ kubectl expose deploy nginx --type ClusterIP --port=80 -n service-demo
service/nginx exposed

$ kubectl get svc -n service-demo
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.96.252.114   <none>        80/TCP    2s

It’s important to note that the expose command creates a service using the selectors of the Deployment object. In our case, it’s using the selectors of the nginx deployment.
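For reference, the expose command above generates a Service object roughly equivalent to the following manifest. The app: nginx selector is the label that kubectl create deploy assigns to the deployment’s pods; the manifest is shown only for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: service-demo
spec:
  type: ClusterIP
  selector:
    app: nginx      # label set by "kubectl create deploy nginx"
  ports:
  - port: 80        # service port
    targetPort: 80  # container port receiving the traffic
    protocol: TCP
```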

3.2. Exposing the ClusterIP Service Using port-forward

In the previous section, we created a ClusterIP service named nginx that listens on TCP port 80.

Next, let’s use the port-forward command to forward the traffic from the local machine to the nginx pod:

$ kubectl port-forward svc/nginx 8080:80 -n service-demo
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080

In this example, we’ve used the colon (:) character to separate the two ports: 8080 is the local port, whereas 80 is the service port, from which the traffic is forwarded to the pod.

Now, let’s open another terminal and execute the curl command using 127.0.0.1:8080 as the web server’s URL:

$ curl http://127.0.0.1:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Here, we can see that the NGINX server responds with the welcome page.

3.3. Cleaning Up

In the next section, we’ll expose the nginx deployment using a NodePort service. But before that, let’s clean up the ClusterIP service.

port-forward is a blocking command. So, first, let’s press Ctrl+C to stop the port forwarding.

Now, let’s use the delete command to remove the nginx service:

$ kubectl delete svc nginx -n service-demo
service "nginx" deleted

4. Exposing the Port Using the NodePort Service

A NodePort service is the most common way of exposing a service outside the cluster. The NodePort service opens a specified port on all cluster nodes and forwards the traffic from this port to the service.

Let’s understand this with a simple example.

4.1. Creating a NodePort Service

First, let’s create a NodePort service using the expose command:

$ kubectl expose deploy nginx --port 80 --type NodePort -n service-demo
service/nginx exposed

In this example, we’ve used the --type option to specify the service type.

Next, let’s find out the allocated node port:

$ kubectl get svc -n service-demo
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.108.241.105   <none>        80:30136/TCP   2s

In the output, the PORT(S) column shows that the service port 80 is mapped to the node port 30136.

Now, we can use the IP address of the Kubernetes node and port 30136 from outside to access the NGINX server. Let’s see this in action in the next section.
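As a side note, expose allocates a random port from the cluster’s node port range (30000-32767 by default). If a fixed port is required, the nodePort field can be set explicitly in a manifest. Here’s a sketch, reusing the port from our output:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: service-demo
spec:
  type: NodePort
  selector:
    app: nginx        # label set by "kubectl create deploy nginx"
  ports:
  - port: 80          # service port
    targetPort: 80    # container port
    nodePort: 30136   # must fall within the node port range
```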

4.2. Using the NodePort Service

First, let’s find out the node on which the nginx pod is running:

$ kubectl get pods -o wide -n service-demo
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
nginx-b4ccb96c6-qzvmm   1/1     Running   0          2m36s   10.244.1.2   minikube-m02   <none>           <none>

Here, we’ve used the -o option with the get command to show a few additional fields. In the output, the NODE column shows that the pod is running on a minikube-m02 node.

Next, let’s find the IP address of the minikube-m02 node:

$ kubectl get nodes minikube-m02 -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube-m02   Ready    <none>   13m   v1.28.3   192.168.58.3   <none>        Ubuntu 22.04.3 LTS   5.15.0-41-generic   docker://24.0.7

In the above output, the column INTERNAL-IP shows the IP address of the Kubernetes node.

Now, from the external machine, let’s connect to the NGINX server using 192.168.58.3:30136 as the URL:

$ curl http://192.168.58.3:30136 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Here, we can see that the NGINX server is accessible using the IP address and port of the Kubernetes node.

4.3. Cleaning Up

Now, let’s do the cleanup of the NodePort service before going to the next section:

$ kubectl delete svc nginx -n service-demo
service "nginx" deleted

5. Exposing the Port Using the LoadBalancer Service

In a production environment, service availability is crucial. We need a reliable load balancer that can handle heavy traffic effortlessly to meet high request demands. In such cases, we can expose the application using the LoadBalancer service.

Kubernetes’ LoadBalancer service provides an option to create a cloud load balancer. It provides an externally accessible IP address that routes traffic to the application.

It’s important to note that minikube doesn’t create a load balancer in the cloud. Instead, the load balancer is simulated using a network tunnel. So, let’s see how to create a LoadBalancer service in minikube.

5.1. Creating a LoadBalancer Service

First, let’s create a service with the type LoadBalancer:

$ kubectl expose deploy nginx --port 80 --type LoadBalancer -n service-demo
service/nginx exposed

Next, let’s find out the IP address of the load balancer:

$ kubectl get svc -n service-demo
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.105.32.228   <pending>     80:30814/TCP   2s

In the above output, the column EXTERNAL-IP shows the IP address of the load balancer. However, in our case, it’s showing as <pending>.

It’s important to note that this is the default behavior of minikube. To get an external IP allocated, we need to execute the minikube tunnel command:

$ minikube tunnel
Status:	
	machine: minikube
	pid: 34540
	route: 10.96.0.0/12 -> 192.168.58.2
	minikube: Running
	services: [nginx]
	errors:
		minikube: no errors
		router: no errors
		loadbalancer emulator: no errors

Now, we can see that the load balancer is simulated using minikube’s tunnel.

5.2. Using the LoadBalancer Service

In the previous section, we used the minikube tunnel command to allocate the external IP. However, that is a blocking command.

So, let’s open another terminal and check the external IP address of the LoadBalancer service:

$ kubectl get svc -n service-demo
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.105.32.228   10.105.32.228   80:30814/TCP   34s

In this output, we can see that minikube has allocated an external IP to the LoadBalancer service. Notably, with the tunnel, the external IP is the same as the service’s cluster IP.

Now, let’s access the NGINX web server using 10.105.32.228 as the URL:

$ curl http://10.105.32.228
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Here, we can see that the NGINX server responds with the welcome page.

6. Cleaning Up

It’s a good practice to clean up the temporarily created resources. So, let’s do the cleanup in our cluster.

First, let’s press Ctrl+C to terminate the minikube tunnel command.

Next, let’s delete the service-demo namespace:

$ kubectl delete ns service-demo
namespace "service-demo" deleted

While setting up the example, we deployed all the resources in the service-demo namespace, and we wanted to remove all of them as part of the cleanup. Hence, we deleted the namespace directly.

However, we should be careful when performing delete operations in a production environment, where deleting a namespace directly is generally not recommended.

7. Conclusion

In this article, we discussed how to expose a service in minikube.

First, we discussed exposing the ClusterIP service using the port-forward command. Next, we discussed exposing a service using the Kubernetes node’s IP address and port.

Finally, we discussed exposing a service using the external load balancer.
