Demystifying Kubernetes Ingress

Tame Ingress with MicroK8s & MetalLB in your own backyard

Adrin Mukherjee
8 min read · Aug 18, 2023
Photo by Shane Aldendorff on Unsplash

In this post, we will attempt to fathom the depths of Kubernetes Ingress: what it is, what problems it solves, popular usage patterns, how to secure it, and so on. While managed Kubernetes services from popular public cloud providers (GKE, EKS, AKS, etc.) are often favored, each of them comes with its own nuances that tend to shift the focus away from Ingress itself. This post, however, pivots solely around Kubernetes Ingress. By the end of it, you might not have a halo magically appearing behind your head, but you will surely gravitate (or levitate, if you are really into that kind of thing) a little closer to a better understanding of Ingress.

By the way, lest we forget: this post assumes that you have had a few sips of the powerful potion of Kubernetes, are aware of the basic primitives, and know your way around the K8s world.

Note: The following GitHub link houses code for the representative API-enabled microservices (with Dockerfiles) referred to in this post, along with the essential K8s manifest files for the deployments, services, and ingress

Scratching the surface

Ingress helps create a secure entry point to the Kubernetes cluster through which we can expose our services for external clients to consume. However, the astute observer in us would know that this can be achieved using a NodePort or a LoadBalancer type K8s service as well. So why use Ingress at all?

Truth be told, with a NodePort type service we have to manage load balancing and routing outside the K8s cluster, while with LoadBalancer type services we could end up creating (and getting charged for) multiple network load balancers: one network load balancer for each LoadBalancer type service, plus another layer of proxy or load balancer to route traffic to the particular service represented by each network load balancer.

K8s Ingress abstracts all this complexity within the K8s cluster. It helps us access HTTP/HTTPS based services through a single externally accessible endpoint that we can configure to route to different services within the cluster based on host name and/or path. At the same time, it helps implement TLS/SSL termination, load balancing, and patterns like fan-out and name-based virtual hosting. In short, Kubernetes Ingress can be considered a layer-7 load balancer that resides within the cluster.

Ingress has two components: the Ingress controller and Ingress resources. The Ingress controller realizes the Ingress, typically with a load balancer or reverse proxy. Popular Ingress controllers are NGINX, Traefik, Contour, etc. The Ingress resource, on the other hand, is a Kubernetes object that can be defined with a manifest file (similar to a deployment or service) and generally houses host names and routes to various services.

Prelude

This post will use a locally installed single-node Kubernetes cluster powered by MicroK8s. This choice is shaped by the fact that MicroK8s is very lightweight and ships with an Ingress add-on, which can be enabled with just the flick of a finger. We will enable another add-on named MetalLB in our MicroK8s cluster. MetalLB will act as the load balancer for our cluster and, for all practical purposes, provide an external IP to our Ingress. Have you ever noticed how, when we create a LoadBalancer type service in a local Kubernetes cluster (powered by Minikube, MicroK8s, or any other new kid on the block), the EXTERNAL-IP is always stuck in a pending state? Well, MetalLB can help us circumvent that challenge in an amicable fashion.

Here are the commands to enable MetalLB and Ingress, assuming MicroK8s is already installed.

$ microk8s enable metallb
$ microk8s enable ingress

Note: The MetalLB add-on will ask for an IP address range (or CIDR) from which it assigns IPs to LoadBalancer type K8s services in the cluster
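The range can also be supplied inline while enabling the add-on. The range below is purely illustrative (it simply contains the IP assumed later in this post) and should be replaced with a free range on the local network:

$ microk8s enable metallb:10.11.12.1-10.11.12.100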

By the way, thanks are due to the Ingress add-on for the heavy lifting it did for us. Behind the curtains, it installed the NGINX Ingress controller and much more. With Ingress enabled, we should be able to check the contents of the recently created ingress namespace:

$ microk8s.kubectl get all -n ingress

In a single-node cluster, the above command should list a pod (representing the Ingress controller) and a corresponding DaemonSet. To make this setup leverage the MetalLB load balancer that we just enabled, all we need to do is create a LoadBalancer type service in front of the Ingress controller. The manifest file below (which we will fondly call ingress-lb-service.yaml) shows that this ingress service will run on ports 80 and 443.

apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

The chosen selector (name: nginx-ingress-microk8s) maps to one of the labels attached to the NGINX Ingress controller pod in the ingress namespace. To get hold of the labels, we just need to fire the following command:

$ microk8s.kubectl get po --show-labels -n ingress
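Alternatively, the labels alone can be pulled out with a jsonpath query (assuming the controller pod is the only pod in the ingress namespace):

$ microk8s.kubectl get po -n ingress -o jsonpath='{.items[0].metadata.labels}'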

Once the ingress-lb-service.yaml manifest is applied (with the command below), the ingress namespace will have a new member: service/ingress

$ microk8s.kubectl apply -f ingress-lb-service.yaml -n ingress

And instead of a 'pending' status under EXTERNAL-IP, the ingress service is magically provided with a sassy IP address. The IP address assigned depends on the range selected while enabling the MetalLB add-on. We will keep a note of this assigned IP address, since it will be vital while configuring the Ingress host. For the purpose of this post, we will assume it to be: 10.11.12.13
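The assigned address shows up in the EXTERNAL-IP column when the services in the ingress namespace are listed:

$ microk8s.kubectl get svc -n ingress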

Interlude

Now that the Ingress controller is amply configured, let’s turn our attention to the Ingress resource. We will make the following assumptions:

  • The Ingress resource will be placed in a different namespace: ingress-res
  • The Ingress will be hosted at: api.iloveingress.com
  • We have two representative API-enabled microservices, user_service and item_service, that we would like to expose with the help of Ingress and then perform path-based routing on
  • Each of these microservices is containerized and the container images have been imported into MicroK8s. Here's a sneak peek into the commands that achieve this for user_service in MicroK8s.
$ docker build . -t user_service:local
$ docker save user_service:local > user_service.tar
$ microk8s ctr image import user_service.tar

The image built above is a local image and has been tagged as 'user_service:local', which means that the image only exists in MicroK8s' local image store. As such, imagePullPolicy must be set to Never as part of the container definition in the corresponding manifest file (see the sketch below), otherwise MicroK8s will try to fetch the image from Docker Hub. The same set of commands will have to be run for item_service as well.
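Here is a minimal sketch of what the deployment and service manifests for user_service could look like, assuming the container listens on port 8081. The names, labels, and ports are illustrative and should match whatever the actual manifests in the repository use:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: user_service:local
          imagePullPolicy: Never   # use the locally imported image; never pull from Docker Hub
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - port: 8081
      targetPort: 8081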

The Ingress resource

Here’s the complete Ingress resource manifest file with path-based routing that follows a fan-out pattern:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.iloveingress.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 8081
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: item-service
                port:
                  number: 9091

When a request comes in for api.iloveingress.com, the Ingress routes it to either user-service or item-service based on its path (/users/… or /products/…).

Simple fan-out pattern with K8s ingress

The above manifest (saved as ingress-definition.yaml) can be applied as follows:

$ microk8s.kubectl apply -f ingress-definition.yaml -n ingress-res
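Note that the ingress-res namespace must exist before the manifest is applied; if it does not, it can be created with:

$ microk8s.kubectl create namespace ingress-res

Once applied, the Ingress resource (and the address it picked up) can be inspected with:

$ microk8s.kubectl get ingress -n ingress-res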

The host configuration

The Ingress resource defined above has a host. Since we are working on a local machine, we can create the required entries in the local hosts file (/etc/hosts), as shown below. Note that the IP against the host is the load balancer IP of the K8s Ingress (controller) service, which in this case is assumed to be 10.11.12.13. The contents of the /etc/hosts file should look somewhat like:

127.0.0.1 localhost
10.11.12.13 api.iloveingress.com

At this point, we should be able to unleash the power of Ingress by simply calling the APIs of the microservices from the browser (or cURL) and observing how magically each request gets routed to the appropriate service.

$ curl -v http://api.iloveingress.com/users/100
$ curl -v http://api.iloveingress.com/products/1001

Securing the Ingress

Securing the Ingress is quite simple. We just need to use an X.509 certificate and private key to create a K8s TLS type secret and subsequently refer to this secret in the Ingress definition. In this post, we will use a self-signed certificate. However, the fundamentals remain the same for a CA-signed certificate as well. Here are the commands to generate a self-signed certificate. Note that the common name (CN) in the certificate should be the same as the Ingress host.

$ openssl genrsa -out server.key 2048
$ openssl req -new -key server.key -subj "/CN=api.iloveingress.com" -out server.csr
$ openssl x509 -signkey server.key -in server.csr -req -days 365 -out server.crt

At this point we have a certificate file named server.crt and a private key file named server.key. Now let's create a K8s TLS secret in the same namespace as the Ingress resource, with the following command:

$ microk8s.kubectl create secret tls ingress-tls --cert ./server.crt --key ./server.key -n ingress-res
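The newly created secret can be verified with:

$ microk8s.kubectl get secret ingress-tls -n ingress-res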

The final step is to refer to this secret in the Ingress definition, as shown below, and apply the updated definition (with kubectl apply) to the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.iloveingress.com
      secretName: ingress-tls
  rules:
    - host: api.iloveingress.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 8081
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: item-service
                port:
                  number: 9091
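Assuming the updated definition is saved back into the same ingress-definition.yaml file, it can be re-applied with:

$ microk8s.kubectl apply -f ingress-definition.yaml -n ingress-res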

Now we should be able to access our service APIs over HTTPS (secured) instead of HTTP. Since the certificate is self-signed, cURL needs the -k (insecure) flag to skip certificate verification.

$ curl -k https://api.iloveingress.com/users/100
$ curl -k https://api.iloveingress.com/products/1001

A note on name based virtual hosting

With name-based virtual hosting, we can map two or more host names to the same IP address. Essentially, this means that the Ingress definition will have two or more hosts with associated HTTP paths (along with the corresponding back-ends).

First, we have to create the corresponding X.509 certificate and private key. However, this time we will use the wildcard '*.iloveingress.com' as the common name (CN) for the certificate, because we want to support two host names (users.iloveingress.com and products.iloveingress.com) via the same certificate.
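The commands mirror the ones used earlier; only the CN and the secret name change (the file names below are illustrative):

$ openssl genrsa -out server-new.key 2048
$ openssl req -new -key server-new.key -subj "/CN=*.iloveingress.com" -out server-new.csr
$ openssl x509 -signkey server-new.key -in server-new.csr -req -days 365 -out server-new.crt
$ microk8s.kubectl create secret tls ingress-tls-new --cert ./server-new.crt --key ./server-new.key -n ingress-res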

Here’s a representation of how the two existing services could be configured with name-based virtual hosting. Note that the new secret (named ingress-tls-new), which holds the X.509 certificate and private key, is referenced in the Ingress definition.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - users.iloveingress.com
        - products.iloveingress.com
      secretName: ingress-tls-new
  rules:
    - host: users.iloveingress.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 8081
    - host: products.iloveingress.com
      http:
        paths:
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: item-service
                port:
                  number: 9091

Name based virtual hosting with K8s ingress

At the same time, we will have to add these host names against the load balancer IP address of the K8s Ingress (controller) service in the /etc/hosts file.

127.0.0.1 localhost
10.11.12.13 users.iloveingress.com
10.11.12.13 products.iloveingress.com

At this juncture, we should be able to call the service APIs from the browser or cURL, using the virtual host names assigned to these services.

$ curl -k https://users.iloveingress.com/users/100
$ curl -k https://products.iloveingress.com/products/100

Postlude

As the great Morpheus (of Matrix fame) once said: “I can only show you the door. You’re the one that has to walk through it.” So, go right ahead through the door, start experimenting with Kubernetes Ingress, and rest assured you are going to enjoy it!
