Quick Start: Unboxing Istio Service Mesh


To everyone serving on duty or staying at home, I hope you are doing well during the coronavirus epidemic.

As a Kubernetes developer, the Istio service mesh might have drawn your attention, and you might be eager to explore it, only to find that you need a Kubernetes cluster first. The easiest options are Minikube or MicroK8s, both of which are fully supported by Istio. However, you might want to try it on a multi-node cluster, which is closer to your future production environment.

Setting up a multi-node cluster is not a single-command task, but it is practically achievable and affordable. I discussed how to set up my home lab with the Rancher server in my previous post, and here I will tell you how I learned Istio with that home lab.

A brief introduction

Istio Service Mesh improves service-to-service communication in a Kubernetes cluster by allowing fine-grained control and monitoring of traffic. While a Kubernetes Service provides basic load balancing over Pods, Istio takes it one step further: by injecting an Envoy proxy sidecar into each Pod, it makes both the incoming and outgoing traffic of a Pod manageable and configurable.

The advantage of the sidecar pattern is that it is transparent to applications. In other words, you don't need a dedicated client in your code; a call to another service within the mesh is just a plain REST call, while service discovery, load balancing, circuit breaking, and retry policies are decoupled from your application. This independence from the application gives you the freedom to choose your programming language and tools while still enjoying the benefits of the Service Mesh.
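For instance, once the sidecar has been injected (see the installation step below), it sits next to the application container without any change to the application itself. A minimal check, assuming a Pod labelled app=nginx in the default namespace:

# List the containers inside one of the Pods (the label is an assumption
# for this sketch); the injected sidecar shows up as istio-proxy
kubectl get pods -l app=nginx \
  -o jsonpath='{.items[0].spec.containers[*].name}'
# expected output: nginx istio-proxy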

Install Istio

Installing Istio is pretty straightforward; the Istio Getting Started Guide is more than enough to get started, but first you need to have your Kubernetes cluster ready. I prepared my cluster with the Rancher server, and Istio also supports other Kubernetes platforms like MicroK8s and Minikube.

After downloading the package and adding the istioctl CLI to the PATH variable, I installed Istio with the demo profile, which installed the istio-ingressgateway service for later use. I also added the istio-injection label to the default namespace; afterwards, an Envoy proxy is injected into every Pod created in that namespace.

istioctl manifest apply --set profile=demo
kubectl label namespace default istio-injection=enabled
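As a quick sanity check with plain kubectl (the exact Pod list depends on your Istio version):

kubectl get pods -n istio-system          # core components and gateways should be Running
kubectl get namespace -L istio-injection  # the default namespace should show "enabled"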

My example setup

Referring to the helloworld example from Istio, I prepared an example starting from empty yaml files, as I believe that is the best way to verify my understanding.

The example aimed to expose a simple Nginx service outside of the Kubernetes cluster and assumed that the service was rolling out a new version. At this point, half of the incoming requests had already been routed to the new version, while the other half were still handled by the old version. The following diagram shows the deployment overview and the relationships among the different resources; all yaml files can be found in my GitHub repo.

Not an ERD, but the relationships between the resources involved in the Service Mesh

Step 1: Create Kubernetes Service and Deployment

  • Starting from the basic setup, I first created 2 Nginx Deployments running different versions (v1.16 and v1.17); the yaml file can be found in GitHub, and a sketch of one Deployment follows this list.
  • Each Deployment had 3 replicas, so 6 Nginx Pods were created in total.
  • Each Pod was assigned the app and version labels for later use; it also mounted a test.txt file for testing with curl.
  • An Nginx Service was created after the Pods. Its selector only matched the app label, so it picked all 6 Nginx Pods as its Endpoints.
  • Since the default namespace had sidecar injection enabled, the DNS name of the service, nginx-svc.default.svc.cluster.local, was added to the Istio Service Registry.
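Here is a minimal sketch of one of the two Deployments, with names I assume to be consistent with the bullets above (the actual files, including the test.txt volume mount, are in the GitHub repo):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v117   # hypothetical name; the v1.16 Deployment differs only in image and labels
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: "1.17"
  template:
    metadata:
      labels:
        app: nginx        # matched by the Service selector
        version: "1.17"   # matched by the Destination Rule Subset in Step 2
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80

And the Service from the last bullet: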
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    service: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
  - name: https
    protocol: TCP
    port: 443
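After applying the files, a couple of kubectl queries can confirm the setup:

# All 6 Pods should be picked up as Endpoints of the Service
kubectl get endpoints nginx-svc
# With sidecar injection enabled, each Pod should report 2/2 containers ready
kubectl get pods -l app=nginx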

Step 2: Create Istio Virtual Service and Destination Rule

On top of Kubernetes Services and Pods, Istio introduces the Virtual Service and the Destination Rule, which are the key building blocks of a service mesh. While a Service acts as a load balancer in front of Pods, a Virtual Service provides an abstraction layer above the Service that allows us to control traffic routing and manipulate incoming requests. When a request comes into a Virtual Service, it is filtered by a set of Routing Rules to determine its final Destination; the Destination ultimately is an Endpoint, which is determined by a Destination Rule. The Destination Rule divides the Endpoints of a Service into Subsets; a Subset matches Endpoints by looking up the Pod labels, and it also allows configuring the load balancing policy among those Endpoints.

  • First, I created a Destination Rule for the Nginx Service; its host: nginx-svc attribute looked up the entry in the Istio Service Registry that referred to the Nginx Service.
  • 2 Subsets under the Destination Rule divided the 6 Nginx Pods into 2 groups by their app and version labels; these 2 Subsets were named v116 and v117.
  • I have not defined any TrafficPolicy at either the Destination Rule or the Subset level; the default is the round-robin load balancing policy.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx
spec:
  host: nginx-svc
  subsets:
  - name: v117
    labels:
      app: nginx
      version: "1.17"
  - name: v116
    labels:
      app: nginx
      version: "1.16"
  • Next, I created a Virtual Service that relied on the previous Destination Rule.
  • The destination.host: nginx-svc attributes referred to the same entry in the Istio Service Registry, which was the Nginx Service. Here you might notice that a Virtual Service can cover multiple Services.
  • The destination.subset attributes referred to the v116 and v117 Subsets.
  • Finally, the weight attributes defined that 50% of the traffic should go to the v116 Subset and the other 50% to the v117 Subset.
  • Since only one http.route was defined, it is the default Routing Rule for this Virtual Service. A Virtual Service can have many Routing Rules that match and route requests to different Subsets.
  • The Virtual Service is accessible via the hostnames defined in the hosts list attribute, which in my example is nginx.hung.org.hk.
  • The gateways list attribute defined the Istio Gateway that this Virtual Service should bind to; details will be discussed later.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "nginx.hung.org.hk"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        host: nginx-svc
        subset: v117
      weight: 50
    - destination:
        host: nginx-svc
        subset: v116
      weight: 50
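Both resources are ordinary Kubernetes custom resources, so they are applied and inspected with the usual kubectl commands (the file names below follow my repo layout and may differ):

kubectl apply -f nginx-destinationrule.yaml -f nginx-virtualservice.yaml
kubectl get destinationrules,virtualservices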

Step 3: Create Istio Gateway

To facilitate communication with the outside of the mesh, Istio provides the ingress gateway to route inbound traffic from external parties to Pods, just like a Kubernetes Ingress Controller does. More than that, Istio also provides the egress gateway to control outgoing traffic from Pods to external parties.

When I installed Istio in the previous step, I picked the demo profile, which deployed the istio-ingressgateway and istio-egressgateway services into the istio-system namespace. The Endpoints of both services are standalone Envoy proxies that act as the windows through which the mesh communicates with the outside.
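You can list these two gateway services to confirm what the demo profile deployed:

# istio-ingressgateway is a LoadBalancer/NodePort Service; its NodePorts
# will be needed for testing later
kubectl -n istio-system get svc istio-ingressgateway istio-egressgateway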

An Istio Gateway is the configuration applied to these standalone Envoy proxies; it controls the ports and protocols that the proxies should listen to, and Virtual Services can then bind to these ports to communicate with the outside of the mesh.

  • To expose the Nginx Virtual Service, I had to create an Istio Gateway configuration; the label istio: ingressgateway defined in the selector attribute matched the istio-ingressgateway service in the istio-system namespace.
  • The servers.port attribute defined the ports opened for this Gateway.
  • The servers.hosts attribute restricted which Virtual Services can bind to this Gateway. The Virtual Service defined in Step 2 had the hostname nginx.hung.org.hk, which matched *.hung.org.hk, so it was allowed to bind to this Gateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: http
      number: 80
      protocol: HTTP
    hosts:
    - "*.hung.org.hk"
  • This completed my example setup. Before testing it, I had to find the NodePort assigned to the http2 port of the istio-ingressgateway service. Since a NodePort is randomly assigned, you might get a different value from my example (refer to Determining the ingress IP and ports in the Istio documentation for details).
Find the NodePort assigned to the http2 port of the istio-ingressgateway service; in my case it is port 30934
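The lookup can also be scripted with a jsonpath query, along the lines of what the Istio document suggests:

# Extract the NodePort bound to the http2 port of the ingress gateway
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
echo $INGRESS_PORT   # 30934 in my case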

Testing success but not completed

Once I had found the assigned NodePort (30934), I ran a simple test with curl and watch as shown below. The host rancher-node04 is one of the worker nodes in my Kubernetes cluster, and the URI path /echo/test.txt pointed to the volume mounted by the Deployment, which told the version of the Nginx server that made the response. The most important option was the Host: nginx.hung.org.hk header in the curl command; this header must match the configuration in the previous Istio Gateway and Virtual Service setup, otherwise the request cannot be delivered as expected.

watch -n 1 'curl -v -s \
--header "Host: nginx.hung.org.hk" \
http://rancher-node04:30934/echo/test.txt'

The watch executed the given curl command every second and kept monitoring its output. You might notice that the version number in the last line jumped between v1.16.1 and v1.17.9 from time to time; that version number came from the different Nginx Endpoints, so the requests seemed to be evenly distributed as desired. Everything looked good until I tested it with a browser.

Testing the example with the watch and curl commands

Workaround: HAProxy in front of the worker nodes

The problem was that when the browser opened a URL with a specified port (nginx.hung.org.hk:30934 in my case), both the requested domain and the port were put into the HTTP Host header, so the header's value no longer matched the host configuration in the Gateway and the Virtual Service. I tried to append the port number in the yaml files, but it failed the validation when the changes were applied.
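The effect can be reproduced with curl by appending the port to the Host header; with no matching host in the Gateway, the ingress gateway rejects the request (a sketch, the 404 status being what I would expect from Envoy here):

# Same request as before, but with the port inside the Host header
curl -s -o /dev/null -w "%{http_code}\n" \
  --header "Host: nginx.hung.org.hk:30934" \
  http://rancher-node04:30934/echo/test.txt
# 404 instead of 200, because no Gateway/Virtual Service host matches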

My workaround was to set up HAProxy on another VM as the load balancer in front of the 3 worker nodes (I will need one eventually to complete a production-like setup). On that VM, HAProxy binds to the standard ports 80 and 443 for the http and https protocols, so there is no longer a port number in the HTTP Host header. Here is my HAProxy configuration:

global
    maxconn 2000
    log 127.0.0.1 local0 info

defaults
    log global
    mode http
    timeout server 30s
    timeout connect 30s
    timeout client 30s
    retries 5

frontend http
    bind *:80
    default_backend k8s-istio-ingress-gateway

backend k8s-istio-ingress-gateway
    server node02 rancher-node02:30934
    server node03 rancher-node03:30934
    server node04 rancher-node04:30934
Finally, the browser can open the test page via HAProxy using the standard http port (80)
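The same test also works from the command line, assuming a DNS record or /etc/hosts entry that points nginx.hung.org.hk to the HAProxy VM; the Host header is now set implicitly and carries no port:

curl -s http://nginx.hung.org.hk/echo/test.txt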

What’s next

This post introduced the Istio Virtual Service, Destination Rule, and Gateway, and their usage. I hope it can help you begin your Istio journey.

There are still a lot of topics to be covered, such as managing outbound traffic with the egress gateway, repackaging the yaml files with Helm, and monitoring and tracing, which are also great topics in the Istio world. Hopefully I can cover those in my next posts.

Please feel free to contact me if you are interested in this topic or have any enquiries.

Email: kwonghung.yip@gmail.com

LinkedIn: linkedin.com/in/yipkwonghung

Twitter: @YipKwongHung

GitHub: https://github.com/kwonghung-YIP

