Okay, I know most folks will be thinking: Linkerd? You should have gone with Istio (and if you, like me, don't know either of the two, don't worry). I didn't know exactly what a service mesh does until I actually tried one. So be patient and try this tutorial if you have the time. In this post I'll explain why I didn't use Istio and why I tried this in the first place.
So what is a service mesh? There are tons of articles that describe what a service mesh is, and another full blog post from me won't add much. To keep it short: a service mesh lets you view all the interactions between your services, configure rules, monitor service health and tinker with your service invocations without changing your applications. The key phrase is 'without changing your applications'. A good article to start with, by the Linkerd founders, is here.
However, you don't need to read up on all the information in the world to try one out. Thanks to the folks behind the Linkerd project, you can try out a service mesh in about the same time it takes to read about one on the web. So why Linkerd, you ask? Well, for starters, it has been around since long before Istio came out, Linkerd2 is really easy to understand, and it complements the BWCE features we already have. For example, Istio provides tracing OOTB, but that gives you a service trace, not BW process tracing. Does this mean Istio isn't suited for BWCE? Absolutely not; using Linkerd was a personal preference. Linkerd2 is also apparently faster, has commercial support (which many of our customers like) and some very cool reporting dashboards.
I haven't played with Istio yet, but I have heard that it is quite tricky to configure. I need to give a special call-out to the Linkerd maintainers: it's super easy even for novices to try out. You'll have it running in minutes. This reminds me of something Solomon Hykes, co-founder of Docker, has said multiple times: the best tools allow you to do the most powerful things in a simple way. Enough theory, let's start the actual work.
You can install Linkerd2 in your Kubernetes (in my case minikube) environment by following Getting Started steps 1 to 3 on the Linkerd website.
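At the time of writing, those steps boil down to roughly the following; this is a sketch of the flow, so check it against the current Getting Started guide before running it:

```shell
# Download the linkerd CLI (installs under ~/.linkerd2/bin)
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

# Validate that your cluster is ready for the control plane
linkerd check --pre

# Install the Linkerd2 control plane into the cluster
linkerd install | kubectl apply -f -

# Verify the installation completed successfully
linkerd check
```

These commands need a running cluster (minikube in my case), so make sure `kubectl` is pointing at it first.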
All the artifacts are shared in my GitHub repository here. We are going to reuse the ConfigMaps example for this demo.
If you have seen the ConfigMaps post, you already have the bookstore-config ConfigMap created. You can see a detailed post here on how to create this ConfigMap.
This example uses two ConfigMaps: one for the consumption app and one for the service. You can create the consumption ConfigMap using the command:

```shell
kubectl create configmap bookstore-consumption --from-env-file=k8sprofile.properties
```
So how do you bind these BWCE applications to the service mesh? Well, you don't change anything in the apps (refer to my comment above on the beauty of a service mesh). All you need to do is either use the linkerd command line to inject the proxy into the BWCE deployment YAML, or use the provided YAML files, which already contain the annotation linkerd.io/inject: enabled.
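For reference, the annotation goes on the pod template of the Deployment. Here is a minimal sketch; the names and image are illustrative, the real files are in the repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstore-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bookstore-demo
  template:
    metadata:
      labels:
        app: bookstore-demo
      annotations:
        linkerd.io/inject: enabled   # tells Linkerd2 to inject its sidecar proxy
    spec:
      containers:
      - name: bookstore-demo
        image: bookstore-demo:latest
```

Because the annotation sits on the pod template rather than the Deployment itself, Linkerd2's admission webhook sees it on every pod that gets created.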
Now, the great part is that even though I have added this annotation to every app I am deploying, Linkerd2 also provides a way to annotate a namespace so that any apps deployed into it are automatically added to the service mesh.
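For example, the same annotation can be placed on a namespace, so every pod created in it gets the proxy injected without per-app annotations (the namespace name here is just an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookstore
  annotations:
    linkerd.io/inject: enabled   # meshes every pod deployed into this namespace
```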
NOTE:
If you are using the BW.HOST.NAME module property in your HTTP Connector shared resource, you will need to set it to 0.0.0.0 for proxying to work. This is because the system module property binds the HTTP Connector to the pod's hostname, rather than to localhost or 0.0.0.0, which is what service meshes expect.
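If, like in this example, your module properties come from an env-file ConfigMap, the override can live in the same properties file. This is a sketch; how the override reaches the app depends on how your BWCE project resolves module properties:

```
# k8sprofile.properties
# Bind the HTTP Connector to all interfaces so the Linkerd proxy can reach it
BW.HOST.NAME=0.0.0.0
```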
Deploy both apps (after creating the Docker images) using the following commands:

```shell
kubectl apply -f bookstore-demo.yaml
kubectl apply -f consumption/bookstore-consumption.yaml
```
This will deploy both applications and automatically add them to the service mesh.
So what really happens behind the covers? Linkerd2 reads the annotation and adds two sidecar containers: an init container that updates the IP tables in the pod, and a Rust-based proxy that sits between the outside world and your service. Any request coming in to your service passes through the Linkerd proxy, and any outbound request your service makes passes through it as well. Now that your apps are running, use the linkerd CLI:

```shell
linkerd dashboard
```
This will open the Linkerd dashboard, which looks similar to the following screenshots:
The consumption app sends one request per second, so you should see the service performing well. Clicking the little button under the Grafana column will show a predefined Grafana dashboard (powered by Prometheus) for your service.
Clicking on a deployment name will also lay out the topology of the mesh. For example, clicking the bookstore-consumption deployment shows that it is reaching out to the bookstore-demo service:
Linkerd2 also ensures that traffic between the services is automatically upgraded to TLS, even though none of our projects has anything related to SSL configured. Linkerd calls this mTLS. Here's a screenshot of the edges view, which confirms that the services are interacting with mTLS enabled.
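The same information is available from the CLI: `linkerd edges` lists the connections between meshed workloads and whether each one is secured. The namespace here is an assumption based on where the demo apps were deployed:

```shell
# List deployment-to-deployment edges in the default namespace,
# including whether the connection is secured by mTLS
linkerd -n default edges deployment
```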
As you can see, you get metrics and dashboards OOTB without much configuration, plus the added benefit of mTLS. Now that you have a service mesh, you can configure it further with retries, invocation timeouts and much more.
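In Linkerd2, retries and timeouts are declared per route on a ServiceProfile resource. A minimal, hypothetical sketch for the bookstore service might look like this; the route, path and service FQDN are assumptions for illustration:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # ServiceProfiles are named after the service's fully qualified DNS name
  name: bookstore-demo.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books
    isRetryable: true   # let the proxy retry failed requests on this route
    timeout: 300ms      # fail the request if no response arrives in time
```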
We have just scratched the surface of what Linkerd2 can do as a service mesh. Next steps will be to look at Service Profiles and how to use them to create canary releases (or blue-green deployments) using the Traffic Split feature. Hopefully I can try this within a week's time. Until then: keep exploring!