“Look around you. Everything changes. Everything on this earth is in a continuous state of evolving, refining, improving, adapting, enhancing, and changing. You were not put on this earth to remain stagnant.” — Dr. Steve Maraboli
This blog is more a story than just a technical document. It’s the journey we went through as a team here at GlobalVox, taking our microservice deployments from conventional JAR files all the way to Kubernetes.
Summing up all my personal and career experiences, one thing I have surely learnt about the human brain is that its ultimate goal is to seek comfort. This urge to seek comfort comes from our current chaos. Recognising that you are in the middle of chaos is important, because only then will you be tempted towards change, and thus the evolution begins. We found ourselves in the middle of this chaos almost 1.5 years ago, when we deployed an application built on a microservices architecture for the first time. So, the journey went like this,
Manual Deployment of JAR…
Deploying your Java applications from JAR files is the most widely known and conventional method. You just have to follow these steps,
- Create a JAR file of your Java application.
- Upload the JAR file to your server.
- Run this command to execute the JAR file,
- > nohup java -jar filename.jar &
Seems quite straightforward. But now imagine your application is a group of 20 microservices, so every service is an independent Java application. Every time you have to deploy, you need to build a JAR file for every microservice, and you will find yourself firing the JAR deployment command 20 times. Here I am leaving HA (Highly Available) architecture aside, as deploying only 4 JAR files alone was enough to make us realise that we were in the middle of chaos. And that was the important part, realising…
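The repetitive routine above can be sketched as a small shell loop. The service names and the commands in the comments are purely illustrative, not our actual build setup; the point is that the same three steps repeat once per service:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the manual JAR routine, one pass per microservice.
# Service names below are made up for illustration.
services="auth-service order-service inventory-service payment-service"

for svc in $services; do
  echo "Building ${svc}.jar"    # e.g. mvn -pl ${svc} clean package
  echo "Uploading ${svc}.jar"   # e.g. scp target/${svc}.jar user@server:/opt/apps/
  echo "Starting ${svc}"        # e.g. nohup java -jar /opt/apps/${svc}.jar &
done
```

With 20 services (and multiple instances each for HA), the loop body stays trivial but the human running it does not scale.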
Deployment of WAR files with Jenkins…
From our previous experience of chaos, we learnt one thing: if we wanted our APIs always up to date for our precious front-end developers to consume, we needed some automated and faster way to deploy our microservices.
Jenkins is a universally used automation server for building your applications. It saved our DevOps guy from the wrath of continuous deployment, but only to some extent.
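The kind of pipeline we relied on can be sketched as a minimal declarative Jenkinsfile. The stage names, the Maven invocation, and the Tomcat path are all assumptions for illustration, not our exact configuration:

```groovy
// Minimal declarative Jenkinsfile sketch: build the WAR, copy it to Tomcat.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // produces the WAR under target/
            }
        }
        stage('Deploy') {
            steps {
                // Hypothetical deploy step: drop the WAR into Tomcat's webapps
                sh 'cp target/*.war /opt/tomcat/webapps/'
            }
        }
    }
}
```

One such pipeline per microservice automates the build-and-copy steps, but the pipelines still have to be triggered and babysat per service and per instance.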
Evolution doesn’t mean the end of the problems; it means the previous problems have been sorted out and now you are ready to face new ones…
We entered the phase of load/performance testing, where an HA architecture is a must-have. Every microservice was going to have at least one Twin, and the heavier ones were supposed to have 4 or 5… We had 12 services to deploy before running any performance test. I found myself pushing the DevOps guy to finish the deployment quickly, but no matter how fast his motor nerves worked, a deployment always lasted an hour. Somehow, we were bearing with that.
Then came another challenge: the Windows server. We had always been in a committed relationship with Linux servers, but found things a little complicated with Windows Server. We found ourselves struggling with OS-specific issues, and we were not able to move our multi-instance Tomcat architecture and middleware directly over from the Linux server. That meant every deployment and middleware installation we were doing on the Linux server had to be done separately on the Windows server, which was again extra time consumed.
And here came the second realisation: the deployment process should be platform independent.
Deployment in Docker Container…
“Docker is a set of Platform as a Service products”, which means it will create a complete, self-sufficient package of your application. For example, if you have a Java web application, you need the following things to deploy it,
- WAR file of your application
- A properties file (optional)
Docker will create a “Docker image” out of these. This image can then be deployed in any Docker container irrespective of the host machine’s OS, because Docker containers use OS-level virtualization. Docker containers can communicate with each other, so we started keeping one container for the database and another for our application.
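As a sketch, a Dockerfile for such a WAR-plus-properties package could look like the following. The base-image tag, file names, and paths here are assumptions for illustration, not our actual configuration:

```dockerfile
# Hypothetical Dockerfile: package a WAR on top of an official Tomcat image.
FROM tomcat:9-jdk11
# Drop the application WAR (and the optional properties file) into the image
COPY target/app.war /usr/local/tomcat/webapps/ROOT.war
COPY application.properties /usr/local/tomcat/conf/
EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Once built, the same image runs unchanged on a Linux or a Windows host with Docker installed, which is exactly what we were missing in the WAR-plus-Jenkins setup.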
This helped us improve our relationship with Windows servers. But setting up HA was still the most time-consuming part; even leaving HA aside, deploying multiple instances of the same application was still a set of repetitive steps. As I mentioned earlier, the ultimate goal of the human brain is to seek comfort, and we were feeling exactly that.
Container Orchestration with Kubernetes…
Kubernetes and Docker go very well together, as Kubernetes provides orchestration of the containers in a clustered manner. Kubernetes is more or less a helping hand for managing your Docker containers: you can manage them and even keep an eye on the resource consumption of each container.
Kubernetes helped us in various ways,
- It helped us manage multiple instances of the same application (for example, Dev, Test, UAT, and Prod). Setting up a whole instance from scratch used to be a laborious task. Now all we need to do is take the Docker image and create a new Pod; we can assign the required RAM, CPU and storage with minimal effort, and also monitor those resources. It cut our instance setup time by 50%.
- It helped us manage the Twins of our applications. Kubernetes can create and manage replica sets of containers, meaning it scales containers up according to the workload, which we had previously been doing manually.
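A minimal Deployment manifest covering both points above (replicas for the Twins, plus resource requests and limits per container) might look like this. The service name, image reference, and numbers are purely illustrative:

```yaml
# Hypothetical Deployment sketch: replicas plus per-container resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service            # illustrative service name
spec:
  replicas: 3                    # the "Twins", managed by Kubernetes instead of by hand
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: registry.example.com/order-service:1.0   # assumed image location
        resources:
          requests:              # guaranteed RAM/CPU for scheduling
            memory: "512Mi"
            cpu: "250m"
          limits:                # hard ceiling, enforced by Kubernetes
            memory: "1Gi"
            cpu: "500m"
```

Changing `replicas` (or attaching a HorizontalPodAutoscaler) replaces the manual scale-up we used to do before every performance test.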
MicroK8s helped us run Kubernetes clusters locally. There is one more option available, Minikube, and both have their own pros and cons. Minikube works by spinning up a new virtual machine, which is why it consumes more space, but the VM is transferable across operating systems; MicroK8s uses application-package isolation and therefore consumes less space. In our case container orchestration was more important, so we went with MicroK8s, but we will soon be using Minikube as well.
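For reference, getting a local MicroK8s cluster running on Ubuntu is only a handful of snap commands. This is a sketch of the standard setup from the MicroK8s docs, shown here as a CLI fragment rather than something we scripted (available addon names vary between MicroK8s versions):

```shell
# Install MicroK8s via snap and wait for the cluster to come up
sudo snap install microk8s --classic
microk8s status --wait-ready

# Enable in-cluster DNS (other addons can be enabled the same way)
microk8s enable dns

# MicroK8s bundles its own kubectl
microk8s kubectl get nodes
```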
The journey doesn’t end here; this was just a beginning. You will soon find us exploring Terraform, because “What’s dangerous is not to evolve.”