DevOps

We build solutions that can be deployed with ease and scale to handle high transaction volumes, using DevOps practices such as Docker containerization and orchestration and scaling through Kubernetes, thereby ensuring robust, scalable, highly available applications. Our expertise covers both native Kubernetes implementations and implementations using Rancher. Our solutions help organizations achieve application scalability, continuous integration, ease of deployment, and version control of all artifacts.

Case Study 1 - IoT Platform - Native Kubernetes Implementation

 

This native implementation was done for the IoT Platform, a cloud-based solution. The platform is a centralised system that provides IoT device integration and management, data visualization, analytics, alerting, OTA updates, telematics services, and smart metering services.

IoT Platform components are deployed as Docker containers that are managed and orchestrated by upstream Kubernetes. The platform is configured to run one master node and two worker nodes.
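
The case study does not state how the upstream cluster itself was bootstrapped; the sketch below assumes kubeadm, with the control-plane endpoint, Kubernetes version, and pod subnet as placeholder values rather than the project's actual settings.

```yaml
# Hypothetical kubeadm configuration for a one-master, two-worker upstream cluster.
# Applied on the master node with: kubeadm init --config cluster-config.yaml
# The two worker nodes then join using the "kubeadm join" command printed by init.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0                                # placeholder version
controlPlaneEndpoint: "iot-master.example.internal:6443"  # placeholder endpoint
networking:
  podSubnet: "10.244.0.0/16"                              # placeholder pod CIDR
```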

Each component (microservices, Kafka, MQTT…) is containerized by building a Docker image for that component. Docker images are built with a continuous integration tool, Jenkins, and uploaded to a Nexus repository. The same images are then pulled from the Nexus repository to create the Docker containers on the worker nodes.
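
A minimal sketch of how such a Jenkins-built image hosted in Nexus might be referenced from a Kubernetes workload; the registry host, image name and tag, service name, and pull-secret name are illustrative assumptions, not the actual project values.

```yaml
# Hypothetical Deployment pulling a Jenkins-built image from a private Nexus registry.
# A docker-registry secret (here "nexus-registry-credentials") is assumed to exist,
# e.g. created with: kubectl create secret docker-registry nexus-registry-credentials ...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: device-service                 # example microservice name (assumed)
spec:
  replicas: 1
  selector:
    matchLabels:
      app: device-service
  template:
    metadata:
      labels:
        app: device-service
    spec:
      imagePullSecrets:
        - name: nexus-registry-credentials
      containers:
        - name: device-service
          image: nexus.example.internal:8082/iot-platform/device-service:1.0.0
          ports:
            - containerPort: 8080
```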

For high availability and scalability, Kubernetes is used for container orchestration, thereby managing these components. Containers in Kubernetes are scheduled as pods. These worker pods are managed and controlled from the master node using YAML configurations, and with the same YAML configurations the IoT Platform can be replicated in any environment. Heavy modules (Kafka, MongoDB, MySQL) are segregated into separate pods, while the lightweight modules (three microservices) are combined into a single pod.
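
The sketch below illustrates that segregation under assumed names and images: a heavy module running in its own pod versus three lightweight microservices sharing one pod. In practice a heavy, stateful module such as MongoDB would typically be a StatefulSet with persistent storage; a plain Deployment is shown here only for brevity.

```yaml
# Heavy module in its own pod (one container per pod).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0            # example image tag
          ports:
            - containerPort: 27017
---
# Three lightweight microservices packed into a single pod (names and images assumed).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lightweight-services
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lightweight-services
  template:
    metadata:
      labels:
        app: lightweight-services
    spec:
      containers:
        - name: alert-service
          image: nexus.example.internal:8082/iot-platform/alert-service:1.0.0
        - name: ota-service
          image: nexus.example.internal:8082/iot-platform/ota-service:1.0.0
        - name: telemetry-service
          image: nexus.example.internal:8082/iot-platform/telemetry-service:1.0.0
```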

The Kubernetes Deployment controller continuously monitors the application instances running in the containers. If the node hosting an instance goes down, or the instance is deleted, the Deployment controller replaces it with an instance on another node in the cluster.

The Grafana Kubernetes App allows system administrators to monitor the Kubernetes cluster's performance.

Case Study 2 - European Telecom customer - Rancher Implementation

This Rancher implementation was done for a European telecom customer for their 5G product.

The implementation uses Rancher for a highly available and scalable solution, along with a Jenkins pipeline for easy, automated deployments.

The Rancher implementation provides integrated tools for running containerized Docker workloads.

The Rancher cluster is configured to run three master nodes and seven worker nodes; the master nodes are also configured to run as worker nodes.

The Rancher cluster is configured to use the RKE custom cluster setup with an NGINX load balancer.
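
A minimal RKE cluster.yml sketch reflecting that topology is shown below. The node addresses, SSH user, and cluster name are placeholders, and the NGINX entry assumes the load balancer in question is the NGINX ingress that RKE deploys (an external NGINX in front of the control plane would be configured outside this file).

```yaml
# Hypothetical RKE cluster.yml: three masters that also schedule workloads,
# plus dedicated workers (only two of the seven workers shown).
cluster_name: telecom-5g
nodes:
  - address: 10.0.0.11              # master 1 (also acts as a worker)
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 10.0.0.12              # master 2
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 10.0.0.13              # master 3
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 10.0.0.21              # worker 1 (remaining workers follow the same pattern)
    user: rancher
    role: [worker]
  - address: 10.0.0.22              # worker 2
    user: rancher
    role: [worker]
ingress:
  provider: nginx                   # NGINX ingress deployed by RKE
```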

As in the IoT Platform implementation, each component (microservices, GUI, MongoDB…) is containerized by building a Docker image for that component. Docker images are built with Jenkins and uploaded to a Nexus repository, from which they are pulled to create the Docker containers on the worker nodes.

Each component (microservices, GUI, MongoDB…) is deployed across two nodes with one replica on each node. The GUI is deployed in an NGINX server configured in a pod. The load balancer is configured to forward requests to the respective pods in the cluster, distributing the load between the pods. The number of replicas of a pod can be scaled up or down based on the load on that pod.
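
A sketch of one such component under assumed names and images: two replicas kept on different nodes via pod anti-affinity, fronted by a Service that the load balancer can target. Scaling up or down would then be a matter of changing the replica count (for example with kubectl scale or a HorizontalPodAutoscaler).

```yaml
# Illustrative "two nodes, one replica each" layout for the GUI component.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gui
spec:
  replicas: 2                       # e.g. kubectl scale deployment gui --replicas=<n>
  selector:
    matchLabels:
      app: gui
  template:
    metadata:
      labels:
        app: gui
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: gui
              topologyKey: kubernetes.io/hostname   # keep the replicas on different nodes
      containers:
        - name: gui
          image: nexus.example.internal:8082/5g-product/gui:1.0.0   # NGINX-based GUI image (assumed)
          ports:
            - containerPort: 80
---
# Service that lets the load balancer distribute requests across the GUI pods.
apiVersion: v1
kind: Service
metadata:
  name: gui
spec:
  selector:
    app: gui
  ports:
    - port: 80
      targetPort: 80
```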

The Rancher UI allows system administrators to monitor the Rancher cluster's performance.