Integrating Cassandra, Prometheus, & Grafana with AWS-EKS!
This blog explains how to deploy Cassandra on AWS-EKS, monitor the cluster with Prometheus, and visualize the cluster metrics with Grafana! The same integration can also be used for very large deployments.
We all know that machines can fail, but what if we are providing a very critical service to our customers and a failure happens? Such an outage leaves a bad impression on the customers who want to use our services, and how long it takes to bring the services back online matters just as much.
In this Agile world, minimizing the time it takes to handle a failover is the key to keeping any business or service running. We can handle it manually, but that would be a very slow process.
There is a better alternative to this problem: use container technology together with a container orchestration tool. Some examples of container technologies are Docker, Podman, CRI-O, rkt, etc. Some examples of container orchestration tools are Kubernetes, OpenShift, etc.
In this article, I will explain how to deploy the amazing Cassandra database on AWS-EKS, which is a managed Kubernetes service. With this service, we only have to take care of our own workload; the Kubernetes master nodes are managed automatically by AWS-EKS.
Note: Anybody willing to do the same practical should keep in mind that this service is not supported by the AWS Educate account & it is a chargeable service.
Pre-requisite software for doing this practical
- AWS CLI
- kubectl package
- eksctl package
Flow of the complete Integration
- EKS Cluster creation.
- Updating the kubectl config file with the EKS cluster information.
- Creating a Cassandra deployment.
- Installing “Helm” software, which acts as a package manager for Kubernetes.
- Installing “Prometheus” using Helm.
- Installing “Grafana” using Helm.
EKS Cluster Creation!
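A minimal eksctl cluster-configuration script for this setup could look like the sketch below; the node-group name, instance type, node count, and SSH key name are illustrative assumptions, while the cluster name and region match the example used later in this blog.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-tw-cluster        # same cluster name used later in this blog
  region: ap-south-1          # same region used later in this blog

nodeGroups:
  - name: ng-1                # illustrative node-group name
    instanceType: t3.medium   # illustrative instance type
    desiredCapacity: 2        # illustrative node count
    ssh:
      publicKeyName: <your key pair name>   # needed later to SSH into the worker nodes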
In order to run the above script, run the command “eksctl create cluster -f <filename in which this script is stored>”.
Updating kubectl config File!
To update the kubectl config file, first configure your AWS CLI, then run the command given below!
aws eks update-kubeconfig --name <eks cluster name> --region <region name in which the cluster is running>
For example:
aws eks update-kubeconfig --name eks-tw-cluster --region ap-south-1
Creating a Cassandra deployment
This is a longer process with multiple steps, because EFS is used in this part to provide storage that supports being mounted by multiple nodes at the same time.
The very first step in this part is to enable EFS support on the worker nodes so that they can use EFS as shared, efficient network storage. To do that, we have to install one package on each worker Kubernetes node. Log in to the worker nodes one by one using SSH (or any other method) and run the command given below to install the EFS support software on them.
sudo yum install amazon-efs-utils
Now, we have to create an EFS provisioner to use the EFS storage, because such a provisioner is not available in the EKS cluster by default.
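A minimal sketch of such a provisioner, assuming the commonly used efs-provisioner image from the Kubernetes external-storage project, is shown below; the EFS file-system ID, region, and provisioner name are placeholders that have to be replaced with your own EFS details.
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-xxxxxxxx              # placeholder: your EFS file-system ID
  aws.region: ap-south-1                   # region in which the EFS file system exists
  provisioner.name: example.com/aws-efs    # placeholder: any unique provisioner name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-xxxxxxxx.efs.ap-south-1.amazonaws.com   # placeholder: your EFS DNS name
            path: /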
Now, we have to bind a cluster-admin role so that all the required operations with EFS can be performed.
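As an illustration, assuming the provisioner runs under the default service account in the default namespace, the binding could be created with the command below (the binding name is made up for this example).
kubectl create clusterrolebinding efs-provisioner-admin --clusterrole=cluster-admin --serviceaccount=default:default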
Now, in the last step, we have to create the EFS Storage.
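A minimal sketch of this step is a StorageClass that points at the provisioner name configured above, plus a PersistentVolumeClaim that Cassandra can use; the names and the requested size below are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-efs
provisioner: example.com/aws-efs        # must match provisioner.name from the ConfigMap above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cassandra-data                  # illustrative claim name, referenced by the Cassandra deployment below
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany                     # EFS supports mounting from multiple nodes
  resources:
    requests:
      storage: 10Gi                     # illustrative size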
At last, we can create a Cassandra deployment.
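As a rough sketch (not the exact manifest used in this blog), a simple Cassandra deployment that stores its data on the EFS-backed claim from the previous step, together with a service exposing the CQL port, might look like this; the image tag and replica count are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cassandra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11          # assumed image tag
          ports:
            - containerPort: 9042        # CQL port
          volumeMounts:
            - name: cassandra-data
              mountPath: /var/lib/cassandra
      volumes:
        - name: cassandra-data
          persistentVolumeClaim:
            claimName: cassandra-data    # the EFS-backed claim created above
---
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  selector:
    app: cassandra
  ports:
    - port: 9042
      targetPort: 9042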
Installing Helm software!
To install this software, visit the link given below, & install it according to the OS you are using.
After installing Helm, we first have to initialize it by running the command “helm init”. Then we have to add a repository address from which the packages (known as charts in the Kubernetes world) can be downloaded & installed.
Run the command given below for the same.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
To use Helm properly, we also have to configure “Tiller”, which is the server-side component of Helm.
To configure the server properly run the below commands:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
Installing Prometheus using Helm!
For better management, create a different namespace for Prometheus by running the command “kubectl create ns prometheus”.
Now, to install the Prometheus software which is used for monitoring, run the command given below.
helm install prometheus stable/prometheus --namespace prometheus
To access Prometheus from the local machine, run the following command, and then open http://localhost:8888 in the browser.
kubectl -n prometheus port-forward svc/prometheus-server 8888:80
Installing Grafana using Helm
For better management, create a different namespace for Grafana by running the command “kubectl create ns grafana”.
Now, to install the Grafana software, which is used for visualization, run the command given below. In the below command, we have set the admin password to “123456” and asked for 1 LoadBalancer for Grafana.
helm install grafana stable/grafana --namespace=grafana --set adminPassword=123456 --set service.type=LoadBalancer
As a result of this command, in the present scenario, a Load Balancer will be created in AWS using the AWS-ELB service. Once it is created and connected, an endpoint will be provided; use that endpoint to connect to the Grafana tool, which helps visualize the information collected by Prometheus.
After accessing Grafana through the ELB endpoint, you will land on the login page. Now log in using the username “admin” & the password “123456” which we initialized above.
Now, add Prometheus as the data source, using the address of the prometheus-server service obtained by running the command “kubectl get svc -n prometheus”. Then import any available dashboard.
Finally, you will see the output of your cluster usage in real time.
The code used in this blog is present in my organization's repository; if anybody wants to use the code, click on the link given below.
I hope my article explains everything related to the integration of these multiple pieces of software, along with the explanation & execution of the code. Thank you so much for investing your time in reading my blog & boosting your knowledge!