End to End Mega Project

With DevSecOps, GitOps And Monitoring

Introduction

The project is a complete DevOps pipeline designed to automate and optimize the software development lifecycle, from code integration to deployment and monitoring. It incorporates cutting-edge tools and technologies to implement robust Continuous Integration (CI) and Continuous Deployment (CD) processes. GitHub serves as the source code repository, while Jenkins orchestrates the CI/CD workflows. Code quality and security are ensured using SonarQube and Trivy, respectively, while Docker manages containerization. ArgoCD handles deployment to Kubernetes, enabling efficient application delivery to production environments. Real-time system monitoring is achieved using Prometheus and Grafana, with email notifications providing timely updates on pipeline statuses. This pipeline ensures secure, scalable, and efficient application deployment with a focus on automation and reliability.

Prerequisites:

  1. Docker Hub Account

  2. GitHub Account

  3. AWS Account

Step 1: Create a master machine using a Terraform script

  • Create one master machine on AWS with 2 vCPUs, 8 GB of RAM (t2.large) and 30 GB of storage, and install Docker on it.

  • Open the required ports in the security group of the master machine, and attach the same security group to the Jenkins worker node.

  • Create an IAM role for EC2 with administrator access and attach the role to this master machine.

  • Install and configure Docker using the commands below; "newgrp docker" refreshes the group membership, so there is no need to restart the EC2 machine.

sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker ubuntu && newgrp docker    # or: sudo chmod 777 /var/run/docker.sock
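The step title refers to a Terraform script. As a sketch only, assuming a main.tf in the current directory that defines the t2.large instance, its 30 GB volume and the security group (the file name and layout are assumptions), the usual Terraform workflow is:

```shell
# Initialize the working directory and download the AWS provider
terraform init

# Preview the resources that will be created
terraform plan

# Create the master machine; -auto-approve skips the confirmation prompt
terraform apply -auto-approve
```

Running terraform destroy later removes everything the script created.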

  • Install and configure Jenkins (Master machine)

      sudo apt update -y
      sudo apt install fontconfig openjdk-17-jre -y
    
      sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
        https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
    
      echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
        https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
        /etc/apt/sources.list.d/jenkins.list > /dev/null
    
      sudo apt-get update -y
      sudo apt-get install jenkins -y
    

Now, access the Jenkins Master in the browser on port 8080 and configure it:

http://3.110.212.101:8080/
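The setup wizard asks for an initial admin password on first login; for a standard Jenkins package install it can be read on the master machine:

```shell
# Print the one-time initial admin password for the Jenkins setup wizard
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```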

Step 2: Create an EKS Cluster on AWS (Master machine)

  • Create an IAM user with an access key and secret access key

  • AWS CLI should be configured (Master machine)

      curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      sudo apt install unzip
      unzip awscliv2.zip
      sudo ./aws/install
      aws configure
    

  • Install kubectl (Master machine); ideally use a kubectl version close to your cluster version (the cluster created below is 1.30, while this binary is 1.19, so update the URL to a matching release)

      curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
      chmod +x ./kubectl
      sudo mv ./kubectl /usr/local/bin
      kubectl version --short --client
    
  • Install eksctl (Master machine)

      curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
      sudo mv /tmp/eksctl /usr/local/bin
      eksctl version
    

  • Create EKS Cluster (Master machine)

      eksctl create cluster --name=wanderlust \
                          --region=ap-south-1 \
                          --version=1.30 \
                          --without-nodegroup
    

  • Associate IAM OIDC Provider (Master machine)

      eksctl utils associate-iam-oidc-provider \
        --region ap-south-1 \
        --cluster wanderlust \
        --approve
    

  • Create Nodegroup (Master machine)

      eksctl create nodegroup --cluster=wanderlust \
                           --region=ap-south-1 \
                           --name=wanderlust \
                           --node-type=t2.large \
                           --nodes=2 \
                           --nodes-min=2 \
                           --nodes-max=2 \
                           --node-volume-size=30 \
                           --ssh-access \
                           --ssh-public-key=eks-nodegroup-key
    

Make sure the SSH public key "eks-nodegroup-key" is available in your AWS account.
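If the key pair does not exist yet, it can be created with the AWS CLI (key name and region taken from the steps above; the local file name is just a choice):

```shell
# Create an EC2 key pair named eks-nodegroup-key and save the private key locally
aws ec2 create-key-pair \
  --key-name eks-nodegroup-key \
  --region ap-south-1 \
  --query 'KeyMaterial' \
  --output text > eks-nodegroup-key.pem

# Restrict permissions so SSH will accept the key
chmod 400 eks-nodegroup-key.pem
```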

Step 3: Install Trivy, SonarQube & OWASP on the master machine

  • Install and configure SonarQube (Master machine)

      docker run -itd --name SonarQube-Server -p 9000:9000 sonarqube:lts-community

  • Install Trivy

      sudo apt-get install wget apt-transport-https gnupg lsb-release -y
      wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
      echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
      sudo apt-get update -y
      sudo apt-get install trivy -y
    
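Once installed, Trivy can be smoke-tested by scanning any local image, for example the SonarQube image pulled above (the severity filter is optional):

```shell
# Scan the SonarQube image, reporting only HIGH and CRITICAL vulnerabilities
trivy image --severity HIGH,CRITICAL sonarqube:lts-community
```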

    • Go to the Jenkins Master and click on Manage Jenkins --> Plugins --> Available plugins, and install the below plugins:

      • OWASP Dependency-Check

      • SonarQube Scanner

      • Docker

      • Pipeline: Stage View

  • After the OWASP Dependency-Check plugin is installed, move to Manage Jenkins --> Tools

  • Login to the SonarQube server and create a token for Jenkins to integrate with SonarQube

    Navigate to Administration --> Security --> Users --> Tokens

  • Now, go to Manage Jenkins --> Credentials and add the SonarQube credentials.

  • Go to Manage Jenkins --> Tools and search for SonarQube Scanner installations.

  • Go to Manage Jenkins --> Credentials and add GitHub credentials so the pipeline can push updated code. Use a Personal Access Token in the password field; generate it from your GitHub profile --> Settings --> Developer settings.

  • Go to Manage Jenkins --> System and search for SonarQube installations.

  • Now again, go to Manage Jenkins --> System and search for Global Trusted Pipeline Libraries:

  • Login to the SonarQube server, go to Administration --> Configuration --> Webhooks and click on Create

  • Now, go to the GitHub repository and, under the Automations directory, update the instance-id field in both updatefrontendnew.sh and updatebackendnew.sh with the k8s worker node's instance ID

  • Navigate to your DockerHub account: Settings --> Personal Access Token, and create an access token

  • Navigate to Manage Jenkins --> Credentials and add credentials for Docker login, so the pipeline can push the Docker image:

Step 4: Add email notification

  • Now, we need to generate an app password from our Gmail account to authenticate with Jenkins

    • Open Gmail and go to Manage your Google Account --> Security

    • Make sure 2-Step Verification is turned on

    • Search for App passwords and create an app password for Jenkins

    • Once the app password is created, go back to Jenkins: Manage Jenkins --> Credentials, and add the username and app password for email notification

    • Go back to Manage Jenkins --> System and search for Extended E-mail Notification

    • Scroll down, search for E-mail Notification and set up email notification

    • Enter the app password we copied recently in the password field under E-mail Notification --> Advanced

Step 5: Install and Configure ArgoCD (Master Machine)

  • Create the argocd namespace

      kubectl create namespace argocd

  • Apply the Argo CD manifest

      kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

  • Make sure all pods are running in the argocd namespace

      watch kubectl get pods -n argocd

  • Install the Argo CD CLI

      sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64

  • Provide executable permission

      sudo chmod +x /usr/local/bin/argocd

  • Check the Argo CD services

      kubectl get svc -n argocd

  • Change the argocd-server service from ClusterIP to NodePort

      kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'

  • Confirm the service is patched

      kubectl get svc -n argocd
  • Check the port where the Argo CD server is running and open it in the security group of a worker node

  • Access it in the browser, click on Advanced and proceed:

<public-ip-worker>:<port>
https://13.201.34.33:30420

  • Fetch the initial password of the Argo CD server (Master machine)

      kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

  • Username: admin

  • Now, go to User Info and update your Argo CD password

  • Go to Settings --> Repositories and click on Connect repo

Connection should be successful

  • Go to the Master Machine and add our own EKS cluster to Argo CD for application deployment, using the CLI

    • Login to ArgoCD from the CLI

        argocd login 13.201.34.33:30420 --username admin

      13.201.34.33:30420 --> this should be your Argo CD URL
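As an alternative to the User Info page, the admin password can also be rotated from the CLI once logged in:

```shell
# Change the password of the currently logged-in Argo CD account
# (prompts for the current and the new password)
argocd account update-password
```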

  • Check how many clusters are available in argocd
argocd cluster list

  • Get your cluster name
kubectl config get-contexts

  • Add your cluster to argocd
argocd cluster add Plyaground@wanderlust.ap-south-1.eksctl.io --name wanderlust-eks-cluster

Plyaground@wanderlust.ap-south-1.eksctl.io --> this should be your EKS cluster context name from the previous command.

Once your cluster is added to argocd, go to argocd console Settings --> Clusters and verify it

Connection should be successful

  • Now, go to Applications and click on New App

    Make sure to click on the Auto-Create Namespace option while creating the Argo CD application

    • Open ports 31000 and 31100 on the worker node and access the application in the browser

Step 6: Create a Wanderlust CI Pipeline

cd pipeline
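The actual Jenkinsfile lives in the pipeline directory of the repository. As a rough sketch only, the CI stages described in the Summary boil down to commands like these (image names, project key and the placeholders in angle brackets are assumptions, not values from the repo):

```shell
# Filesystem/dependency vulnerability scan of the checked-out code (Trivy)
trivy fs .

# Static code analysis (SonarQube; project key and host are placeholders)
sonar-scanner -Dsonar.projectKey=wanderlust -Dsonar.host.url=http://<master-ip>:9000 -Dsonar.login=<sonar-token>

# Build and push the frontend image (repository name and tag are assumptions)
docker build -t <dockerhub-user>/wanderlust-frontend:v1 ./frontend
docker push <dockerhub-user>/wanderlust-frontend:v1
```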

Wanderlust-CI Pipeline Output

Wanderlust-CD Pipeline Output

Email Notification After Application is Deployed Successfully

Access application on cluster node public ip on port 31000

Step 7: Monitor the EKS cluster, Kubernetes components and workloads using Prometheus and Grafana via Helm (On Master machine)

  • Install Helm

      curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
      chmod 700 get_helm.sh
      ./get_helm.sh

  • Add the stable Helm charts to your local client

      helm repo add stable https://charts.helm.sh/stable

  • Add the Prometheus Helm repository

      helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

  • Create the prometheus namespace

      kubectl create namespace prometheus
      kubectl get ns

  • Install Prometheus using Helm

      helm install stable prometheus-community/kube-prometheus-stack -n prometheus

  • Verify the Prometheus installation

      kubectl get pods -n prometheus

  • Check the services (svc) of Prometheus

      kubectl get svc -n prometheus

  • Expose Prometheus and Grafana to the external world through NodePort
  • Expose Prometheus and Grafana to the external world through Node Port

Important

Change the service type from ClusterIP to NodePort. After changing it, make sure you save the file, and open the assigned NodePort of the service in the security group.

kubectl edit svc stable-kube-prometheus-sta-prometheus -n prometheus
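If you prefer a non-interactive change over kubectl edit, the same switch to NodePort can be done with kubectl patch, mirroring the Argo CD step earlier:

```shell
# Patch the Prometheus service type from ClusterIP to NodePort
kubectl patch svc stable-kube-prometheus-sta-prometheus -n prometheus -p '{"spec": {"type": "NodePort"}}'
```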

  • Verify service
kubectl get svc -n prometheus

Access your Prometheus here : http://13.201.34.33:31177/query

  • Now, let's change the svc of Grafana and expose it to the outer world
kubectl edit svc stable-grafana -n prometheus

  • Check Grafana service
kubectl get svc -n prometheus

  • Open the ports in the cluster node's security group for both Prometheus and Grafana, so they can be accessed externally

  • Get the password for Grafana

      kubectl get secret --namespace prometheus stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Username: admin

Now, view the Dashboard in Grafana

http://13.201.34.33:31032/login

Clean Up

Delete the EKS cluster

eksctl delete cluster --name=wanderlust --region=ap-south-1

Summary

This DevOps pipeline integrates key tools and processes to automate the software development lifecycle efficiently. The pipeline begins with developers pushing code to GitHub, triggering a Jenkins CI job that performs dependency checks (OWASP), code quality analysis (SonarQube), and Docker image security scans (Trivy). The CI process builds Docker containers for deployment.

In the CD pipeline, Jenkins updates the Docker image version and pushes it to GitHub, where ArgoCD retrieves it for deployment to a Kubernetes cluster. Real-time monitoring is handled by Prometheus and Grafana, ensuring system health and performance. Email notifications inform stakeholders of pipeline events and issues. This workflow ensures secure, scalable, and automated application delivery, adhering to modern DevOps best practices.

Done!!

A Heartfelt Thanks to My Readers, Connections, and Supporters!

Let’s keep the momentum going – stay connected, stay inspired! 💡🚀

Stay tuned for my next blog. I will keep sharing my learnings and knowledge here with you.

Let's learn together! I appreciate any comments or suggestions you may have to improve my blog content.

Happy Learning !!!

Thank you,

Neha Bawane