Introduction
Blue-Green Deployment is a technique in which two identical environments are maintained. The active environment, which serves live traffic, is called the blue environment, and the idle environment is called the green environment. The new version of the software is deployed to the green environment, and once it has been tested and verified to work correctly, traffic is shifted from the blue environment to the green environment. This approach ensures zero downtime during deployment and provides a quick, easy way to roll back if something goes wrong.
Kubernetes is a popular container orchestration platform that provides various deployment strategies, including Blue-Green Deployment. In this blog post, we will explore how to perform Blue-Green Deployment using Kubernetes.
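Before wiring up the tooling, the core idea can be captured in a few lines. The following Python sketch (purely illustrative, not part of the deployment) models the router as a pointer that flips between two identical environments, which is why both the traffic switch and the rollback are near-instant:

```python
# Minimal model of blue-green switching: a "router" holds a pointer to the
# live environment; deploying updates the idle one, and switching is a single
# pointer flip, which is why rollback is equally fast.

class BlueGreenRouter:
    def __init__(self):
        self.versions = {"blue": "v1", "green": None}  # blue starts live
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # New releases always go to the idle environment first.
        self.versions[self.idle] = version
        return self.idle

    def switch(self):
        # A traffic switch (or rollback) is just flipping the pointer.
        self.live = self.idle

router = BlueGreenRouter()
target = router.deploy("v2")  # v2 lands in the idle (green) environment
print(router.live)            # still "blue": zero downtime so far
router.switch()
print(router.live)            # "green" now serves traffic
router.switch()               # instant rollback to blue if needed
print(router.live)            # back to "blue"
```

In Kubernetes the "pointer" is a Service's label selector, which is exactly what the pipeline later in this post patches.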
Tools Used
Jenkins
Docker
Git
SonarQube
Trivy
Nexus
Kubernetes Cluster
kubectl command-line tool
Step 1: Provision an EC2 instance with the following configuration.
AMI: Ubuntu
Instance type: t2.medium
Security group: make sure that ports 22, 80, 443, 500-1000, and 1000-11000 are open
Storage: 20 GiB
- Connect to the virtual machine using an SSH client
ssh -i "key.pem" ubuntu@ec2-public_ip.ap-south-1.compute.amazonaws.com
- Update the virtual server
sudo apt-get update
Step 2: Establish a connection with the AWS account
- Install AWS CLI on the virtual server
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt-get install unzip
unzip awscliv2.zip
sudo ./aws/install
- Create an IAM user, generate an access key and secret access key, and configure them on the virtual server with aws configure.
Step 3: Install Terraform on the virtual machine
sudo snap install terraform --classic
Step 4: Clone the git repository on the virtual machine
git clone https://github.com/msnehabawane/Blue-Green-Deployment.git
cd Blue-Green-Deployment
Step 5: Create EKS cluster using terraform script
- Navigate to the Cluster directory where the Terraform configuration files for provisioning and managing the infrastructure of our cluster are located.
cd Cluster
terraform init
terraform plan
terraform apply
- Install kubectl
sudo snap install kubectl --classic
- Update the kubeconfig so kubectl can reach the new EKS cluster
aws eks --region ap-south-1 update-kubeconfig --name devopsshack-cluster
Step 6: Create 3 servers for Jenkins, SonarQube and Nexus
- Install and configure Jenkins on the Jenkins virtual machine that was provisioned earlier
sudo apt update
sudo apt install openjdk-17-jre-headless -y
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins
- Install docker on Jenkins virtual server
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins   # restart so the new group membership takes effect
- Install trivy on Jenkins server
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
- Install kubectl on the Jenkins server
sudo snap install kubectl --classic
- Access the Jenkins console at http://13.201.16.251:8080
- Retrieve the default administrator password by executing the following command on the server:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
- Setup the Nexus server
Install Docker:
sudo apt update
sudo apt install docker.io -y
sudo usermod -aG docker ubuntu
newgrp docker
Run the Docker container for Nexus:
docker run -d -p 8081:8081 sonatype/nexus3
We can verify with docker ps that the Nexus container is running successfully.
Access the Nexus repository manager via the URL public_ip:8081, e.g.
http://13.201.118.45:8081
The Nexus server is up and running.
Click on the 'Sign In' button to authenticate. The username is 'admin'; the initial password has to be read from inside the container:
docker exec -it 8f83e8200176 /bin/bash
cat /nexus-data/admin.password
da52841a-af7b-4842-8b6d-81f16fa7a9f1
Update the password when prompted.
The Nexus server has been successfully configured and is fully operational.
Setup SonarQube server
sudo apt update
sudo apt install docker.io -y
sudo usermod -aG docker ubuntu
newgrp docker
docker run -d -p 9000:9000 sonarqube:lts-community
The SonarQube container is now running.
Access the SonarQube server via public_ip:9000, e.g.
http://13.232.246.183:9000
SonarQube server is up and running
The default credentials for the SonarQube server are 'admin' for both the username and password.
The SonarQube server has been successfully deployed and configured
Step 7: Create a Service Account, create a Role and bind it to the Service Account, then create a Secret for the Service Account and generate a token
Create namespace
kubectl create ns webapps
Creating Service Account
vi sa.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
kubectl apply -f sa.yml
Create role
vi role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups:
      - ""
      - apps
      - autoscaling
      - batch
      - extensions
      - policy
      - rbac.authorization.k8s.io
    resources:
      - pods
      - secrets
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingress
      - jobs
      - limitranges
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
kubectl apply -f role.yml
Bind the role to service account (role binding)
vi rolebind.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - namespace: webapps
    kind: ServiceAccount
    name: jenkins
kubectl apply -f rolebind.yml
Generate token using service account in the namespace
vi sec.yml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: jenkins
kubectl apply -f sec.yml -n webapps
Retrieve the token from the secret
kubectl describe secret mysecretname -n webapps
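Note that kubectl describe secret prints the token already decoded; if you instead fetch it with kubectl get secret mysecretname -n webapps -o jsonpath='{.data.token}', the value comes back base64-encoded, as all Secret data is. A minimal Python sketch of that decoding step, using a made-up placeholder token:

```python
import base64

# Hypothetical token value; a real one would be a long JWT issued for the
# 'jenkins' service account.
raw_token = "eyJhbGciOiJSUzI1NiJ9.placeholder"

# Kubernetes stores every value under .data in a Secret as base64,
# so this is what the API (and jsonpath) would return:
encoded = base64.b64encode(raw_token.encode()).decode()

# Decoding mirrors piping the jsonpath output through `base64 -d`.
decoded = base64.b64decode(encoded).decode()
print(decoded)
```

The decoded token is what gets pasted into Jenkins as the 'K8-token' secret-text credential used later by the pipeline's withKubeConfig blocks.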
Step 8: On Jenkins, add the secret token and install the required plugins
- Add the secret token
Go to Manage Jenkins —> Credentials —> System —> Global credentials and add the token as a credential.
- Install the required plugins shown in the snapshot
- Configure the tools in Jenkins
1. Configure Maven
Navigate to Manage Jenkins —> Tools
2. Configure SonarQube
- Set up the required credentials
Generate a token on the SonarQube server: navigate to Administration —> Security —> Users —> Tokens, then generate a token.
Add the token in Jenkins: navigate to Manage Jenkins —> Credentials —> System.
- Add the SonarQube server inside Jenkins
Navigate to Manage Jenkins —> System
3. Navigate to Manage Jenkins —> Managed files
Copy the Nexus repository URL, then go to the git repository and update the URLs for maven-releases and maven-snapshots in the pom.xml file.
- Create a webhook
On the SonarQube server, navigate to Administration —> Configuration —> Webhooks
Step 9: Create a Jenkins job and write the CI/CD pipeline script
pipeline {
    agent any

    tools {
        maven 'maven3'
    }

    parameters {
        choice(name: 'DEPLOY_ENV', choices: ['blue', 'green'], description: 'Choose which environment to deploy: Blue or Green')
        choice(name: 'DOCKER_TAG', choices: ['blue', 'green'], description: 'Choose the Docker image tag for the deployment')
        booleanParam(name: 'SWITCH_TRAFFIC', defaultValue: false, description: 'Switch traffic between Blue and Green')
    }

    environment {
        IMAGE_NAME = "nehabawane/bankapp"
        TAG = "${params.DOCKER_TAG}" // The image tag comes from the parameter
        KUBE_NAMESPACE = 'webapps'
        SCANNER_HOME = tool 'sonar-scanner'
    }

    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', credentialsId: 'git_cred', url: 'https://github.com/msnehabawane/Blue-Green-Deployment.git'
            }
        }
        stage('Compile') {
            steps {
                sh "mvn compile"
            }
        }
        stage('Tests') {
            steps {
                sh "mvn test -DskipTests=true"
            }
        }
        stage('Trivy FS Scan') {
            steps {
                sh "trivy fs --format table -o fs.html ."
            }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh "$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=multi -Dsonar.projectName=multi -Dsonar.java.binaries=target"
                }
            }
        }
        stage('Quality Gate Check') {
            steps {
                timeout(time: 1, unit: 'HOURS') {
                    waitForQualityGate abortPipeline: false
                }
            }
        }
        stage('Build') {
            steps {
                sh "mvn package -DskipTests=true"
            }
        }
        stage('Publish Artifacts to Nexus') {
            steps {
                withMaven(globalMavenSettingsConfig: 'maven-settings', jdk: '', maven: 'maven3', mavenSettingsConfig: '', traceability: true) {
                    sh "mvn deploy -DskipTests=true"
                }
            }
        }
        stage('Docker Build & Tag Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker_cred') {
                        sh "docker build -t ${IMAGE_NAME}:${TAG} ."
                    }
                }
            }
        }
        stage('Trivy Image Scan') {
            steps {
                // Write the image report to its own file so it does not overwrite the FS scan report
                sh "trivy image --format table -o image.html ${IMAGE_NAME}:${TAG}"
            }
        }
        stage('Docker Push Image') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker_cred') {
                        sh "docker push ${IMAGE_NAME}:${TAG}"
                    }
                }
            }
        }
        stage('Deploy MySQL Deployment and Service') {
            steps {
                script {
                    withKubeConfig(caCertificate: '', clusterName: 'devopsshack-cluster', contextName: '', credentialsId: 'K8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://E2424B26F1B61A585137ADF407006B24.gr7.ap-south-1.eks.amazonaws.com') {
                        sh "kubectl apply -f mysql-ds.yml -n ${KUBE_NAMESPACE}" // Ensure you have the MySQL deployment YAML ready
                    }
                }
            }
        }
        stage('Deploy SVC-APP') {
            steps {
                script {
                    withKubeConfig(caCertificate: '', clusterName: 'devopsshack-cluster', contextName: '', credentialsId: 'K8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://E2424B26F1B61A585137ADF407006B24.gr7.ap-south-1.eks.amazonaws.com') {
                        sh """ if ! kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}; then
                                   kubectl apply -f bankapp-service.yml -n ${KUBE_NAMESPACE}
                               fi
                        """
                    }
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                script {
                    def deploymentFile = ""
                    if (params.DEPLOY_ENV == 'blue') {
                        deploymentFile = 'app-deployment-blue.yml'
                    } else {
                        deploymentFile = 'app-deployment-green.yml'
                    }
                    withKubeConfig(caCertificate: '', clusterName: 'devopsshack-cluster', contextName: '', credentialsId: 'K8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://E2424B26F1B61A585137ADF407006B24.gr7.ap-south-1.eks.amazonaws.com') {
                        sh "kubectl apply -f ${deploymentFile} -n ${KUBE_NAMESPACE}"
                    }
                }
            }
        }
        stage('Switch Traffic Between Blue & Green Environment') {
            when {
                expression { return params.SWITCH_TRAFFIC }
            }
            steps {
                script {
                    def newEnv = params.DEPLOY_ENV
                    // Always switch traffic based on DEPLOY_ENV
                    withKubeConfig(caCertificate: '', clusterName: 'devopsshack-cluster', contextName: '', credentialsId: 'K8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://E2424B26F1B61A585137ADF407006B24.gr7.ap-south-1.eks.amazonaws.com') {
                        sh '''
                            kubectl patch service bankapp-service -p "{\\"spec\\": {\\"selector\\": {\\"app\\": \\"bankapp\\", \\"version\\": \\"''' + newEnv + '''\\"}}}" -n ${KUBE_NAMESPACE}
                        '''
                    }
                    echo "Traffic has been switched to the ${newEnv} environment."
                }
            }
        }
        stage('Verify Deployment') {
            steps {
                script {
                    def verifyEnv = params.DEPLOY_ENV
                    withKubeConfig(caCertificate: '', clusterName: 'devopsshack-cluster', contextName: '', credentialsId: 'K8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://E2424B26F1B61A585137ADF407006B24.gr7.ap-south-1.eks.amazonaws.com') {
                        sh """
                            kubectl get pods -l version=${verifyEnv} -n ${KUBE_NAMESPACE}
                            kubectl get svc bankapp-service -n ${KUBE_NAMESPACE}
                        """
                    }
                }
            }
        }
    }
}
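The hardest line in this pipeline to read is the kubectl patch call, because the selector JSON has to survive both Groovy and shell quoting. As a sanity check, here is a small Python sketch (names taken from the pipeline; the helper function itself is hypothetical) that builds the exact same strategic-merge patch body:

```python
import json

def selector_patch(env: str) -> str:
    """Build the strategic-merge patch that repoints bankapp-service
    at the pods labelled with the given version (blue or green)."""
    if env not in ("blue", "green"):
        raise ValueError(f"unknown environment: {env}")
    patch = {"spec": {"selector": {"app": "bankapp", "version": env}}}
    return json.dumps(patch)

# The string ultimately passed to `kubectl patch service bankapp-service -p ...`
print(selector_patch("green"))
# {"spec": {"selector": {"app": "bankapp", "version": "green"}}}
```

Patching only the Service's selector is what makes the switch atomic: the blue and green Deployments keep running untouched, and the Service simply repoints at pods labelled with the chosen version.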
This is the initial deployment, to the blue environment, so traffic switching will not occur and that stage can be omitted.
Get the load balancer URL to access the application using the command below:
kubectl get all -n webapps
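If you only want the application URL rather than the full listing, the Service object can be parsed directly. A Python sketch, fed with a canned (made-up) example of what kubectl get svc bankapp-service -n webapps -o json might return:

```python
import json

# Hypothetical output of:
#   kubectl get svc bankapp-service -n webapps -o json
svc_json = """
{
  "metadata": {"name": "bankapp-service"},
  "status": {
    "loadBalancer": {
      "ingress": [{"hostname": "a1b2c3.ap-south-1.elb.amazonaws.com"}]
    }
  }
}
"""

def lb_url(svc: dict, port: int = 80) -> str:
    """Pull the first load balancer address out of a Service object."""
    ingress = svc["status"]["loadBalancer"]["ingress"][0]
    # AWS ELBs report a hostname; some providers report an IP instead.
    host = ingress.get("hostname") or ingress.get("ip")
    return f"http://{host}:{port}"

print(lb_url(json.loads(svc_json)))
# http://a1b2c3.ap-south-1.elb.amazonaws.com:80
```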
Congratulations! You have deployed the app successfully.
Currently, the application is deployed in the blue environment. I now have updates for the application and intend to build the updated version using the pipeline parameters. The updated application will be deployed to the green environment, and once the deployment succeeds, I will switch the traffic from the blue environment to the green environment.
Here we can see the two parameter choices, Blue and Green.
The snapshot below shows that initially only the blue environment exists, and then both environments are present.
In the snapshot below we can observe that in the first deployment the traffic-switching stage is omitted.
The application's artifacts are also available in the Nexus repository as snapshots.
The Docker image is also pushed to the DockerHub registry.
Conclusion
While Blue-Green Deployment can be an effective way to deploy applications, it may not be the best choice for every situation. For example, if your application requires a lot of data migration or database schema changes, Blue-Green Deployment may not be the best strategy, as it can lead to data inconsistencies between the blue and green environments.
In addition, Blue-Green Deployment can be challenging to implement for stateful applications that require persistent storage, as the data must be synchronized between the blue and green environments. In these cases, you may need to consider other deployment strategies, such as rolling updates or canary deployments.
In this blog post, we have learned how to perform Blue-Green Deployment using Kubernetes. Blue-Green Deployment is a popular deployment strategy that provides zero downtime and a quick and easy way to roll back if something goes wrong.
Thanks for your Attention! Happy Learning!
Done!
Stay tuned for my next blog. I will keep sharing my learnings and knowledge here with you.
Let's learn together! I appreciate any comments or suggestions you may have to improve my blog content.
Thank you,
Neha Bawane