Blue/Green and Canary Deployment for EKS

In this workshop you'll learn how to build a CI/CD pipeline (AWS CodePipeline) to develop a web-based application, containerize it, and deploy it on an Amazon EKS cluster.

You'll use the blue/green method to deploy the application and review the switchover using Application Load Balancer (ALB) target groups. You will spawn this infrastructure using the AWS Cloud Development Kit (CDK), enabling you to reproduce the environment whenever needed in relatively few lines of code.

The CDK creates the infrastructure to host the application as well as a CI/CD CodePipeline, providing developers with an application development workflow.

A brief overview of the workshop:

The hosting infrastructure consists of pods behind Blue and Green services running on Kubernetes worker nodes, accessed via an Application Load Balancer.

The Blue service represents the production environment, accessed using the ALB DNS name with an HTTP query (group=blue), whereas the Green service represents a pre-production/test environment, accessed using a different HTTP query (group=green).
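
For example, once the infrastructure is up, the two environments can be reached with plain HTTP requests. A minimal sketch, where the ALB DNS name is a placeholder for the one in your own environment (visible in the EC2 Load Balancers console):

# Placeholder; substitute the DNS name of your own ALB
ALB_DNS="flaskalb-1234567890.us-east-1.elb.amazonaws.com"
# Production (Blue) environment
curl "http://${ALB_DNS}/?group=blue"
# Pre-production / test (Green) environment
curl "http://${ALB_DNS}/?group=green"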

The CodePipeline build stage uses CodeBuild to dockerize the application and push the images to Amazon ECR.
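
Under the hood, this amounts to the usual ECR build-and-push sequence. A sketch of the equivalent manual commands, assuming an illustrative repository name flask-app (the actual repository is created by the CDK stack) and the ACCOUNT_ID and AWS_REGION variables set later in this workshop:

# Log in to ECR, then build, tag, and push the image (repository name is illustrative)
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
docker build -t flask-app .
docker tag flask-app:latest ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/flask-app:latest
docker push ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/flask-app:latest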

In subsequent stages, the image is picked up and deployed to the Green service of the EKS cluster.

The CodePipeline workflow then blocks at the approval stage, allowing the application in the Green service to be tested.

Once the application is confirmed to be working as expected, the user can approve at the Approval stage, and the application is then deployed to the Blue service.


Switch over traffic to the Green environment:

[Figure: traffic switched over to the Green environment]

The CodePipeline will look like the figure below:

[Figure: CodePipeline stages]

PREPARE AND DOWNLOAD PACKAGES

(a) AWS CLI Installed:

The AWS CLI is installed by default when using the Cloud9 service. If you would like to use your own local shell instead, follow the AWS CLI installation instructions first. Your CLI configuration needs the PowerUserAccess and IAMFullAccess IAM policies associated with your credentials.

Test your CLI access and set up your environment by running the commands below:

# Verify the AWS CLI is available
aws --version
# jq is used below to parse JSON output
sudo yum install -y jq
# Capture the account ID and region, and persist them for future shells
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
# Make the region the default for subsequent AWS CLI calls and confirm it
aws configure set default.region ${AWS_REGION}
aws configure get default.region

(b) Download and install kubectl and the AWS IAM Authenticator:

Run the commands below in your Cloud9 terminal; the kubectl help output confirms that kubectl is correctly installed:

# Install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl help
# Install the AWS IAM Authenticator and add it to the PATH
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.15.10/2020-02-22/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
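
As a quick sanity check, both binaries should report their versions (the exact numbers will vary):

kubectl version --client --short
aws-iam-authenticator version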

(c) Install prerequisite packages for the CDK:

Install or upgrade the required CDK package using the command below:

npm install -g aws-cdk@1.124.0 --force
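
You can confirm the installed toolkit version afterwards:

cdk --version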

(d) Git clone the repository:

Run the commands below in your Cloud9 terminal to clone the repository:


git clone https://github.com/aws-samples/amazon-eks-cdk-blue-green-cicd.git amazon-eks-cicd-codebuild-eks-alb-bg
cd amazon-eks-cicd-codebuild-eks-alb-bg
ls

CDK LAUNCH

Launch the CDK using the following steps:

Step 1: Move to the CDK project folder using the command below. It is not necessary to run cdk init because the project files are already included:

cd cdk

Step 2: Install the required modules defined in the package.json file for this CDK stack using the command below. This will create the node_modules directory:

npm i

Step 3: Compile the TypeScript sources to JavaScript and then list the stacks in the CDK app:


npm run build
cdk ls
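
If the build succeeded, cdk ls should print the name of the stack used throughout this workshop:

CdkStackALBEksBg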

Step 4: Generate the CloudFormation template from the CDK app:

cdk synth

Step 5: Bootstrap the CDKToolkit CloudFormation stack into your environment. This creates an S3 bucket to store synthesized templates and related assets before the CloudFormation deployment in the next step:

cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION

If the CDK toolkit staging S3 bucket does not exist yet, this command will create one, printing messages like the following:

 0/2 | 6:55:15 PM | CREATE_IN_PROGRESS   | AWS::S3::Bucket | StagingBucket 
 0/2 | 6:55:16 PM | CREATE_IN_PROGRESS   | AWS::S3::Bucket | StagingBucket Resource creation Initiated
 1/2 | 6:55:37 PM | CREATE_COMPLETE      | AWS::S3::Bucket | StagingBucket 
 2/2 | 6:55:38 PM | CREATE_COMPLETE      | AWS::CloudFormation::Stack | CDKToolkit

Step 6: Run cdk deploy, which will launch the CloudFormation stack synthesized in the earlier steps:

cdk deploy
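
cdk deploy will prompt you to confirm the IAM and security-group changes it is about to make; answer y. If you are running non-interactively, the prompt can be skipped (use with care):

cdk deploy --require-approval never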

UPLOAD APPLICATION

Once the CDK stack creation has completed, you will need to upload your local application directory to the newly created CodeCommit repository: CdkStackALBEksBg-repo.

Because the repository is still empty, the CodePipeline will show a failure at the Source (CodeCommit) stage.

Before we upload our application to the repository, we will first review the EKS cluster and complete the configuration as per the instructions in the next sub-section.

EKS CONFIGURATION

EKS Cluster Configuration:

We will carry out steps on the EKS cluster to configure the ingress controller, start the Blue and Green services, and launch an ALB.

Access the EKS cluster from your Cloud9 terminal by running the command provided in the CloudFormation output under the field "ClusterConfigCommand".

Copy and paste this command so that your kubectl commands point to the required EKS cluster.
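
The output is an aws eks update-kubeconfig command. It will look similar to the sketch below, where the cluster name and role name are placeholders for the values in your own output:

# Placeholders; copy the actual command from the CloudFormation output
aws eks update-kubeconfig --name <ClusterName> --region $AWS_REGION --role-arn arn:aws:iam::${ACCOUNT_ID}:role/<MastersRoleName>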

[Figure: ClusterConfigCommand in the CloudFormation outputs]

Once the config command has run, execute the following command; you should see two worker nodes:

kubectl get nodes
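
The output should resemble the following (node names, ages, and version numbers are placeholders):

NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-xx-xx.ec2.internal   Ready    <none>   15m   v1.xx.x
ip-10-0-yy-yy.ec2.internal   Ready    <none>   15m   v1.xx.x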

Change directory to k8s (amazon-eks-cicd-codebuild-eks-alb-bg/flask-docker-app/k8s) and prepare the setup scripts below. You may collect the cluster name from the CloudFormation output and the worker node instance role from the EC2 dashboard; the commands below extract both from the CloudFormation stack resources directly.

cd ../flask-docker-app/k8s
ls setup.sh
# Make both setup scripts executable
chmod +x setup.sh
chmod +x setup2.sh
# Extract the worker node instance role and the EKS cluster name from the stack resources
INSTANCE_ROLE=$(aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg | jq .StackResources[].PhysicalResourceId | grep CdkStackALBEksBg-ClusterDefaultCapacityInstanceRol | tr -d '["\r\n]')
CLUSTER_NAME=$(aws cloudformation describe-stack-resources --stack-name CdkStackALBEksBg | jq '.StackResources[] | select(.ResourceType=="Custom::AWSCDK-EKS-Cluster").PhysicalResourceId' | tr -d '["\r\n]')
echo "INSTANCE_ROLE = " $INSTANCE_ROLE
echo "CLUSTER_NAME = " $CLUSTER_NAME

CONFIGURE SECURITY GROUP

Modify the security group of the newly spawned Application Load Balancer to add an inbound rule allowing HTTP on port 80 from 0.0.0.0/0.

Go to Services, select EC2, navigate to the Load Balancers section, and select the most recently created ALB with "flaskalb" in its name.

Once selected, click the Description tab, scroll down to locate the security groups, and note down the security group name.

Edit this security group to add a new rule: click the Edit inbound rules button and add the following parameters: Type: HTTP, Port range: 80, Source: 0.0.0.0/0.

For demo purposes we are using 0.0.0.0/0; however, we strongly recommend restricting the source to the IP address of your device.
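
If you prefer the CLI over the console, an equivalent rule can be added as sketched below; the security group ID is a placeholder for the one you noted above:

# Placeholder group ID; substitute the ALB's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0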

For the selected load balancer, review the listener routing rules: click the Listeners tab, then click View/edit rules. You should see the settings shown below:

[Figure: ALB listener routing rules]

Check the load balancer target groups and ensure that healthy hosts have registered and the health checks are consistently passing, as shown below:

[Figure: target groups with healthy registered targets]

UPLOAD CODE

Upload code to the CodeCommit repo.

Note: For Windows-based git clients, you may need to clear the cache if a different user was used previously, or to reset the credentials. To do that, open Control Panel -> All Control Panel Items -> Credential Manager and delete the git user. If your client device caches credentials in the Keychain Access app, you may need to search for and edit the relevant credential item.

To upload your application, navigate to the amazon-eks-cicd-codebuild-eks-alb-bg directory and run the git commands below. The remote URL uses the $AWS_REGION variable set earlier; if you are working in a different shell, substitute your region first.


cd ../..
# Confirm your current directory is amazon-eks-cicd-codebuild-eks-alb-bg
pwd
git add flask-docker-app/k8s/alb-ingress-controller.yaml
git add flask-docker-app/k8s/flaskALBIngress_query.yaml
git add flask-docker-app/k8s/flaskALBIngress_query2.yaml
git add flask-docker-app/k8s/iam-policy.json
git commit -m "Updated files"
git remote add codecommit https://git-codecommit.$AWS_REGION.amazonaws.com/v1/repos/CdkStackALBEksBg-repo
git push -u codecommit master

This pushes the commit we just created, which in turn triggers the CodePipeline.
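
You can watch the pipeline execution in the CodePipeline console, or query its state from the CLI; the pipeline name below is a placeholder for the one created by the CDK stack:

aws codepipeline get-pipeline-state --name <pipeline-name>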

In the next part we will cover the infrastructure review. So, stay tuned! :)