BLUE/GREEN AND CANARY DEPLOYMENT FOR EKS: Infrastructure Review
In the previous part of this series, we handled the CDK deployment and the application upload.
In this part, we will review the infrastructure of the whole application.
Once the application is pushed to the repository, CodePipeline is triggered and CodeBuild runs a set of commands to dockerize the application and push the image to the Amazon ECR repository.
CodeBuild then runs kubectl commands to create the Blue and Green services on the EKS cluster if they do not already exist.
On the first run, it pulls the Flask demo application from Docker Hub and deploys it to both the Blue and Green services. Both should display "Your Flask application is now running on a container in Amazon Web Services".
In rare cases, the Blue and Green services may show the same "Amazon Web Services" output, depending on how the first release's changes propagate through the CodePipeline.
In subsequent runs, however, CodeBuild deploys the newly built application to the Green service only, and the updated text shows "Amazon EKS" instead of "Amazon Web Services".
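The Blue/Green outputs can be compared quickly from a terminal. A minimal sketch, assuming the hypothetical ALB DNS name used later in this article (substitute your own value from the Load Balancer's Description tab); the live curl call is left commented so the snippet can be adapted first:

```shell
# Hypothetical ALB DNS name; replace with the value from your Load Balancer.
ALB_DNS="dns-alb.us-east-2.elb.amazonaws.com"

# Each service variant is selected via the ?group= query parameter.
for group in blue green; do
  echo "checking http://${ALB_DNS}/?group=${group}"
  # curl -s "http://${ALB_DNS}/?group=${group}"   # run against a live cluster
done
```

On the first run both URLs should return the "Amazon Web Services" text; after a subsequent pipeline run, only the green URL should show "Amazon EKS".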
This change in text is introduced by amazon-eks-cicd-codebuild-eks-bg/flask-docker-app/Dockerfile, via the line setting "ENV PLATFORM".
Once done, the CodePipeline pauses at the Approval Stage.
ACCESS EKS
We will review the EKS resources via the two methods shown below:
(a) Review via AWS Console:
Go to Services -> EKS -> Clusters -> Select the Cluster and confirm its Status is “Active”.
Go to Services -> EC2 -> Instances -> Ensure the EKS worker nodes (EC2) are in “running” state.
Go to Services -> EC2 -> Load Balancers -> Ensure that the correct Load Balancer is selected. Then open the “Listeners” tab and review the Listener rules.
Now under Load Balancing -> Target Groups, select the Monitoring tab and ensure that both EKS worker nodes are “healthy”. It takes a few minutes after the application is deployed for the instances to become “healthy”.
From the Description tab, note down the DNS name for accessing the application. This allows users to access and test the application deployed on the Green service.
Thus, if the DNS name is “dns-alb”, the application can be accessed via:
dns-alb.us-east-2.elb.amazonaws.com/?group=..
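The same health review can be done from the CLI instead of the console. A sketch, where the DNS value is the placeholder from the text and the aws calls are commented so they can be run against a live account (the target-group ARN must be read from the first command's output):

```shell
# Placeholder DNS name from the Load Balancer's Description tab.
DNS_NAME="dns-alb.us-east-2.elb.amazonaws.com"
echo "Application URL: http://${DNS_NAME}/?group=green"

# List target groups and their ARNs:
# aws elbv2 describe-target-groups \
#   --query 'TargetGroups[].{Name:TargetGroupName,Arn:TargetGroupArn}' \
#   --output table

# Confirm both worker nodes are healthy in a given target group:
# aws elbv2 describe-target-health --target-group-arn <target-group-arn>
```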
(b) Review via Terminal CLI:
If not already done in an earlier step, access the EKS cluster from your Cloud9 terminal by running the command provided in the CloudFormation output under the field “ClusterConfigCommand”. Copy and paste this command so that your kubectl commands point to the required EKS cluster. Once the config command has run, execute the following commands:
kubectl get nodes
kubectl get deploy -n flask-alb
kubectl get pods -n flask-alb
kubectl get svc -n flask-alb
kubectl get ingress -n flask-alb
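The config command from the CloudFormation output generally takes the form of an `aws eks update-kubeconfig` call; a sketch, where the cluster name and region are assumptions (always prefer the exact command from the “ClusterConfigCommand” output), and with a commented one-liner for reading the ALB hostname straight from the ingress:

```shell
# Assumed names for illustration; use the CloudFormation "ClusterConfigCommand".
CLUSTER_NAME="CdkStackEksALBBg-cluster"   # assumption
REGION="us-east-2"                        # assumption

echo "aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}"
# aws eks update-kubeconfig --name "${CLUSTER_NAME}" --region "${REGION}"

# The application hostname can then be read from the ingress status:
# kubectl get ingress -n flask-alb \
#   -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
```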
CHECK CODEPIPELINE
Go to Services -> CodePipeline -> Pipelines -> CdkStackEksALBBg-[unique-string] and review the stages and the CodeBuild projects to understand the implementation. Once the application is deployed on the Green service, access it as mentioned above by appending ?group=green to the ALB DNS name.
Note that the container exposes port 5000, whereas the service exposes port 80 (blue-service) or 8080 (green-service), which in turn is mapped to a local host port on the EC2 worker node instance.
After testing is complete, go to the Approval Stage and click Approve. This triggers the CodePipeline to execute the next stage, “Swap and Deploy”, which swaps the mapping of the target groups to the Blue/Green services. Now access the application via the Blue service and confirm that the updated application is reflected correctly.
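The manual approval can also be granted from the CLI. A sketch, in which the pipeline, stage, and action names are assumptions that must be read from `aws codepipeline get-pipeline-state` first; the approval call itself is commented since it requires the live token:

```shell
# Assumed pipeline name; list pipelines or read the console to get the real one.
PIPELINE="CdkStackEksALBBg-pipeline"   # assumption

echo "approving manual approval in ${PIPELINE}"
# Read stage/action names and the approval token:
# aws codepipeline get-pipeline-state --name "${PIPELINE}"
#
# Then approve (names and token come from the previous command's output):
# aws codepipeline put-approval-result \
#   --pipeline-name "${PIPELINE}" \
#   --stage-name <stage-name> --action-name <action-name> \
#   --token <token-from-get-pipeline-state> \
#   --result summary="Tested green service",status=Approved
```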
In the next part, we will handle the code change and canary deployment. Stay tuned! :)