
Building a Serverless CI/CD Pipeline on AWS

Friday, 30 October 2020 / Published in AWS Architected Framework, AWS Cloud Optimization

We like to think the days of copy/paste deployments are behind us. Unfortunately, some developers still prefer to deploy exactly that way.

As a technical content writer I could easily fill this article with reasons why that's bad, but the alternative is largely self-explanatory. The industry is moving towards serverless architectures, and CI/CD pipelines play an important role in delivering serverless applications. With that said, here is a step-by-step guide you can follow for your next serverless project.

Continuous integration and delivery has interested organisations and teams for a long time. We still use TeamCity for the majority of our CI/CD pipelines today. TeamCity works well and we have nothing against it, but we're always looking for ways to do things better. One of those is defining pipelines as code, which is something TeamCity isn't particularly good at.

It’s been some time since we have looked at the details in which the work of integration and delivery tools on AWS has been made available while there is a use of CodeDeploy for another project which runs on EC2. Moreover, it is a truth that we have never ever used them on or for the deployment of a serverless project. After getting reacquainted with the tools I could see there is now native integration for deploying CloudFormation and Lambda — presumably, with AWS’ SAM — we use the serverless framework and although it generates CloudFormation templates, it doesn’t work out of the box with the tools in AWS.

Getting Prepared

On the AWS side we'll be using EC2, ECR, S3, IAM, CodeBuild, CodePipeline, CloudWatch and CloudTrail, along with Docker. You'll need at least a basic understanding of what each of these does to follow along.

We primarily write our backend code in .NET, and this tutorial is based around that. None of the pre-built CodeBuild images include both the .NET and NodeJS runtimes (NodeJS is required by the Serverless Framework). If your Lambda functions are written in NodeJS, setting up a deployment pipeline becomes a lot simpler, as it's the only runtime you need to install on your Docker image (you could potentially skip a lot of this tutorial). I should also mention this was my first time working with containers, so I was excited to learn something new.

1. Create an EC2 instance and install Docker

Start by spinning up a standard Amazon Linux 2 EC2 instance; that part should be self-explanatory.
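If you prefer the CLI to the console, something like the following works. This is a sketch only; the AMI ID and key pair name are placeholders, so look up the current Amazon Linux 2 AMI for your region:

# Placeholders: substitute a real Amazon Linux 2 AMI ID and your own key pair
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair

Once the instance is running, log in and install Docker with these commands: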

sudo yum update -y

sudo amazon-linux-extras install docker

sudo service docker start

We’ll also need to add the ec2-user to the docker group so you can execute Docker commands without using sudo:

sudo usermod -a -G docker ec2-user

After executing the command, log out and log back into your EC2 instance so ec2-user can assume the new permissions. Then verify ec2-user can run Docker commands without sudo:

docker info

2. Build Docker image and push to ECR

Assuming the step above was successful, the next stage is to build the Docker image and push it to ECR. AWS provides the base images for CodeBuild on GitHub, which makes building our own image easier.

We've also published our image to GitHub if you don't want to go through the steps below to build your own: https://github.com/effectivedigital/serverless-deployment-image

Start by cloning the images and navigating into the .NET Core 2.1 directory:

git clone https://github.com/aws/aws-codebuild-docker-images.git

cd aws-codebuild-docker-images

cd ubuntu/dot-net/core-2.1/

Open up Dockerfile in your preferred text editor:

nano Dockerfile

Add the commands to install NodeJS and the Serverless Framework at the end of the existing commands in the Dockerfile. Most of these commands were taken from the NodeJS Docker image in the same AWS repository:

# Install Node Dependencies
ENV NODE_VERSION="10.14.1"

# gpg keys listed at https://github.com/nodejs/node#release-team
RUN set -ex \
  && for key in \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    77984A986EBC2AA786BC0F66B01FBB92821C587A \
    56730D5401028683275BD23C23EFEFE93C4CFFFE \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    4ED778F539E3634C779C87C6D7062848A1AB005C \
    A48C2BEE680E841632CD4E44F07496B3EB3C1762 \
  ; do \
    gpg --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || \
    gpg --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || \
    gpg --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; \
  done

RUN set -ex \
  && wget "https://nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" -O node-v$NODE_VERSION-linux-x64.tar.gz \
  && wget "https://nodejs.org/download/release/v$NODE_VERSION/SHASUMS256.txt.asc" -O SHASUMS256.txt.asc \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION-linux-x64.tar.gz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xzf "node-v$NODE_VERSION-linux-x64.tar.gz" -C /usr/local --strip-components=1 \
  && rm "node-v$NODE_VERSION-linux-x64.tar.gz" SHASUMS256.txt.asc SHASUMS256.txt \
  && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
  && rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN npm set unsafe-perm true

CMD [ "node" ]

# Install Serverless Framework
RUN set -ex \
  && npm install -g serverless

Now the image can be built and tagged:


docker build -t aws/codebuild/dot-net .

After the build has completed, the image can be run to confirm everything works and serverless installed correctly:


docker run -it --entrypoint sh aws/codebuild/dot-net -c bash

sls -v

Next we’ll create a repository in ECR using the AWS CLI. After the command has run the new repository will be visible in the AWS Console:

aws ecr create-repository --repository-name codebuild-dotnet-node
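The create-repository output includes the repositoryUri value; if you need to look it up again later, one way is the following (a sketch using the CLI's query syntax):

aws ecr describe-repositories \
  --repository-names codebuild-dotnet-node \
  --query 'repositories[0].repositoryUri' \
  --output text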

Now we’ll tag the aws/codebuild/dot-net image we created earlier with the repositoryUri value from the previous step:

docker tag aws/codebuild/dot-net <ACCOUNTID>.dkr.ecr.ap-southeast-2.amazonaws.com/codebuild-dotnet-node

Run the get-login command to get the docker login authentication command string for your container registry:

aws ecr get-login --no-include-email

Run the docker login command returned by the get-login command in the previous step:

docker login -u AWS -p eyJwYXlsb2FkIjoiNGZnd0dSaXM1L2svWWRLMmhJT1c0WWpOZEcxamJFeFJOK2VvT0Y5[…] https://<ACCOUNTID>.dkr.ecr.ap-southeast-2.amazonaws.com
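Note that get-login was removed in version 2 of the AWS CLI; there the equivalent is get-login-password piped into docker login, roughly:

# AWS CLI v2 equivalent of the get-login flow above
aws ecr get-login-password | docker login --username AWS --password-stdin <ACCOUNTID>.dkr.ecr.ap-southeast-2.amazonaws.com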

If login is successful, we can now push our docker image to the repository created in ECR. It may take a few minutes depending on the size of the completed image:

docker push <ACCOUNTID>.dkr.ecr.ap-southeast-2.amazonaws.com/codebuild-dotnet-node

Once the image has been pushed, we're going to allow anyone to pull it from ECR. Permissions should be locked down in a production environment, but for this example we'll leave it open. Navigate to the repository's Permissions tab in the AWS Console, select Edit policy JSON and paste in this policy:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "EnableAccountAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:ListImages"
      ]
    }
  ]
}
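If you'd rather apply the policy from the CLI than the console, a sketch, assuming the JSON above has been saved to a local file named policy.json:

aws ecr set-repository-policy \
  --repository-name codebuild-dotnet-node \
  --policy-text file://policy.json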

3. Create your Pipeline

It’s time to build the pipeline. To make this easier and repeatedly deployable, and in the true form of serverless architectures, I’ve built the pipeline using the serverless framework. You could also achieve the same thing by building it in CloudFormation.

We won't paste the entire serverless.yml here; you can clone it from GitHub instead: https://github.com/effectivedigital/serverless-deployment-pipeline

Take a look through the serverless template to understand exactly what it does; in short, it's going to set up the following (a rough sketch of the template's shape follows the list):

  • 3x S3 Buckets
  • 1x Bucket Policy
  • 3x IAM Roles
  • 1x CodeBuild Project
  • 1x CodePipeline Pipeline
  • 1x CloudWatch Event
  • 1x CloudTrail Trail
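As a rough illustration of how such a template is structured, resources in a Serverless Framework template are declared as raw CloudFormation under the resources key. The service name and bucket below are illustrative, not the repo's actual values:

# Illustrative sketch only; see the repository above for the real template
service: deployment-pipeline-example

provider:
  name: aws
  region: ap-southeast-2

resources:
  Resources:
    DeploymentsBucket:
      Type: AWS::S3::Bucket
      Properties:
        VersioningConfiguration:
          Status: Enabled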

Once cloned, update the DockerImageArn to your image in ECR. If you’re going to be creating deployment packages with a filename other than Deployment.zip, update DeploymentFilename as well:

DockerImageArn: <ACCOUNTID>.dkr.ecr.ap-southeast-2.amazonaws.com/codebuild-dotnet-node:latest

DeploymentFilename: Deployment.zip

That’s it, the pipeline is now ready for deployment. Run the serverless deploy command and wait while everything is set up for you:

sls deploy -v
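If you're just trying the pipeline out, the whole stack can be torn down again with the matching remove command (note that CloudFormation won't delete S3 buckets that still contain objects, so they may need emptying first):

sls remove -v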

4. Add buildSpec.yml to your application

When CodePipeline detects a change to the deployment file in S3, it tells CodeBuild to run and attempt to build and deploy your application. CodeBuild also needs to know which commands to run to do that; buildSpec.yml contains the instructions CodeBuild follows.

Create a buildSpec.yml file in your application and populate it with the instructions below:

version: 0.2

phases:
  pre_build:
    commands:
      - chmod a+x *
  build:
    commands:
      - ./build.sh
  post_build:
    commands:
      - sls deploy -v -s $STAGE
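The build.sh script invoked in the build phase is application-specific and isn't prescribed here. For a .NET Core project it might look something like the following; this is a hypothetical sketch, and the project path is a placeholder:

#!/bin/bash
# Hypothetical build.sh: compile the .NET project; the sls deploy in
# post_build then packages and deploys it. src/MyFunction is a placeholder.
set -e
dotnet restore
dotnet publish src/MyFunction -c Release -o ./publish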

5. Testing your Pipeline

Everything is now in place to run your pipeline for the first time. Create a package called Deployment.zip containing all of the files for your serverless application, along with the buildSpec.yml file.
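Then upload the package to the deployments bucket the pipeline created; the bucket name below is a placeholder, so check the sls deploy output for the real one:

aws s3 cp Deployment.zip s3://<your-deployments-bucket>/Deployment.zip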

After a few moments, CloudTrail should log the PutObject event and trigger a CloudWatch Event rule, which in turn starts CodePipeline.
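To confirm the pipeline has started, watch it in the CodePipeline console, or query its state from the CLI (the pipeline name is whatever the template created, so a placeholder here):

aws codepipeline get-pipeline-state --name <your-pipeline-name>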

Summary

CodePipeline allows you to create a scalable, flexible, and low-cost CI/CD pipeline and helps solve some of the issues associated with traditional pipelines created on servers.

We at Cloud Stack Group are here to assist and guide you in moving your business to serverless, and a solid CI/CD pipeline is a key part of that move.

Connect with us today to learn more about our services and how we can help you and your business.
