Serverless deployment automation

- 18 mins

This post is part of my serverless series; if you want to catch up, please check the Building a serverless blog - newbie way post. I decided to start improving my blog by creating a CI/CD pipeline for it. Having this automation will allow me to introduce and validate various changes quickly and effortlessly. In this post I won’t touch the functionality or implementation of my blog at all; rather, I’d like to focus on creating a “one click” deployment procedure. The end state I want to achieve is: every commit to master updates my blog automatically. I also want to automate infrastructure creation - this will be useful when I decide to move to another region that has more features, or if I need to create a similar setup for a different website.

CloudFormation to the rescue

Creating AWS infrastructure doesn’t have to be a click fest; we can do it in a more efficient and automated way. In the AWS world the tool for doing this is called CloudFormation. Based on templates in either JSON or YAML format, it is able to set up all the services for us. Moreover, when we introduce changes to our template, CloudFormation can perform a partial upgrade - without touching resources that don’t have to be modified as part of a given change. What’s more, even if something goes wrong in the middle of applying an update, CloudFormation is able to roll back the changes and return to the previous version of the infrastructure.

CloudFormation

Let’s start our journey by automating S3 bucket creation for the blog’s static content. Once the bucket is created, we can attach a domain to it. Later, we will take care of our “send me a message” API with its corresponding Lambda function. Once it is all working, I’ll show you how to plug everything into a simple CI/CD pipeline - in my case it is going to be Bitbucket Pipelines, but the steps would be similar in other services like TravisCI or CircleCI.

The following picture presents the architecture we want to create via CloudFormation templates:

Architecture of this website

Our first template

Below you can find an example of a CloudFormation template. It creates 2 resources: an S3 bucket with static website hosting enabled, and a bucket policy attached to this bucket that is needed for public access to the bucket’s files. We can test it quickly using the AWS console: simply go to the CloudFormation service and create a new stack by adding this file. In my case I had to remove my bucket first, as it already existed; without removing it, CloudFormation won’t be able to create it. I picked YAML instead of JSON, as I believe it is a more human-readable and less error-prone format for such usage. Let’s take a look at the template:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: 'pawelpikula.pl'
      AccessControl: PublicRead
      Tags:
        - Key: 'client'
          Value: 'pawel'
      WebsiteConfiguration:
        IndexDocument: index.html
    DeletionPolicy: Retain
  BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      PolicyDocument:
        Id: MyPolicy
        Version: 2012-10-17
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Principal: '*'
            Action: 's3:GetObject'
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref S3Bucket
                - /*
      Bucket: !Ref S3Bucket

Let’s break it down. At the beginning we define AWSTemplateFormatVersion; it is just the version of the CloudFormation template spec, and it will be 2010-09-09 most of the time.

The next section, Resources, is more interesting. You can spot our bucket and bucket policy there. In CloudFormation, for each resource we define its type and properties, which are just the options/parameters used during resource creation. If you want to know more about the properties, you can find them in the documentation; for the resources defined in our example, the lists can be found here: S3 Bucket and Bucket Policy.

However, there is one thing that stands out in our template: words starting with "!". These are called intrinsic functions - we can call small functions in our template, like joining strings or extracting a reference to a previously defined resource and its ARN, as this information is known during template execution. One such intrinsic is used to attach the bucket policy to the bucket. The full list of intrinsics can be found here.
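For instance, the !Join building the policy’s Resource ARN above could equivalently be written with the !Sub intrinsic, which interpolates references straight into a string - a shorter form of the same thing:

```yaml
# Equivalent to the !Join above: ${S3Bucket} is replaced with the value
# of !Ref S3Bucket (the bucket name) when the template is executed.
Resource: !Sub 'arn:aws:s3:::${S3Bucket}/*'
```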

Please keep in mind that the created bucket is empty; CloudFormation is not able to copy files into the newly created bucket. We have to copy them on our own. If you are a fan of automation, you can create a handy script that uses the AWS CLI and run it after the stack is successfully created.
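A minimal sketch of such a script, assuming the stack is named blog and the generated site content lives in a local _site/ directory:

```shell
#!/bin/bash
# Block until CloudFormation reports the stack as fully created...
aws cloudformation wait stack-create-complete --stack-name blog
# ...then copy the static content into the freshly created bucket.
aws s3 sync _site/ s3://pawelpikula.pl/ --acl public-read
```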

Now, let’s take care of our domain, which should point to our S3 website. In order to achieve this, we need to create at least 2 resources: a HostedZone and an A RecordSet pointing to S3. Please append the following resources to our file:

  BlogDomain:
    Type: 'AWS::Route53::HostedZone'
    Properties:
      HostedZoneConfig:
        Comment: 'Personal blog domain'
      HostedZoneTags:
        - Key: 'client'
          Value: 'pawel'
      Name: 'pawelpikula.pl'
  BlogRecordSet:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      Comment: 'Redirect to static website hosting'
      HostedZoneId: !Ref 'BlogDomain'
      Type: A
      Name: "pawelpikula.pl"
      AliasTarget:
        DNSName: 's3-website.eu-central-1.amazonaws.com.'
        HostedZoneId: 'Z21DNDUVLTQW6Q'

As you can see, this is not rocket science: first we created the hosted zone, and then a record set referencing it. We are ready to update our stack! We are going to do it in a safe way - that is, by creating a change set. Change set creation can be found under the Actions button; it allows us to upload a new template and see which resources are going to be changed if we decide to apply it. Having validated such a change set, we can apply it to our production environment with one click. As you can see in the picture below, the change set contains only the hosted zone and record set create actions - it is not touching or recreating our S3 bucket.

Our changeset
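The same flow also works from the AWS CLI, if you prefer to stay out of the console; a sketch, assuming the stack is named blog:

```shell
# Upload the new template as a change set without applying it yet
aws cloudformation create-change-set --stack-name blog \
    --change-set-name add-domain --template-body file://template.yml
# Review which resources would be added, modified or removed
aws cloudformation describe-change-set --stack-name blog \
    --change-set-name add-domain
# Apply it once we are happy with the diff
aws cloudformation execute-change-set --stack-name blog \
    --change-set-name add-domain
```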

In my current setup I have my domain configured with an external registrar. To have it available in Route53, I need to point the domain at the AWS NS servers listed in the newly created Hosted Zone. There is one small issue with this: whenever we create or recreate the domain resources via CloudFormation, we must update these DNS entries, which requires us to go to Route53 and copy them. However, CloudFormation can print them for us! We need to add an Outputs section (on the same level as Resources). Take a look at the following code:

[....]
  BlogRecordSet:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      Comment: 'Redirect to static website hosting'
      HostedZoneId: !Ref 'BlogDomain'
      Type: A
      Name: "pawelpikula.pl"
      AliasTarget:
        DNSName: 's3-website.eu-central-1.amazonaws.com.'
        HostedZoneId: 'Z21DNDUVLTQW6Q'
Outputs:
  Nameservers:
    Value: !Join [",", !GetAtt BlogDomain.NameServers]

As a result of running our change set we can see:

One more thing: we repeat the domain name pawelpikula.pl in a couple of places. It would be nice to create a global constant out of it, or even allow passing it as a parameter, so that we have a generic template for static website hosting infrastructure. Of course it is possible; we just need to add a Parameters section on the same level as Resources and Outputs. Having this, we need to replace all occurrences of our domain with a reference to the newly introduced parameter.

Parameters:
  RootDomainName:
    Description: Domain name for our website
    Type: String
    Default: "pawelpikula.pl"

Resources:
  [...]
  BlogRecordSet:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      Comment: 'Redirect to static website hosting'
      HostedZoneId: !Ref 'BlogDomain'
      Type: A
      Name: !Ref RootDomainName
      AliasTarget:
        DNSName: 's3-website.eu-central-1.amazonaws.com.'
        HostedZoneId: 'Z21DNDUVLTQW6Q'
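With the parameter in place, the same template can serve a different website without editing it; from the AWS CLI the default can be overridden at stack creation time (the stack name and domain here are just examples):

```shell
# Reuse the generic template for another domain by overriding the parameter
aws cloudformation create-stack --stack-name other-site \
    --template-body file://template.yml \
    --parameters ParameterKey=RootDomainName,ParameterValue=example.com
```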

Looks pro, doesn’t it? Let’s get back to the original purpose of doing all of this - I want to migrate my infrastructure from the Frankfurt region to Ireland, as Ireland has many more services available.

To do that, we just need to switch the region in the UI and run our template, right?

WRONG!

We still have hard-coded stuff that points to eu-central-1:

      DNSName: 's3-website.eu-central-1.amazonaws.com.'
      HostedZoneId: 'Z21DNDUVLTQW6Q'

How can we tackle this? We could create 2 input params and pass them with every CloudFormation run. Doesn’t sound like the best solution, does it? Of course there is a better way of doing such things. Let me introduce a new top-level section - Mappings. More or less, mappings work as global lookup tables.

Mappings:
  RegionMap:
    us-east-1:
      S3hostedzoneID: Z3AQBSTGFYJSTF
      websiteendpoint: s3-website-us-east-1.amazonaws.com
    us-west-1:
      S3hostedzoneID: Z2F56UZL2M1ACD
      websiteendpoint: s3-website-us-west-1.amazonaws.com
    eu-central-1:
      S3hostedzoneID: Z21DNDUVLTQW6Q
      websiteendpoint: s3-website.eu-central-1.amazonaws.com

[...]
      DNSName: !FindInMap [RegionMap, !Ref 'AWS::Region', websiteendpoint]
      HostedZoneId: !FindInMap [RegionMap, !Ref 'AWS::Region', S3hostedzoneID]

As you can see, we created a map which stores values per region. While running the template, we can use !Ref 'AWS::Region' to get the current region, and with it we can access the values from the map.

API Resources

Now that you are a CloudFormation expert, we can tackle the non-static part, that is, a Lambda function and the corresponding API exposed via API Gateway. To do that, we can go through the CloudFormation documentation and create something like the code snippet pasted below. However, there is an easier, also declarative, way of doing that - but more about it later; let’s learn the hard way first!

  BlogLamdbaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "lambda.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        -
          PolicyName: "root"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              -
                Effect: "Allow"
                Action: ["logs:CreateLogGroup"]
                Resource: "arn:aws:logs:*:*:*"
              -
                Effect: "Allow"
                Action: ["logs:CreateLogStream", "logs:PutLogEvents"]
                Resource: "arn:aws:logs:*:*:log-group:/aws/lambda/ContactFormEmail:*"
              -
                Effect: "Allow"
                Action: ["sns:Publish"]
                Resource: !Ref SNSSendEmailTopic
  SendEmailLambda:
    Type: 'AWS::Lambda::Function'
    Properties:
      Code:
        S3Bucket: 'lambdas.pawelpikula.pl'
        S3Key: 'lambda.zip'
      Handler: 'Lambda.lambda_handler'
      FunctionName: "ContactFormEmail"
      Role: !GetAtt [BlogLamdbaRole, "Arn"]
      Runtime: 'python2.7'
      Tags:
        - Key: 'client'
          Value: 'pawel'
  BlogLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt [SendEmailLambda, "Arn"]
      Action: "lambda:InvokeFunction"
      Principal: "apigateway.amazonaws.com"
      SourceArn: !Join ["", ["arn:aws:execute-api:", !Ref "AWS::Region", ":", !Ref "AWS::AccountId", ":", !Ref "RestApi", "/*/POST/message"]]
  RestApi:
    Type: "AWS::ApiGateway::RestApi"
    Properties:
      Name: "Personal blog notification"
  SendMessageAPIResource:
    Type: "AWS::ApiGateway::Resource"
    Properties:
      RestApiId: !Ref RestApi
      ParentId: !GetAtt RestApi.RootResourceId
      PathPart: "message"
  SendMessagePost:
    Type: "AWS::ApiGateway::Method"
    Properties:
      HttpMethod: "POST"
      ResourceId: !Ref SendMessageAPIResource
      RestApiId: !Ref RestApi
      AuthorizationType: "NONE"
      MethodResponses:
        - StatusCode: 200
          ResponseModels:
            application/json: "Empty"
        - StatusCode: 302
          ResponseParameters:
            method.response.header.Location: true
          ResponseModels:
            application/json: "Empty"
        - StatusCode: 400
          ResponseModels:
            application/json: "Error"
      Integration:
        Type: "AWS"
        IntegrationHttpMethod: "POST"
        PassthroughBehavior: "NEVER"
        Uri: !Join ["", ["arn:aws:apigateway:", !Ref "AWS::Region", ":lambda:path/2015-03-31/functions/", !GetAtt ["SendEmailLambda", "Arn"], "/invocations"]]
        IntegrationResponses:
          - StatusCode: 200
          - StatusCode: 400
            SelectionPattern: "exception"
          - StatusCode: 302
            SelectionPattern: "http://.*"
            ResponseParameters:
              method.response.header.Location: "integration.response.body.errorMessage"
        RequestTemplates:
          application/x-www-form-urlencoded: |
            {
                "data": {
                    #foreach( $token in $input.path('$').split('&') )
                        #set( $keyVal = $token.split('=') )
                        #set( $keyValSize = $keyVal.size() )
                        #if( $keyValSize >= 1 )
                            #set( $key = $util.urlDecode($keyVal[0]) )
                            #if( $keyValSize >= 2 )
                                #set( $val = $util.urlDecode($keyVal[1]) )
                            #else
                                #set( $val = '' )
                            #end
                            "$key": "$val"#if($foreach.hasNext),#end
                        #end
                    #end
                }
             }
    DependsOn: ["RestApi"]
  RestApiDeployment:
    Type: "AWS::ApiGateway::Deployment"
    Properties:
      RestApiId: !Ref RestApi
      StageName: "v1"
    DependsOn: ["SendMessagePost"]
  SNSSendEmailTopic:
    Type: "AWS::SNS::Topic"
    Properties:
      DisplayName: "BlogNotification"
      TopicName: "BlogNotification"
      Subscription:
        -
          Endpoint: "xxxx@gmail.com"
          Protocol: "email"
Outputs:
  RootUrl:
    Description: Root URL for api gateway
    Value: !Join ["", ["https://", !Ref RestApi, ".execute-api.", !Ref "AWS::Region", ".amazonaws.com"]]
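The trickiest part of the template above is the request mapping template: a chunk of VTL that converts the contact form’s application/x-www-form-urlencoded body into the JSON document the Lambda expects. For illustration only (this code is not used anywhere in the stack), it does roughly what this Python sketch shows:

```python
from urllib.parse import unquote_plus

def form_to_lambda_event(body):
    """Mimic the VTL mapping: 'name=Pawel&msg=hi' -> {'data': {...}}."""
    data = {}
    for token in body.split('&'):
        key_val = token.split('=')
        if len(key_val) >= 1 and key_val[0]:
            key = unquote_plus(key_val[0])
            # A key without '=value' becomes an empty string,
            # like the #else branch in the VTL template
            val = unquote_plus(key_val[1]) if len(key_val) >= 2 else ''
            data[key] = val
    return {'data': data}
```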

This is not the most beautiful thing, is it? We just dumped our UI configuration into a YAML file, and it still looks complicated, especially in the context of our API, which is dead simple. A better way of doing it is called SAM - the Serverless Application Model. The spec for this goodie can be found here. It is an extension to CloudFormation that allows writing a serverless app definition in a more compact and clear way. I’ll talk about it more in the next post, together with client code generation.
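As a teaser, here is a sketch of roughly the same function expressed in SAM (the exact property values are my assumption; SAM expands such a resource into the Lambda function, its role and the API Gateway wiring for us):

```yaml
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Resources:
  SendEmailFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: Lambda.lambda_handler
      Runtime: python2.7
      CodeUri: s3://lambdas.pawelpikula.pl/lambda.zip
      Events:
        SendMessage:
          Type: Api
          Properties:
            Path: /message
            Method: post
```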

Bitbucket pipelines

As I’m hosting this blog’s code in a Bitbucket private repo, I wanted to play with their CI/CD offering called Bitbucket Pipelines. Like most “CI as a service” tools, Pipelines is configured via a file in the repo, and builds can be triggered by every commit on a monitored branch. In the config file we describe our environment (a container) and the steps needed to test and deploy the project.

My config can be found below:

image: ruby:2.3.3

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - apt-get update && apt-get install -y python-pip python-dev
            - pip install awscli
            - bundler --version
            - bundle install
            - bundle exec jekyll build --config _config.yml # build static content
            - ./sync.sh # upload it to s3

Let’s take a closer look at the last step, sync.sh.

#!/bin/bash
aws s3 sync _site/  s3://pawelpikula.pl/ --storage-class REDUCED_REDUNDANCY --acl public-read

This is simply an AWS CLI command that copies the content of the _site directory to the given S3 bucket, setting the correct access rights and a cheaper storage class (Reduced Redundancy). I guess the question that comes to your mind is: how does the AWS CLI know my account and how does it get permission to make such changes? The answer is: I created a special IAM user that has only programmatic access and is allowed to write only to my bucket. There are several ways to pass credentials to the AWS CLI, but in this case I think the most secure is to use ENV vars in the pipeline configuration. Take a look at the picture below:
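Such a write-only-to-one-bucket policy could look roughly like this (a sketch; the exact action list is my assumption based on what aws s3 sync needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::pawelpikula.pl"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::pawelpikula.pl/*"
    }
  ]
}
```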

If I wanted to call CloudFormation here, I’d need to extend my user’s permissions and extend the script with stack creation/update. I don’t want to do that, as it would grant a lot of permissions, and in my case it will be done very rarely, so I can call it from my laptop. However, in bigger installations it might be useful, especially if you use CloudFormation to spin up environments for integration tests.
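For the record, the call from the laptop can be a one-liner; the deploy subcommand creates the stack if it doesn’t exist and updates it via a change set otherwise (the stack name is my choice, and CAPABILITY_IAM is needed because the template creates an IAM role):

```shell
aws cloudformation deploy --template-file template.yml \
    --stack-name blog --capabilities CAPABILITY_IAM
```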

Summary

I hope I convinced you to try CloudFormation! First of all, it allows you to avoid unnecessary clicking in the UI, which can be irritating in bigger installations where you have a lot to create and it is hard to track all the IDs and dependencies. Thanks to CF, we have a single place where changes are introduced, and they can easily be reviewed by other team members before being applied. This approach is known as infrastructure as code and is part of the DevOps movement. Having a template, it is very easy to expand (by replication) the infrastructure to other regions or create a 1:1 QA environment in a matter of minutes, and the whole process is safer as it is not done by human hands.

In my case the templates are a bit verbose, but as I mentioned before, there is an extension called SAM that makes them more compact for this particular use case: serverless. Apart from that, there are other tools, for example Serverless.js, that expose a very simple “interface” hiding CloudFormation underneath, plus CLI commands used to upload the code. Serverless.js can also be used with other cloud providers.

Of course, using CloudFormation templates with other cloud providers is not possible - a classic vendor lock-in case. However, sticking to it has its benefits; for more information, take a look at Infrastructure Automation At Appliscale. VOL 1 and Infrastructure Automation – Part II – Cloudformation vs. Terraform.

In the next post I’ll focus on simplifying and securing the blog.
