Pipelines for AWS Lambda – Part 1: The Deployment Stack


Serverless applications are complex and AWS doesn’t do much to make setting up pipelines for them easy. I’m going to walk through how to use CloudFormation templates to configure multiple pipelines.


As I posted before, Tutorials are (Often) Bad and AWS tutorials are no exception. AWS has a tendency to use the CLI or console for tutorials. While this is a fine way to learn, you would never want to use these techniques outside of a sandbox environment. For production applications, you want to use a deployment pipeline. To create a pipeline to deploy to AWS, you need to configure a User with the permissions that the pipeline will need. However, you also should control the creation of Identity and Access Management (IAM) resources such as Users. This creates a “chicken and egg” situation: How do you allow your organization to manage creation of IAM resources which are required to create a deployment pipeline when you don’t have a deployment pipeline to help manage this activity? The short answer is CloudFormation.

While I love the concept of serverless applications, the development experience has always been a challenge. With the introduction of the AWS Serverless Application Model (SAM), things definitely got better, but it is still difficult to find good documentation, and SAM itself does not always follow what I consider AWS best practices. In this series of posts, I’ll walk through the creation of a simple REST API written in Node.js and hosted in AWS Lambda behind an API Gateway. I will highlight all of the various “gotchas” I stumbled on along the way. To keep things simpler for this example, I’m not going to be using containers to deploy my Lambda function. In this first post, I want to focus on using CloudFormation to set up the AWS resources required for your pipeline.

Creating the Deployment Stack

Right off the bat, when following the tutorial on setting up a SAM pipeline, I noticed that the very first step, sam pipeline bootstrap, creates resources in AWS. Thankfully, this command does use CloudFormation under the hood, but there is no way to specify a stack name, apply tags, etc., so I don’t understand why SAM doesn’t give you the option of just generating the CloudFormation template and executing it on your own. At least you can grab the template from the stack, which is what I have done in this gist.

The template creates the following resources:

  • A bucket to store pipeline artifacts (your Lambda code)
  • A policy to allow the pipeline to write to the bucket
  • A bucket to log activity on the artifacts bucket
  • A policy to allow the bucket activity to be logged
  • A role to be assumed when CloudFormation templates are executed
  • A role to be assumed by your pipeline
  • A policy to allow the pipeline to create and execute CloudFormation change sets, and to read and write artifacts to the bucket
  • A user granted the pipeline execution role
  • An access key for the pipeline user
  • A Secrets Manager entry to store the pipeline user credentials

That is a lot of resources and we aren’t even doing anything with Lambda yet. These are simply the resources required to run the pipeline.
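As a rough sketch, the first two resources in the list (the artifacts bucket and the policy granting the pipeline access to it) might look something like this in CloudFormation. The logical names here are illustrative, and PipelineExecutionRole is assumed to be defined elsewhere in the same template; the actual SAM-generated template is more involved:

```yaml
Resources:
  # Versioned S3 bucket that holds the packaged Lambda code between
  # pipeline stages.
  ArtifactsBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
  # Bucket policy allowing the pipeline's execution role to read and
  # write artifacts.
  ArtifactsBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ArtifactsBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: !GetAtt PipelineExecutionRole.Arn
            Action:
              - "s3:GetObject"
              - "s3:PutObject"
            Resource: !Sub "${ArtifactsBucket.Arn}/*"
```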

I modified the template to remove any container-related resources and added names to most of the resources. You can find this version in this gist.

You can run this template in the AWS console by going to CloudFormation -> Stacks, selecting Create stack -> With new resources (standard), choosing “Upload a template file”, and selecting the file saved from the gist. You must provide a stack name, but you can keep the default parameter values or enter your own unique identifier to be used in the resource names.

If you save the template from the gist as deployment-stack.yml, you can create the stack using the AWS CLI as follows:

$ aws cloudformation create-stack \
--stack-name aws-sam-demo \
--template-body file://$PWD/deployment-stack.yml \
--capabilities CAPABILITY_NAMED_IAM

Note that because the template creates named IAM resources, you must acknowledge this with the --capabilities flag. You will also need to specify --region if you have not already defined a default region in your local AWS settings.

Adding Secrets

Managing sensitive data can sometimes seem more complicated than necessary. Since we are building a pipeline with GitHub Actions, which supports its own secrets management, it may seem intuitive to use it to store all of your sensitive information. However, you should only use GitHub secrets (or any pipeline-based secure storage) for the credentials used to connect to AWS, not for information used by AWS. This is because we will be deploying with CloudFormation, and if you pass sensitive information via either a parameter or an environment variable, it will be visible as plain text in the CloudFormation stack configuration.
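For example, a GitHub Actions workflow step can read the AWS connection credentials from GitHub secrets using the aws-actions/configure-aws-credentials action. This is a sketch; the secret names and region are placeholders you would adapt to your setup:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Credentials come from GitHub secrets; only AWS connection
      # information belongs here, not values used inside AWS.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
```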

For secrets that will be controlled by AWS, you can add the secret to the CloudFormation template and just allow AWS to set the value (and potentially rotate the secret). Below is a CloudFormation template that can be used to create a Secrets Manager entry for a password generated by AWS.

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  SecretId:
    Type: String
    Default: DbSecret
Resources:
  DbSecret:
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: !Sub ${SecretId}
      GenerateSecretString:
        GenerateStringKey: "DB_PASSWORD"
        SecretStringTemplate: '{ "DB_USER": "admin" }'
        PasswordLength: 30

This will create a Secret named “DbSecret” in the format shown below:

{
  "DB_USER": "admin",
  "DB_PASSWORD": "[generated password goes here]"
}
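In application code, the secret arrives as a single JSON string to be parsed. Here is a minimal sketch; the secret_string value below is a placeholder standing in for what you would actually fetch at runtime (for example, with boto3’s get_secret_value), since AWS generates the real password:

```python
import json

# Placeholder for the SecretString returned by Secrets Manager, e.g.:
#   boto3.client("secretsmanager").get_secret_value(
#       SecretId="DbSecret")["SecretString"]
secret_string = '{"DB_USER": "admin", "DB_PASSWORD": "example-generated-password"}'

# The secret is stored as JSON, so parse it before use.
secret = json.loads(secret_string)
db_user = secret["DB_USER"]
db_password = secret["DB_PASSWORD"]
```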

For secrets that are defined outside of AWS (e.g., third-party API keys), you just need to create the Secret and then enter the sensitive values via the console or the CLI (aws secretsmanager put-secret-value). While this manual process may seem problematic, it can be secure as long as you control who can update the secrets.

Enabling API Gateway Logging

As described in the AWS documentation, the API Gateway service does not have permission to write to CloudWatch Logs by default. Thankfully, I found this gist:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ApiGatewayLoggingAccount:
    Type: "AWS::ApiGateway::Account"
    Properties:
      CloudWatchRoleArn: !GetAtt "ApiGatewayLoggingRole.Arn"
  ApiGatewayLoggingRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - "apigateway.amazonaws.com"
            Action: "sts:AssumeRole"
      Path: "/"
      ManagedPolicyArns:
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"

This template only needs to be executed once per AWS account (in each region where you deploy API Gateways), so you can run it on its own to enable logging for your API Gateways. Note that you can still control whether logging is enabled for any individual gateway; this just makes sure the service can write to the logs.


Conclusion

Before you can deploy a Lambda Function using AWS SAM, you need to create resources (primarily IAM resources). You can create these resources with sam pipeline bootstrap, but you won’t have much control over the details of the resources. Therefore, I recommend using a CloudFormation template that matches the template generated by SAM. This same template can be used over and over for multiple stacks.
