Sprint from Zero to Hero

Foxbox performs application development as a service for our clients as a core business offering. These applications are of a known scope, and therefore should be executed on a fixed timeline and within a fixed budget. Traditionally, this type of contracting work has used (and arguably gave birth to and fed) the “waterfall” approach to project management. Foxbox has embraced Scrum as our agile (i.e. non-waterfall) project management methodology. However, as an agency, part of our core value proposition is that we can accelerate the start of a project in ways our clients cannot – we can bring a team, infrastructure, and processes which allow us to deliver more value, faster. At Foxbox, this starts with Sprint Zero.

The Scrum Alliance provided the following working definition for Sprint Zero:

  1. Sprint Zero should be used to create the basic skeleton and plumbing for the project so that future sprints can truly add incremental value in an efficient way. It may involve some research spikes.
  2. Minimal design up front is done in Sprint Zero so that emergent design is possible in future sprints. This includes putting together a flexible enough framework so that refactoring is easy.
  3. For minimal design up front, the team picks up a very few critical stories and develops them to completion. Since these are the first few stories, delivering them includes putting the skeleton/framework in place, but even Sprint Zero delivers value.

This same article also identifies some things that are not necessarily part of Sprint Zero:

  1. Assembling the team.
  2. Setting up organization and logistics.
  3. Planning, product backlog setup, and design.

At Foxbox, we start every project with discovery. Ideally, this is a Design and Discovery service we are contracted to perform to help our clients refine their product and business vision. Sometimes this is part of our exploration process as we learn about a potential client and the work we are looking to perform. If we are unable to complete this work before we officially start an application development project, it is done in Sprint Zero. Regardless of how this Design and Discovery phase is coordinated, it satisfies the definition above and keeps us in line with Scrum’s tenets.

Even though every client is different, this is not our first rodeo (so to speak). We have already established processes that can be tailored and repeated. We have defined an organization structure which allows us to drive continuous improvement. In other words, we can maximize the value of the right side of the Agile Manifesto while continuing to value the items on the left more.

So what can we do during Sprint Zero that our clients can’t? We can assume our clients are capable of the things Sprint Zero should include, so let’s explore the items above that Sprint Zero does not have to include but definitely could.

Foxboxers Assemble!

So according to the Scrum Alliance, the team does not need to be assembled in Sprint Zero. But what if it is? Our ideal teams have a mix of experienced Foxboxers, newer joiners, and even some members who worked together on previous projects. This allows us to bring in the best of Foxbox and inject diverse viewpoints, providing the team empowerment that is key to successful Scrum. While we do value responding to change, our clients value predictable pricing based on a clearly-defined staffing plan. Setting this upfront gives our clients and our business a baseline that we can adjust for success as we execute the project.

The Line Between Disorder and Order

According to Sun Tzu, “the line between disorder and order lies in logistics.” Another way to say this is that logistics is the line between disorganization and organization. While it isn’t necessary to have an organization or logistics to start a Scrum project, plugging a team into an organization and having logistics from the start is a huge advantage. From the very beginning, we have tools like Slack, Zoom, Google Workspace, Miro, and Figma to help us in our Design and Discovery projects. During Sprint Zero, we can add tools like Jira, GitHub, and TestRail as well as best practices and automation around public cloud management. Our teams have support structures beyond their immediate project to answer questions around technology, processes, etc. Even when the client brings their own processes, tools, or organization, we are able to draw on our past experience and best practices and act as a trusted advisor to help make sure our joint project is successful.

Plans are Useless, but Planning is Indispensable

This quote from Dwight Eisenhower is one of my favorites when talking about Agile. In Agile, you don’t simply throw away the concept of planning entirely; you embrace the fact that no amount of planning can prepare you for the reality of what you will face. Again, our clients are trusting us to deliver some result for some fairly-predictable amount of money. We can be transparent about the risk around the scope of work, our understanding of it, our technical abilities, etc., but the client will always expect some degree of predictability. This requires at least some macro-level planning. We want to enter Sprint 1 with a high-level plan that we can measure our progress against. Ideally this will capture all of the major milestones, but more importantly it needs to allow us to add new scope and communicate the impact to our clients. All of our progress needs to be tracked against this plan so our client knows the definition of done for the project and where we are on the roadmap to get there.

TL/DR;

At Foxbox, our definition of Sprint Zero is simple: complete the work required to start Sprint 1. As a company, we already completed a “Foxbox Sprint Zero” to determine how we are organized, how we work, etc. We continuously improve that over time while adding our preferred tools and best practices. As a result, we can now follow a repeatable process to exit Sprint Zero at a higher level of readiness which means we can deliver more value in Sprint 1.

Pipelines for AWS Lambda – Part 4: Troubleshooting

TL/DR;

It’s best to test Lambda “inside-out” by first making sure the Lambda itself works, then the invocation (in this case API Gateway), then external access. Pipeline error logging and CloudWatch logging are your best friends for troubleshooting.

Background

In this series of posts, we walked through the steps for using the AWS Serverless Application Model (SAM) to set up a GitHub Actions pipeline that deploys serverless functions written in Node.js. Previous posts covered:

When building a SAM application, you have two choices for how you configure your API gateway: Api and HttpApi. You can review the AWS Documentation for a comparison of these two options. We will discuss techniques that apply to each of these options.

Linting

I can’t say this enough: when you are looking for problems with your application, look for horses, not zebras. The longer you spend trying to track down the cause of a problem without finding a solution, the more likely the answer is staring you right in the face. Very often, problems can be found with static code analysis (“linting”), specifically eslint if you are using Node.js. While working on the proof of concept for this post, I chased a bug way too long that was just an invalid reference inserted by my IDE. The error message pointed me to the exact line, but since it said it couldn’t find a reference (and I thought I hadn’t added any new references), I assumed there was something wrong with loading the dependencies. Since the code I was using for this series was so simple, I didn’t bother adding eslint. As soon as I did, I found the issue since it highlighted the unused reference introduced by the IDE. The moral of the story: use linting to find easily-fixed problems.

Unit Testing

I don’t want to get into a philosophical conversation about what is and is not a unit test. I might venture into that conversation another day. For the sake of this post, let’s consider “unit testing” analogous to “local testing” – any test that can be run outside of the AWS ecosystem. This way you can run the test in your local dev environment or in the pipeline. These tests are extremely important to successful development for microservices and the cloud. You need to be able to test the atomic transaction your Lambda is supposed to perform. The great part of Lambda is that you can invoke the code multiple ways. The same function can be invoked from an API Gateway like in our example or from an SQS queue, SNS topic, CloudWatch event, etc. If your code works, it should work across any use case. Of course, if you are integrating with other AWS services like S3 or need network connectivity, then the permissions and resources need to be configured correctly in AWS. However, none of this matters if there is a bug in your code. Test your code thoroughly and shoot for 100% code coverage, even if that means your “unit test” smells more like an “integration test” (ex: use docker run or docker-compose to spin up a database in a container to test CRUD transactions). Structure your code based on business logic and then have a handler function that only handles the routing of parameters from the event to your function(s). Then test that function, as sketched below.
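
To make that concrete, here is a minimal sketch of this structure. The file and function names are my own placeholders, not code from this series, and the test uses Jest purely as an example of a local test runner.

// greeting.js – pure business logic with no AWS dependencies, so it can be tested locally
exports.buildGreeting = (name) => ({ message: `hello ${name || 'world'}` });

// app.js – thin handler that only routes values from the event to the business logic
const { buildGreeting } = require('./greeting');

exports.lambdaHandler = async (event) => {
    const name = event.queryStringParameters && event.queryStringParameters.name;
    return {
        statusCode: 200,
        body: JSON.stringify(buildGreeting(name))
    };
};

// greeting.test.js – a plain Jest test that runs in your local environment or the pipeline
const { buildGreeting } = require('./greeting');

test('builds the default greeting', () => {
    expect(buildGreeting()).toEqual({ message: 'hello world' });
});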

Note that you can also run Lambda functions locally in a container. The easiest way to do this is with sam local invoke, which will use the information in your SAM template to create the Lambda function in a container. For this simple Hello World example, I think this is a perfectly valid technique. However, as you start adding other AWS services to your Lambda, you would need to extend any permissions needed to run the Lambda to an access key shared with the developer (i.e. the developer’s personal IAM user and role). In other words, you have to achieve all of the same security requirements in your local environment that need to be met in the AWS account. I would argue you are better off always running in a dev AWS account rather than locally. This might seem unnecessarily painful at first, but if you follow these other testing techniques, you are very unlikely to have issues. You will actually move more quickly since everything is developed and tested within the ecosystem of the pipeline and AWS account, so you don’t run into configuration problems caused by differences between the local and AWS environments.
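
If you do want to try the local container route, the invocation is a single command. Assuming the Hello World project from Part 2 of this series (the event file is the sample one created by sam init), it would look something like this:

$ sam local invoke HelloWorldFunction --event events/event.json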

Validating the SAM Template

Even though we are using SAM for the GitHub Actions pipeline, you can follow all of the steps in this series of posts without ever having to use the SAM CLI. This is by design. I am a firm believer that you should be able to develop for AWS using only code and your standard development tools and services, so these posts are intended to document a process that follows that belief. However, since we are using SAM for deployment, it is good to use SAM for local features where it makes sense. Before you can run SAM, you will need to make sure you have installed it as described in the AWS documentation. To validate your SAM template (template.yaml in our example), simply run sam validate. Note that you may need to specify your region with the --region option if you have not configured this in your default AWS configuration. If this is the case, sam validate will respond with this information. Below is an example of an error found using sam validate:

$ sam validate
2021-11-04 11:17:30 Loading policies from IAM...
2021-11-04 11:17:32 Finished loading policies from IAM.
Template provided at '/Users/doug/code/aws-sam-demo/template.yaml' was invalid SAM Template.
Error: [InvalidResourceException('LambdaNodeApi', "Invalid value for 'Cors' property.")] ('LambdaNodeApi', "Invalid value for 'Cors' property.")

In this example, I had used the AllowOrigin key inside the CorsConfiguration section for an HttpApi but the correct key is AllowOrigins. Note that this error does not point you to the exact line so it is important to review the exact syntax for the section referenced by the error.

If everything is good, you should see output that looks something like this:

$ sam validate
2021-11-04 10:58:12 Loading policies from IAM...
2021-11-04 10:58:14 Finished loading policies from IAM.
/Users/doug/code/aws-sam-demo/template.yaml is a valid SAM Template

Troubleshooting Lambda Function

Testing Invocation

You can test your Lambda function in the AWS console. Navigate to the function (you can start with the CloudFormation stack for your most recent deployment if you aren’t sure about the name of your function) and select the “Test” tab. The default test data will be based off of the “hello-world” template. This does NOT match the schema for a call from an API Gateway so you will most likely need to modify the data to match the values your code expects from the event parameter.

Finding information on the syntax of the event parameter for your Lambda function is surprisingly difficult since there are multiple ways to invoke a Lambda function and each option has its own unique schema for the event value. This matrix provided in the AWS documentation points to all of the various invocation methods. Since we are invoking our Lambdas from an API Gateway in our example, you might want to review the schema for the event as defined for API Gateway invocation provided in the AWS documentation.
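
To give a sense of the shape, below is a hedged, heavily trimmed sketch of an API Gateway proxy test event with just enough fields for the Hello World handler; a real event contains many more fields, so refer to the documentation linked above for the full schema.

{
  "httpMethod": "GET",
  "path": "/hello",
  "headers": {
    "Accept": "application/json"
  },
  "queryStringParameters": null,
  "body": null,
  "isBase64Encoded": false
}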

Debug Logging

By default, logs and metrics are enabled for all Lambda functions created with AWS SAM. To view the CloudWatch logs, simply navigate to the Lambda function, select the “Monitor” tab, and then click “View logs in CloudWatch”. Typically, there will be a unique log stream for each invocation of your function. Select the log stream to see the logs. Note that anything you write to the standard output (console.log in Node.js) is written to the CloudWatch log stream.
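
If you prefer the terminal, the AWS CLI (v2) can tail the same log group. The log group name below assumes the default /aws/lambda/<function name> convention; substitute the function name CloudFormation generated for your stack.

$ aws logs tail /aws/lambda/your-function-name --follow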

Note that even though the event schema is defined and documented, not all values are implemented for every configuration or use case. Therefore you may want to write the event value to the CloudWatch logs as follows:

  console.log(event);

Troubleshooting Api Option

Testing via AWS Console

The Api option supports testing directly in the AWS console. You can navigate to the API gateway (again, the CloudFormation stack is your friend here), select “Resources” from the menu, and then select a method (“GET” in our example). Then click the lightning bolt icon to access the test page. On this page, you can enter any content required for the request (path/query parameters, headers, body, etc.) and then click the “Test” button to test the API.

After you test the API, you will see the response status, body, headers, and log output on the right-hand side of the page. One important item in the log output is Endpoint request body after transformations. This will show you the value of the event parameter passed to your Lambda function. You can also see the return value of your Lambda displayed as Endpoint response body before transformations. If your response status or body isn’t what you expected, you should review your response object compared to the syntax as defined in the AWS documentation.
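
As a reference point, a hedged sketch of the proxy response shape the Api option expects back from your Lambda looks like this (headers and isBase64Encoded are optional):

return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'hello world' }),
    isBase64Encoded: false
};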

Troubleshooting HttpApi Option

Debug Logging

Before you can log API Gateway activity for APIs created with the HttpApi option, you have to enable CloudWatch at the account level. Review this gist, which you can add to the deployment stack template as we did in Part 1.

One key benefit of the HttpApi option is support for generic JWT authorizers, which are convenient if you are using a third-party authentication provider such as Auth0. The HttpApi supports a FailOnWarnings property which defaults to false. You can change this value to true as shown below:

  LambdaNodeApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      FailOnWarnings: true

Enabling this setting will provide information on “Warnings” which could actually be preventing AWS from creating resources required for your API to function. The example below shows a failure in the pipeline that occurred during sam deploy due to missing the Audience configuration for the JWT authorizer.

-------------------------------------------------------------------------------------------------
ResourceStatus           ResourceType             LogicalResourceId        ResourceStatusReason   
-------------------------------------------------------------------------------------------------
UPDATE_IN_PROGRESS       AWS::ApiGatewayV2::Api   LambdaNodeApi            -                      
UPDATE_FAILED            AWS::ApiGatewayV2::Api   LambdaNodeApi            Warnings found during  
                                                                           import:         CORS   
                                                                           Scheme is malformed,   
                                                                           ignoring.              
                                                                           Unable to create       
                                                                           Authorizer 'LambdaNode 
                                                                           Authorizer': Audience  
                                                                           list must have at      
                                                                           least 1 item for JWT   
                                                                           Authorizer. Ignoring.  
                                                                           Unable to put method   
                                                                           'GET' on resource at   
                                                                           path '/': Invalid      
                                                                           authorizer definition. 
                                                                           Setting the            
                                                                           authorization type to  
                                                                           JWT requires a valid   
                                                                           authorizer. Ignoring.  
                                                                           (Service:              
                                                                           AmazonApiGatewayV2;    
                                                                           Status Code: 400;      
                                                                           Error Code:            
                                                                           BadRequestException;   
                                                                           Request ID: 43a34e55-d 
                                                                           0d0-4ed2-8571-eb473e71 
                                                                           a9e2; Proxy: null)     
                                                                           (Service: null; Status 
                                                                           Code: 404; Error Code: 
                                                                           BadRequestException;   
                                                                           Request ID: null;      
                                                                           Proxy: null)           
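
In this case the fix was to supply the missing audience in the HttpApi Auth configuration. Below is a hedged sketch of what that section might look like; the authorizer name, issuer, and audience values are placeholders rather than the actual configuration from this project.

  LambdaNodeApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      FailOnWarnings: true
      Auth:
        DefaultAuthorizer: LambdaNodeAuthorizer
        Authorizers:
          LambdaNodeAuthorizer:
            JwtConfiguration:
              issuer: "https://your-tenant.us.auth0.com/"
              audience:
                - "https://your-api-identifier"
            IdentitySource: "$request.header.Authorization"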

Remote Testing

The final phase of testing is to actually execute the API “in the field”. This can be done using a tool such as Postman or curl.
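
For example, assuming the endpoint URL from your stack’s outputs and a JWT authorizer, a curl call might look like this (the URL and token are placeholders):

$ curl -i https://abc123.execute-api.us-east-1.amazonaws.com/hello \
  -H "Authorization: Bearer [JWT VALUE GOES HERE]"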

What Do You Mean CloudFront Error?

One error to watch out for when testing your function through the API Gateway is a CloudFront error. This may seem like a strange error since nowhere in this series of posts do we mention CloudFront, but I have a skill for finding strange errors that aren’t easy to find solutions for. I invoked a function that supported path parameters but passed in an invalid path and a JWT for an authorizer that was also not expected. The body of the response looked like this:

{
    "message": "'[JWT VALUE GOES HERE]' not a valid key=value pair (missing equal-sign) in Authorization header: 'Bearer [JWT VALUE GOES HERE]'."
}

More confusing was the X-Cache header in the response which stated “Error from cloudfront”. My invalid request was being blocked by CloudFront which sits in front of your API Gateway as part of the AWS infrastructure. This was particularly difficult to discover since CloudFront was blocking the API from being called so even once I enabled logging for my API Gateway built on the HttpApi option, I was still not seeing any activity (or error).

Summary

I recommend an “inside-out” approach to troubleshooting as follows:

  • Use static code analysis or “linting”.
  • Structure your code based on business logic and shoot for 100% code coverage testing this code.
  • Validate your SAM template with sam validate.
  • Test your Lambda function in the AWS console.
  • Use debug logging (possibly logging the event parameter) to troubleshoot the Lambda.
  • Test APIs created using the Api option using the AWS console.
  • Use FailOnWarnings and enable CloudWatch logs to find issues with the HttpApi option.
  • Test from outside AWS with a tool such as curl or Postman.
  • CloudFront errors usually mean you are sending a request that is way off target for your API Gateway (probably calling the wrong API).

Pipelines for AWS Lambda – Part 3: The Pipeline

TL/DR;

You can create a GitHub Actions pipeline with sam pipeline init, but it will be configured for Python and for feature branches whose names start with “feature”.

GitHub Action Pipeline

The next step of the tutorial is to run sam pipeline init. Unlike sam pipeline bootstrap, this command does not deploy resources directly to AWS. In the example below, I entered placeholders for the ARNs for the resources created in Part 1, but you can enter these ARNs when configuring your pipeline.

$ sam pipeline init

sam pipeline init generates a pipeline configuration file that your CI/CD system
can use to deploy serverless applications using AWS SAM.
We will guide you through the process to bootstrap resources for each stage,
then walk through the details necessary for creating the pipeline config file.

Please ensure you are in the root folder of your SAM application before you begin.

Select a pipeline structure template to get started:
Select template
	1 - AWS Quick Start Pipeline Templates
	2 - Custom Pipeline Template Location
Choice: 1

Cloning from https://github.com/aws/aws-sam-cli-pipeline-init-templates.git
CI/CD system
	1 - Jenkins
	2 - GitLab CI/CD
	3 - GitHub Actions
	4 - AWS CodePipeline
Choice: 3
You are using the 2-stage pipeline template.
 _________    _________ 
|         |  |         |
| Stage 1 |->| Stage 2 |
|_________|  |_________|

Checking for existing stages...

[!] None detected in this account.

To set up stage(s), please quit the process using Ctrl+C and use one of the following commands:
sam pipeline init --bootstrap       To be guided through the stage and config file creation process.
sam pipeline bootstrap              To specify details for an individual stage.

To reference stage resources bootstrapped in a different account, press enter to proceed []: 

This template configures a pipeline that deploys a serverless application to a testing and a production stage.

What is the GitHub secret name for pipeline user account access key ID? [AWS_ACCESS_KEY_ID]: 
What is the GitHub Secret name for pipeline user account access key secret? [AWS_SECRET_ACCESS_KEY]: 
What is the git branch used for production deployments? [main]: 
What is the template file path? [template.yaml]: 
We use the stage name to automatically retrieve the bootstrapped resources created when you ran `sam pipeline bootstrap`.


What is the name of stage 1 (as provided during the bootstrapping)?
Select an index or enter the stage name: Build
What is the sam application stack name for stage 1? [sam-app]: build-stack
What is the pipeline execution role ARN for stage 1?: pipeline-execution-arn
What is the CloudFormation execution role ARN for stage 1?: clouformation-execution-arn
What is the S3 bucket name for artifacts for stage 1?: build-bucket
What is the ECR repository URI for stage 1? []: 
What is the AWS region for stage 1?: us-east-1
Stage 1 configured successfully, configuring stage 2.


What is the name of stage 2 (as provided during the bootstrapping)?
Select an index or enter the stage name: deploy
What is the sam application stack name for stage 2? [sam-app]: deploy-stack
What is the pipeline execution role ARN for stage 2?: pipeline-execution-arn
What is the CloudFormation execution role ARN for stage 2?: clouformation-execution-arn
What is the S3 bucket name for artifacts for stage 2?: deploy-bucket
What is the ECR repository URI for stage 2? []: 
What is the AWS region for stage 2?: us-east-1
Stage 2 configured successfully.

SUMMARY
We will generate a pipeline config file based on the following information:
	What is the GitHub secret name for pipeline user account access key ID?: AWS_ACCESS_KEY_ID
	What is the GitHub Secret name for pipeline user account access key secret?: AWS_SECRET_ACCESS_KEY
	What is the git branch used for production deployments?: main
	What is the template file path?: template.yaml
	What is the name of stage 1 (as provided during the bootstrapping)?
Select an index or enter the stage name: Build
	What is the sam application stack name for stage 1?: build-stack
	What is the pipeline execution role ARN for stage 1?: pipeline-execution-arn
	What is the CloudFormation execution role ARN for stage 1?: clouformation-execution-arn
	What is the S3 bucket name for artifacts for stage 1?: build-bucket
	What is the ECR repository URI for stage 1?: 
	What is the AWS region for stage 1?: us-east-1
	What is the name of stage 2 (as provided during the bootstrapping)?
Select an index or enter the stage name: deploy
	What is the sam application stack name for stage 2?: deploy-stack
	What is the pipeline execution role ARN for stage 2?: pipeline-execution-arn
	What is the CloudFormation execution role ARN for stage 2?: clouformation-execution-arn
	What is the S3 bucket name for artifacts for stage 2?: deploy-bucket
	What is the ECR repository URI for stage 2?: 
	What is the AWS region for stage 2?: us-east-1
Successfully created the pipeline configuration file(s):
	- .github/workflows/pipeline.yaml

This will create the GitHub pipeline configuration which I have captured in this gist.

Configuring Runtime Platform

Using the default template creates a pipeline to build a Python app. So the first change I made was to replace these lines, which configure the Python actions:

      - uses: actions/setup-python@v2

with the node configuration as follows (note the version specification):

      - uses: actions/setup-node@v2
        with:
          node-version: '14'

Configuring Branch Naming

Another minor issue with the generated pipeline is that it assumes a certain convention for naming branches where all feature branches start with “feature”. Typically for my open source projects, I just use the GitHub issue number and title as my branch name (so something like 123-my-issue-title). Therefore I modified the branch filters at the top of the pipeline configuration as follows:

on:
  push:
    branches:
      - 'main'
      - '[0-9]+**'

Then I modified the build-and-deploy-feature stage as follows so it runs on any branch other than main:

  build-and-deploy-feature:
    # this stage is triggered only for feature branches (feature*),
    # which will build the stack and deploy to a stack named with branch name.
    if: github.ref != 'refs/heads/main'

A similar change was required for delete-feature since this runs only in feature branches. Notice that the condition looks at github.event.ref and not github.ref as shown in the previous change.

  delete-feature:
    if: github.event.ref != 'refs/heads/main' && github.event_name == 'delete'

Finally, this naming convention breaks sam deploy since it uses a CloudFormation stack name that matches the branch name. Because the stack name cannot start with a number, I added a “feature-” prefix to the stack name in the build-and-deploy-feature stage as shown:

      - name: Deploy to feature stack in the testing account
        shell: bash
        run: |
          sam deploy --stack-name feature-$(echo ${GITHUB_REF##*/} | tr -cd '[a-zA-Z0-9-]') \
            --capabilities CAPABILITY_IAM \
            --region ${TESTING_REGION} \
            --s3-bucket ${TESTING_ARTIFACTS_BUCKET} \
            --no-fail-on-empty-changeset \
            --role-arn ${TESTING_CLOUDFORMATION_EXECUTION_ROLE}

Summary

The pipeline configuration created by sam pipeline init is fairly comprehensive. It handles creating a unique deployment stack for feature branches, deleting those stacks when the branch is deleted, and a multi-phase deployment for production which includes integration tests. Unfortunately, this pipeline defaults to Python, so we have to update it for Node.js or whatever platform you prefer. Also, it assumes all feature branches are prefixed with “feature”, so we need to modify the template if we are not following this convention.

Pipelines for AWS Lambda – Part 2: The Code

TL/DR;

One of the great things about AWS Lambda is that you can write your code and deploy without worrying about the hosting environment (kind of). So let’s talk about what that code should look like so you really don’t have to worry.

Background

As I mentioned in my previous post, the AWS Serverless Application Model (SAM) has made (some) things better about developing serverless functions in AWS Lambda. We are going to create a fairly basic Hello World API. The code itself is relatively simple, but Lambda only works when deployed with all of the correct resources and permissions linked correctly. Using SAM, we will deploy the Lambda function and an API gateway. The resources and permissions for this initial implementation are pretty simple, but there are still mistakes that can be made, so I’ll walk through the troubleshooting steps.

Creating the Lambda Code

Before we talk about deployment, we need to have some code to deploy. To make sure we capture all of the things we need for our function to work, we are just going to scaffold a new project using sam init. There is a large collection of starter templates maintained by AWS, and SAM uses this repository to scaffold new projects. Below are the selections I used to generate a “hello world” project in Node.js:

$ sam init
Which template source would you like to use?
	1 - AWS Quick Start Templates
	2 - Custom Template Location
Choice: 1
What package type would you like to use?
	1 - Zip (artifact is a zip uploaded to S3)	
	2 - Image (artifact is an image uploaded to an ECR image repository)
Package type: 1

Which runtime would you like to use?
	1 - nodejs14.x
	2 - python3.9
	3 - ruby2.7
	4 - go1.x
	5 - java11
	6 - dotnetcore3.1
	7 - nodejs12.x
	8 - nodejs10.x
	9 - python3.8
	10 - python3.7
	11 - python3.6
	12 - python2.7
	13 - ruby2.5
	14 - java8.al2
	15 - java8
	16 - dotnetcore2.1
Runtime: 1

Project name [sam-app]: sam-test-node

Cloning from https://github.com/aws/aws-sam-cli-app-templates

AWS quick start application templates:
	1 - Hello World Example
	2 - Step Functions Sample App (Stock Trader)
	3 - Quick Start: From Scratch
	4 - Quick Start: Scheduled Events
	5 - Quick Start: S3
	6 - Quick Start: SNS
	7 - Quick Start: SQS
	8 - Quick Start: Web Backend
Template selection: 1

    -----------------------
    Generating application:
    -----------------------
    Name: sam-test-node
    Runtime: nodejs14.x
    Dependency Manager: npm
    Application Template: hello-world
    Output Directory: .
    
    Next steps can be found in the README file at ./sam-test-node/README.md

You can view the template code in GitHub to see what is created. Let’s walk through each file.

Function Code

In the hello-world folder, you will find app.js. This file contains all of the code required for the function. There is some commented-out code that requires axios for making a simple HTTP call, but the active code does not have any dependencies, so if you simply upload this code into a new Lambda function and test it via the AWS console, you will get a simple output message looking like this:

{ "message": "hello world" }

The full code for the function is below:

// const axios = require('axios')
// const url = 'http://checkip.amazonaws.com/';
let response;

/**
 *
 * Event doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format
 * @param {Object} event - API Gateway Lambda Proxy Input Format
 *
 * Context doc: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html 
 * @param {Object} context
 *
 * Return doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html
 * @returns {Object} object - API Gateway Lambda Proxy Output Format
 * 
 */
exports.lambdaHandler = async (event, context) => {
    try {
        // const ret = await axios(url);
        response = {
            'statusCode': 200,
            'body': JSON.stringify({
                message: 'hello world',
                // location: ret.data.trim()
            })
        }
    } catch (err) {
        console.log(err);
        return err;
    }

    return response
};

There isn’t much code here, but what is here is very important. First off, the lambdaHandler function is exposed as a static function, meaning you do not need to create an instance of a class to invoke the function. This is important because this is how Lambda expects to invoke the handler, so when you specify the handler in the Lambda configuration, it must point to a static function.

Also notice that the handler function is marked async. If you do not specify an async function, a third parameter named callback will be passed to your handler and you will need to invoke this callback as shown in the AWS documentation.
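
For completeness, a hedged sketch of an equivalent non-async handler using the callback parameter might look like this:

exports.lambdaHandler = (event, context, callback) => {
    // The first argument is an error (or null); the second is the response object
    callback(null, {
        statusCode: 200,
        body: JSON.stringify({ message: 'hello world' })
    });
};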

Note: The event parameter varies based on the type of invocation and documentation is not as thorough as you would think. If you write the parameter out with console.log(event), you can see the contents in the CloudWatch log for the Lambda.

Note that the error handler in this code returns any error caught by the Lambda handler. This allows Lambda to log the invocation as an error. If your Lambda returns a valid response with an error statusCode value (ex: 500), it will still be logged as a successful invocation since the Lambda itself did not fail.

SAM Template

The next file generated by sam init is the template.yaml file which is also placed in the root folder. This template is similar to CloudFormation and in fact can contain most CloudFormation syntax. However, SAM provides simplified syntax and linkage for creating serverless applications. Let’s take a look at the file generated when I ran sam init.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-test-node

  Sample SAM Template for sam-test-node
  
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn

Note: We don’t specify a name for our Lambda function or API Gateway. When we deploy using SAM, we provide a stack name that is used for the CloudFormation stack but also carried to other resources for consistent naming. This allows us to identify resources created for testing purposes based off of the branch they were created from.

The first meaningful section of the template is the Globals configuration. This allows you to specify – you guessed it – global information that applies to all resources. In this example, the timeout is set to 3 seconds for all Lambda functions. This just happens to be the default, but you can set any default values here you want to apply for all functions. Since we only have one function in this template, we could have just as easily placed this Timeout key in the Properties section of the Lambda function configuration, but it is placed in Globals as an example.

The second important section is the Resources section. Even though there is only one resource specified, SAM will actually create 2 resources: the Lambda function and the API Gateway. The deployment process will also create a third resource: a Lambda Application which will provide information on all of the resources, deployments, and Lambda invocations all in one place.

The first key under Resources is HelloWorldFunction. This is a logical ID that can be used to reference the function in other parts of the template. The Type key specifies that this is a Lambda function, and the Properties key contains all of the configuration for the function (see the AWS documentation for more options for configuring a function). The CodeUri key is optional and defines the base path for your code and, as I mentioned before, the Handler key points to the static function in your Lambda code. If you define multiple Lambda functions in one template and all of your code is in a folder such as src or bin, you can define the CodeUri in the Globals section and have it apply to all of your functions. Otherwise, you can simply include the path in the Handler key like hello-world/app.lambdaHandler and remove the CodeUri key. The Runtime key allows you to specify a specific version of your runtime. I’m using Node.js version 14 in this series of posts, but you can find the list of supported runtimes in the AWS documentation.
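
As a hedged illustration of that option (this is not part of the generated template), moving CodeUri into Globals might look like the snippet below if all of your function code lived under a src folder:

Globals:
  Function:
    Timeout: 3
    CodeUri: src/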

The Events section under the Lambda function is where the most significant SAM magic happens. Once again we provide a logical key (HelloWorld), give it a Type value (Api), and then configure the resource with Properties. In this example we set the Path of the API to /hello and the Method to get. Under the hood, there is a lot more going on here. SAM does all of this based on just these 2 entries:

  • Creates an API Gateway
  • Creates a Route with path /hello and HTTP method GET
  • Creates an integration between the /hello route and the Lambda function
  • Creates a Prod deployment stage for the API Gateway (the stage you can see in the output URL)

Summary

In this post, we created code using sam init. The two most important files created by this code are the lambda function code and the SAM template (template.yaml). The code generated by SAM is obviously just a placeholder and will require significant editing. The SAM template is very important to how your function is deployed, but we are sticking with the basic Lambda “Hello World” example with a REST API.

Pipelines for AWS Lambda – Part 1: The Deployment Stack

TL/DR;

Serverless applications are complex and AWS doesn’t do much to make setting up pipelines for them easy. I’m going to walk through how to use CloudFormation templates to configure multiple pipelines.

Background

As I posted before, Tutorials are (Often) Bad and AWS tutorials are no exception. AWS has a tendency to use the CLI or console for tutorials. While this is a fine way to learn, you would never want to use these techniques outside of a sandbox environment. For production applications, you want to use a deployment pipeline. To create a pipeline to deploy to AWS, you need to configure a User with the permissions that the pipeline will need. However, you also should control the creation of Identity and Access Management (IAM) resources such as Users. This creates a “chicken and egg” situation: How do you allow your organization to manage creation of IAM resources which are required to create a deployment pipeline when you don’t have a deployment pipeline to help manage this activity? The short answer is CloudFormation.

While I love the concept of serverless applications, the development experience has always been a challenge. With the introduction of the AWS Serverless Application Model (SAM), things definitely got better, but it is still difficult to find good documentation, and SAM itself does not always follow what I consider AWS best practices. In this series of posts, I’ll walk through the creation of a simple REST API written in Node.js and hosted in AWS Lambda behind an API Gateway. I will highlight all of the various “gotchas” I stumbled on along the way. To keep things simpler for this example, I’m not going to be using containers to deploy my Lambda function. In this first post, I want to focus on using CloudFormation to set up the AWS resources required for your pipeline.

Creating the Deployment Stack

So right off the bat, when trying to follow the tutorial on setting up a SAM pipeline, I noticed that the very first step, sam pipeline bootstrap, created resources in AWS. Thankfully this command does use CloudFormation. There is no way to specify a stack name, apply tags, etc., so I don’t understand why SAM doesn’t give you the option of just creating the CloudFormation template and then executing it on your own. At least you can grab the template from the stack, which is what I have done in this gist.

The template creates the following resources:

  • A bucket to store pipeline artifacts (your Lambda code)
  • A policy to allow the pipeline to write to the bucket
  • A bucket to log activity on the artifacts bucket
  • A policy to allow the bucket activity to be logged
  • A role to be assumed when CloudFormation templates are executed
  • A role to be assumed by your pipeline
  • A policy to allow the pipeline to create and execute CloudFormation change sets, and to read and write artifacts to the bucket
  • A user granted the pipeline execution role
  • An access key for the pipeline user
  • A Secrets Manager entry to store the pipeline user credentials

That is a lot of resources and we aren’t even doing anything with Lambda yet. These are simply the resources required to run the pipeline.

I modified the template to remove any container-related resources and added names to most of the resources. You can find this version in this gist.

You can run this template in the AWS console by going to CloudFormation->Stacks, selecting Create stack->With new resources (standard), choosing “Upload a template file”, and selecting the file saved from the gist. You must provide a stack name, but you can leave the default parameter values or enter your own unique identifier to be used for the resource names.

If you save the template from the gist as deployment-stack.yml, you can create the stack using the AWS CLI as follows:

$ aws cloudformation create-stack \
--stack-name aws-sam-demo \
--capabilities CAPABILITY_AUTO_EXPAND \
--template-body file:///$PWD/deployment-stack.yml

Note you will need to also specify --region if you have not already defined a default region in your local AWS settings.

Adding Secrets

Managing sensitive data can seem more complicated than necessary sometimes. Since we are building a pipeline with GitHub Actions which supports its own Secrets management, it may seem intuitive to use this to store all of your sensitive information. However, you should only use GitHub secrets (or any pipeline-based secure storage) to store information about connecting to AWS and not for information used by AWS. This is because we will be using CloudFormation to deploy to AWS and if you pass sensitive information via either a parameter or environment variable, it will be visible as plain text in the CloudFormation stack configuration.

For secrets that will be controlled by AWS, you can add the secret to the CloudFormation template and just allow AWS to set the value (and potentially rotate the secret). Below is a CloudFormation template that can be used to create a Secrets Manager entry for a password generated by AWS.

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  SecretId:
      Type: String
      Default: DbSecret
Resources:
  PostgresSecret:
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: !Sub ${SecretId}
      GenerateSecretString:
        GenerateStringKey: "DB_PASSWORD"
        SecretStringTemplate: '{ "DB_USER": "admin" }'
        PasswordLength: 30

This will create a Secret named “DbSecret” in the format shown below:

{
  "DB_USER": "admin",
  "DB_PASSWORD": "[generated password goes here]
}

For secrets that are defined outside of AWS (ex: third-party API keys), you need to just create the Secret and then enter the sensitive values either via the console or CLI. While this manual process may seem problematic, it can be secure as long as you manage who can update the secrets.
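
For example, once the empty Secret exists, the value could be set from the CLI roughly like this (the secret name and payload are placeholders):

$ aws secretsmanager put-secret-value \
--secret-id ThirdPartyApiKey \
--secret-string '{ "API_KEY": "paste-the-real-key-here" }'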

Enabling API Gateway Logging

As described in the AWS Documentation, the API Gateway service does not have access to write to CloudWatch logs by default. Thankfully I found this gist:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ApiGwAccountConfig:
    Type: "AWS::ApiGateway::Account"
    Properties:
      CloudWatchRoleArn: !GetAtt "ApiGatewayLoggingRole.Arn"
  ApiGatewayLoggingRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - "apigateway.amazonaws.com"
            Action: "sts:AssumeRole"
      Path: "/"
      ManagedPolicyArns:
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"

This template only needs to be executed once for any AWS account so you can run this template on its own to enable logging for your API Gateways. Note that you can still control whether logging is enabled for any gateway. This just makes sure the service can write to logs.
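
Because this stack creates an IAM role, running it from the CLI requires the IAM capability flag. Assuming you saved the gist as api-gateway-logging.yml, it would look something like this:

$ aws cloudformation create-stack \
--stack-name api-gateway-logging \
--capabilities CAPABILITY_IAM \
--template-body file:///$PWD/api-gateway-logging.yml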

Summary

Before you can deploy a Lambda Function using AWS SAM, you need to create resources (primarily IAM resources). You can create these resources with sam pipeline bootstrap, but you won’t have much control over the details of the resources. Therefore, I recommend using a CloudFormation template that matches the template generated by SAM. This same template can be used over and over for multiple stacks.

CodeTender: A Brief History

I was lucky to be involved in building a micro-services platform with Vasil Kovatchev. Vasil is a great software architect and overall great guy. One concept he introduced was the “bartender” script. The idea was to build a working micro-service with a placeholder name that won’t conflict with any code. The first such template was called “Pangalactic Gargleblaster”. The bartender script replaces placeholders in the code (“Pangalactic” and “Gargleblaster”) and does some other stuff to make the service work with the platform. The bartender script is a bash script, and the “other stuff” is…well…more bash script. We were quickly serving up not just Pangalactic Gargleblasters, but also Flaming Volcanos and Tequila Sunrises and Whisky Sours. I fell in love with this concept. I don’t, however, love the bash script, since it is very tightly coupled to our ecosystem. Since Node.js is a de facto standard for dev utilities (no offense, Python), I set out to build a CLI with these basic requirements:

  • Clone a git repo or copy a folder to a specified folder
  • Display prompts to collect values to replace placeholders with
  • Run scripts before or after replacing placeholders

Since “bartender” is a bit too common of a name, I went with “CodeTender”. That was March of 2018 (according to my commit history). Fast forward a few years and version 0.0.4-alpha was still full of bugs and my somewhat ambitious issue list was collecting dust along with the code. A somewhat unrelated thread with Vasil and another colleague reminded me of CodeTender so I set out to get it working. A few minutes (hours maybe) later, and the basics were working. So of course, time to use it.

I have been playing with Home Assistant lately and wanted to start playing with custom dashboard cards. I found a boilerplate project and decided why just clone the repo when I could get CodeTender working even better and make a Home Assistant custom card template? So after about another week, I have burned down a lot of my issue list and added some cool features. It is still very alpha, but it’s close to ready for prime time.

I will formally launch CodeTender in a future post, but for anyone interested, you can check it out on my GitHub.

Home Assistant and Reverse Proxy on pi4 with AWS DDNS

The title of this post is a mouthful and that probably isn’t everything. This is going to be a long post about a pretty long technical journey. A few years ago I got a free Amazon Echo and very quickly was hooked. I added Echo Dots in multiple rooms, got some smart plugs to handle some annoying lamps, and eventually added some smart switches and a smart thermostat. A colleague introduced me to Home Assistant and after lurking around in r/smarthome and r/homeautomation I decided to get a pi4 to start playing with HA. After spending a few months playing with the basics (including getting access to integrations with my garage door, vacuum, and phones just to name a few), I decided to start working on adding my SmartThings hub and devices as well as Alexa. This means external access. Since I didn’t want to add another monthly fee to my already crowded bank statement and I have 20+ years of IT experience, I decided to use my brain and existing investments to build a solution. Since HA is all about local control, I figured this also gives me total control of how and what I expose to the internet. This was my plan:

  • Home Assistant running in Docker on the pi4
  • Reverse proxy running in Docker
  • Port forwarding from router
  • Dynamic DNS running in the cloud (most likely AWS)

Honestly, the local reverse proxy wasn’t always part of my plan. I was somewhat hoping to come up with a cloud-based proxy solution, but there are 2 obvious issues with this: security and cost. While it would be possible to route traffic through a TLS-encrypted endpoint sitting in the cloud, I would still need to secure the communication between the cloud and my local pi somehow, so it is best to terminate TLS on the pi. Not only is this a security issue, but it would also consume unnecessary cloud resources since all traffic would be routed through the cloud as opposed to just the DDNS lookups. So eventually I landed on the local reverse proxy.

Step 1: Home Assistant on Docker

Getting my pi4 set up was not without challenges. I do own a USB mouse, but my only working keyboard is Bluetooth-only, so while I didn’t do a completely headless install, it took some creative copying and pasting to get the pi up and running. Since I’m pretty experienced with Docker, the setup of HA was a breeze. My docker-compose.yml file for just HA is shown below.

version: '2'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mylocalpath/config:/config
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
    environment:
      - TZ=America/New_York
    restart: always
    privileged: true
    group_add:
      - dialout
    ports:
      - "8123:8123"

There are a few things I want to point out in this configuration. I wanted to be able to play with my AVR connected via RS-232 over a USB adapter. There are 3 items in this config required to get access to the USB port:

  • devices: map the physical device to the container
  • group_add: give the container access to the dialout group for port access
  • version: version 3 of docker-compose does not support group_add, so I had to revert to version 2

Otherwise, this is a vanilla docker-compose config straight from the HA documentation. If you aren’t familiar with Docker, the only things here you really need to understand are the “volumes” config that tells HA where to store your configuration files and the “environment” config that sets an environment variable for your time zone. There are many more things you can do with Docker and HA via this config, but the generic config provided by the documentation is enough to get you going, changing only the config path and time zone as needed.
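
With that file in place, starting (or later recreating) the Home Assistant container is the usual compose workflow:

$ docker-compose up -d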

Step 2: Dynamic DNS on AWS

I knew it would be possible to set up dynamic DNS using Route53 and Lambda, so a quick googling led to this blog post. Eureka! Better yet, that post led to this git repo and, better yet, this CloudFormation template. Running the template is pretty simple. Just copy/paste the content into the designer in CloudFormation or upload it to a bucket. Then provide the required parameters when you create the stack. The only parameter really required is route53ZoneName, which should be just your domain or subdomain. For example, if your HA URL will be ha.home.mydomain.com, then this value should just be home.mydomain.com. The rest of the parameters can be left as the default.

NOTE: If you already host your domain in Route53 and you want to add a subdomain for your DDNS, it is easiest to reuse your existing Hosted Zone. You can enter the zone ID in the route53ZoneID parameter to reuse the existing Hosted Zone.

After running the CloudFormation template, you will find a new DynamoDB table named mystack-config, where mystack is the name of your CloudFormation stack. You will need to create a new A record in this table to store data for your host. Duplicate the sample A record, change the shared_secret (preferably to a similarly long, random string), and update the hostname to your full host (ex: ha.home.mydomain.com.), making sure to include the trailing . at the end. Make note of the secret since you will need to pass it from your DDNS client.

Next all you need is the client. The good news here is that the git repo has a bash client. The configuration is a bit tricky, but the script to run the client looks like this:

#!/bin/bash

/path-to-ddns-client/route53-ddns-client.sh --url my.api.url.amazonaws.com/prod --api-key MySuperSecretApiKey --hostname my.hostname.at.my.domain.com. --secret WhatIsThisSecret

There are some important items in here that must be configured correctly:

  • The --url parameter value should match what is on the Outputs tab after running your CloudFormation script. Note that this does NOT include the protocol prefix (http://), so when you copy it, make sure you copy the text and not the URL, since your browser will show it as a link.
  • The --api-key parameter value should be populated with the generated API key.
  • Note the trailing . at the end of the --hostname parameter value. This is the FULL host name and must match the record in DynamoDB.
  • The --secret parameter value should match the value recorded in the DynamoDB record.

Finally, in order for your IP to be recorded with the DDNS service every time you boot, you will want to place the above script in /etc/init.d and make sure it is executable.
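
Assuming you saved the script as update-ddns.sh (the name is just a placeholder), that looks something like this:

$ sudo cp update-ddns.sh /etc/init.d/update-ddns.sh
$ sudo chmod +x /etc/init.d/update-ddns.sh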

Step 3: Port Forwarding

In order to route traffic to our personal public IP address, we have to have something that is able to listen within our network. For most of us, that means opening up the built-in firewall on your home network router. My own router sits behind the modem provided by my ISP (as opposed to the router itself being provided by my ISP), so I have complete control over it. Your setup may introduce different challenges, but the solution will be similar. First you will need to set a static IP for your server so that you can forward all traffic to that IP. Then you will need to configure port forwarding.

My router is a NETGEAR Nighthawk R6900v2, so the port forwarding setup can be found in the admin console by first selecting the “Advanced” tab, then expanding “Advanced Setup”, and then selecting “Port Forwarding / Port Triggering”. You will need to forward two ports: 80 and 443. The NETGEAR console requires you to select a service name to set up port forwarding. For port 80, you can select “HTTP”, set the port range to 80-80, and set the IP address to your static IP. For port 443 (TLS), you will need to use the “Add Custom Service” option; I set the service name to “HTTPS”, the port range to 443-443, and the same IP.

Step 4: Reverse Proxy in Docker on pi4

I’ve worked with the jwilder/nginx-proxy Docker image before, and not surprisingly it is still the go-to solution for a Docker reverse proxy. It’s very simple to use: you map the Docker socket so the proxy container can watch for new containers, and then on each container hosting something behind your proxy, you set the VIRTUAL_HOST and optionally the VIRTUAL_PORT environment variables. The resulting docker-compose.yml file looks like this:

version: '2'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mylocalpath/config:/config
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
    environment:
      - TZ=America/New_York
      # Add environment variables for proxy
      - VIRTUAL_HOST=ha.home.streetlight.tech
      - VIRTUAL_PORT=8123
    restart: always
    privileged: true
    group_add:
      - dialout
    ports:
      - "8123:8123"

  # Setup reverse proxy
  nginx-proxy:
    container_name: nginx-proxy
    image: jwilder/nginx-proxy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /mylocalpath/certs:/etc/nginx/certs

Normally, this is all you need to do. However, when I started my Docker stack, it didn’t work. Looking at the logs with docker logs nginx-proxy revealed the following:

standard_init_linux.go:211: exec user process caused "exec format error"
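This error generally means a binary was built for a different CPU architecture than the machine running it. You can confirm what the pi reports with:

# Typically prints armv7l (32-bit) or aarch64 (64-bit) on a pi 4, not x86_64
uname -m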

Apparently the proxy image is not compatible with the ARM architecture of the pi. The Dockerfile used to build the image pulls precompiled binaries built for AMD64. The commands below are the culprits of this failure.

# Install Forego
ADD https://github.com/jwilder/forego/releases/download/v0.16.1/forego /usr/local/bin/forego
RUN chmod u+x /usr/local/bin/forego

ENV DOCKER_GEN_VERSION 0.7.4

RUN wget https://github.com/jwilder/docker-gen/releases/download/$DOCKER_GEN_VERSION/docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
 && tar -C /usr/local/bin -xvzf docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
 && rm /docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz

The solution to this is relatively simple. First get a copy of the repository for the nginx-proxy image:

git clone https://github.com/nginx-proxy/nginx-proxy.git

Next, modify the Dockerfile to pull the ARM versions of forego and docker-gen by replacing the above code as shown below:

# Install Forego
RUN wget https://bin.equinox.io/c/ekMN3bCZFUn/forego-stable-linux-arm.tgz \
  && tar -C /usr/local/bin -xvzf forego-stable-linux-arm.tgz \
  && rm /forego-stable-linux-arm.tgz
RUN chmod u+x /usr/local/bin/forego

ENV DOCKER_GEN_VERSION 0.7.4

RUN wget https://github.com/jwilder/docker-gen/releases/download/$DOCKER_GEN_VERSION/docker-gen-linux-armel-$DOCKER_GEN_VERSION.tar.gz \
 && tar -C /usr/local/bin -xvzf docker-gen-linux-armel-$DOCKER_GEN_VERSION.tar.gz \
 && rm /docker-gen-linux-armel-$DOCKER_GEN_VERSION.tar.gz

In the first section, we have to replace the ADD with a RUN, then use wget to pull the archive and tar to extract it. In the second section, we just need to replace amd64 with armel. I have this change added to my fork of nginx-proxy in the Dockerfile.arm file.

Now you need to build a local image based off of this new Dockerfile:

docker build -t jwilder/nginx-proxy:local .

The -t flag names the image with a local tag so it won’t conflict with the official image. The . at the end tells Docker to use the current directory as the build context (and to look for the Dockerfile there), so this command must be run from the nginx-proxy folder created when you cloned the git repo.
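If you use the Dockerfile.arm from my fork instead of editing the Dockerfile in place, you can point the build at it with the -f flag:

docker build -f Dockerfile.arm -t jwilder/nginx-proxy:local .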

Finally, update your docker-compose.yml file to use the new image:

version: '2'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mylocalpath/config:/config
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
    environment:
      - TZ=America/New_York
      # Add environment variables for proxy
      - VIRTUAL_HOST=ha.home.streetlight.tech
      - VIRTUAL_PORT=8123
    restart: always
    privileged: true
    group_add:
      - dialout
    ports:
      - "8123:8123"

  # Setup reverse proxy
  nginx-proxy:
    container_name: nginx-proxy
    # UPDATE TO USE :local TAG:
    image: jwilder/nginx-proxy:local
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /mylocalpath/certs:/etc/nginx/certs

Note that I have also removed the network_mode: host setting from this configuration. This is because nginx-proxy only works over the bridge network.
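Once the stack is up, a couple of quick sanity checks can save some head scratching (the hostname below is an example, and -k skips certificate verification in case you haven’t set up certs yet):

# The proxy logs should show a vhost being generated for your VIRTUAL_HOST
docker logs nginx-proxy

# From outside your network, the proxy should answer on your DDNS hostname
curl -kI https://ha.home.mydomain.com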

Step 5: SmartThings

I have my z-wave devices connected to a SmartThings hub. Eventually I plan to replace that with a z-wave dongle on my HA pi, but for now I wanted to set up webhooks to let me control my SmartThings devices through HA. This was a big driver for all of this work setting up DDNS, TLS, and port forwarding. The generic instructions worked fine all the way up to the very last step, when I got an ugly JSON error. Thankfully, googling that error pointed me to this post describing the fix. Simply removing the “theme=…” parameter from the URL allowed the SmartThings integration to complete.

Addendum: Creating Certs with letsencrypt

While it is possible to use any valid cert/key pair for your TLS encryption, you can create the required certificate and key using letsencrypt. I did this using certbot. Installing certbot is simple:

sudo apt-get install certbot python-certbot-nginx

Then creating the cert was also simple:

sudo certbot certonly --nginx

Following the prompt with my full domain (ex: ha.home.mydomain.com) was pretty easy. Note that you must first have nginx running on your host so it can do the required validation, so you can either do this before disabling nginx on your pi (if it was enabled by default like mine) or after you set up your nginx-proxy. Just make sure you expose port 80 on your nginx-proxy container so the validation works.

Finally, just copy the certs for mapping to your nginx-proxy container:

sudo cp /etc/letsencrypt/live/ha.home.mydomain.com/fullchain.pem /mylocalpath/certs/ha.home.mydomain.com.crt
sudo cp /etc/letsencrypt/live/ha.home.mydomain.com/privkey.pem /mylocalpath/certs/ha.home.mydomain.com.key

Alternatively, you can symlink the keys rather than making a physical copy, but the file names must match your VIRTUAL_HOST setting.
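Also keep in mind that Let’s Encrypt certificates are only valid for 90 days, so you will need to renew (and re-copy or re-link the files) periodically. certbot can verify that renewal will work without actually issuing anything:

sudo certbot renew --dry-run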

Conclusion

Overall I’m very happy with how this all turned out. It was a great learning exercise. Almost every step of the way had at least a minor annoyance which led me to write this post in order to help others out. I would say getting nginx-proxy to work on the pi ARM architecture was the biggest challenge and even that wasn’t too difficult. In the end, I’m glad that I have control over my DDNS, integration with SmartThings and Alexa, and access to my HA server from outside my house.

Tinpothy: Spirit Guide

In the Catholic Church, the regular ceremony or “mass” is centered around a reenactment of “the Last Supper”, the last meal before Jesus was crucified to later rise from the dead. My favorite part of the mass happens immediately before that, and, regardless of religious or philosophical beliefs, most people would agree it is at least “nice” if not a fundamental part of their belief system. The priest says, “let us offer each other a sign of peace,” and everyone turns to the people around them and shakes hands, hugs, kisses, waves, or extends two fingers in the familiar “peace” sign. In fact, as children we are taught to give the peace sign rather than climb over each other to shake hands with friends sitting several rows away. As a child, I thought the point of this exercise was to put aside any quarrels we might have with our neighbors or siblings or parents (of which there were many) and extend a sign of “peace” – assuming the context of the opposite of “war” or “fight” or “quarrel”. I was wrong.

Photo credit Independent Florida Alligator

Practically anyone who spent any significant time in Gainesville, Florida as either a resident or a student at the University of Florida knows “the running man”. Having gone to school there in the 90s, when he sported dreadlocks, we called him “the Rasta Runner”. He was a mysterious man who could be seen running, seemingly constantly, hands extended in front of him, flashing the “peace sign” with both hands at as many passers-by as possible. We wondered if he was homeless, if he was mentally ill, but we didn’t ask. He was just a fixture of the town and was clearly interested in “offering a sign of peace” to as many people as possible, while running. My first (late) wife worked with him, so I learned his name was Tinpothy and he was (not surprisingly) a nice guy.

Fast forward to today and I was on my normal 5 mile run: about 1.25 miles to the lake, about 2.5 miles around, and 1.25 home. As I was about 2 miles in, I passed an older gentleman who was running in the opposite direction. He didn’t seem to notice me at all. I thought to myself, “Why run on a paved trail full of people if you are not going to be friendly?” Then I thought about Tinpothy. I’m not sure exactly what my full mental process was. The nice thing about running is your body starts consuming all of the oxygen your brain normally uses for less useful things and lets you focus on one or two things (or just lets your mind go completely blank). Somewhere in that process I decided to channel my inner Tinpothy. At first I just started saying “good morning” but then I started adding the “peace sign”. I found out that holding up my two fingers got people’s attention and then when I said “good morning” something interesting happened: most people smiled (and said good morning back).

I said I was wrong about what the sign of peace meant. As an adult, I now understand that when we say “peace be with you,” we mean peace as in “quiet”. The peace that is mentioned so many times in the Bible is the same peace of many philosophies and religions: “inner peace”. So as I was running and sharing a sign of inner peace with those I passed, I found myself running “harder” (my heart rate went up by about 10 BPM) even though the effort felt the same. I hope the smiles I got in return were a sign that those I connected with on my run recognized for a brief moment that some random person recognized their existence and wished them well and that in some way gave them a moment of peace.

As I was finishing my lap around the lake, I saw the older gentleman who seemed to have ignored me before. This time I held up my 2 fingers to get his attention and smiled and said good morning, and he smiled and said good morning back.

Don’t Dispose Your Own EF Connections

I’m working on upgrading a framework to dotnet core so I am moving from .Net 2.x conventions to netstandard 2.2. Our code was using DbContext.Database.Connection to get DB connections for custom SQL. I needed to switch to DbContext.Database.GetDbConnection(). I made the wrong assumption that GetDbConnection() was a factory method and returned a new connection every time. Therefore I made sure I was disposing of each connection. Tests immediately started failing with “System.InvalidOperationException: ‘The ConnectionString property has not been initialized.'” After investing way too much time due to the complexity of the framework and my own stubbornness, I narrowed the issue down to the following scenario:

    using (var conn = context.Database.GetDbConnection())
    {
      conn.Open();
      using (var cmd = conn.CreateCommand())
      {
        cmd.CommandText = "SELECT * FROM sys.databases";
        cmd.ExecuteNonQuery();
      }
    }
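    // Disposing this first connection is what breaks the second block below:
    // GetDbConnection() will hand back the same, now-disposed connection.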

    using (var conn = context.Database.GetDbConnection())
    {
      conn.Open();
      using (var cmd = conn.CreateCommand())
      {
        cmd.CommandText = "SELECT * FROM sys.databases";
        cmd.ExecuteNonQuery();
      }
    }

The real issue is the second call to GetDbConnection(). This does not, in fact, return a new instance; it returns the previous connection, whose ConnectionString property was set to an empty string when it was disposed, which causes the exception about the ConnectionString not being initialized. You can test this yourself with the following:

    var conn2 = context.Database.GetDbConnection();
    Console.WriteLine(conn2.ConnectionString);
    conn2.Dispose();
    Console.WriteLine(conn2.ConnectionString);
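    // The second WriteLine prints an empty string: Dispose() cleared the ConnectionString.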

The fix is simply to not dispose of these connections or commands yourself. As indicated in this issue comment, disposing of the context will dispose of any connections created using GetDbConnection(). Therefore the correct implementation of this use case is as follows:

  using (var context = new MyContext())
  {
    var conn = context.Database.GetDbConnection();
    conn.Open();
    var cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT * FROM sys.databases";
    cmd.ExecuteNonQuery();
    conn.Close();

    var conn2 = context.Database.GetDbConnection();
    conn2.Open();
    cmd = conn2.CreateCommand();
    cmd.CommandText = "SELECT * FROM sys.databases";
    cmd.ExecuteNonQuery();
  }

Welcome to the Club, Will Smith

For the first 5.41 (give or take) miles of my last 10-mile run, I was pissed at Will Smith. That morning I watched (parts of) a video of him attempting to run a half marathon after only 3 weeks of training. Somewhere in the middle of mile 10, after over an hour and fifty minutes of running, Will started to walk. He “failed” surprising…no one (at least no one who has ever attempted a half marathon). Anyone who has ever laced up their shoes in preparation to run (or walk) a half marathon knew this only had 2 possible results:

  1. Will Smith’s workout routine included enough aerobic exercise that he could run 8-10 miles at an easy pace before he started his “3-weeks of training”. Or…
  2. He would fail.

This is why I was pissed off. I just had a hard time getting past not only his ignorance, but also his arrogance. Even though he might not have known how much training was required, someone on his support team surely did. So, for those of us on the informed side of the distance-running spectrum, this was even more of a publicity stunt than it already was for everyone else. So when Will Smith stopped running, I was happy. My first thought was, “I’m going to kick Will Smith’s ass today.”

But I got over it. You see, that is what running does. It is Jedi mind training. To be a Jedi, you must let go of your anger, and if you run long enough, you will. By the end of my run, I was no longer pissed off. I did kick his ass. My 10 miles in just under 94 minutes was at least 16 minutes faster than his and I had plenty of gas left in the tank. I also had several months of off-and-on running, probably 50-60 pounds less to carry, temperatures 10-15 degrees cooler, and most likely a much flatter course. So by the end of the run, my planned title of this post changed from “I Kicked Will Smith’s Ass” to “Welcome to the Club, Will Smith”.

Today I finally went back and watched the whole video. Full disclosure – I had previously skipped ahead to the part where he was starting the race. What I missed somewhat validated what I already knew – he hadn’t been training since his “Will Smith’s Bucket List” show had him traveling the world, eating a lot, and even drinking (something he mentioned he never did when he was building his acting career). He also mentioned he had never run 13.1 miles before and his goal pace was 2 hours and 10 minutes (just under 10 minutes a mile).

Unfortunately the rest of his training was a bit of a spectacle. He did a stress test with his cardiologist (not a bad idea before trying to run a half marathon), but then things got weird. He did some underwater training and heat and cold tolerance stuff with Laird Hamilton to apparently train his mind for the demands of the half marathon. Then he ran on some dunes. I hope that somewhere during those three weeks he also followed the type of training that running science tends to find effective.

He also had some cringe-worthy moments. There were a few times that he referred to his race as a “marathon” – something I clearly documented my distaste for already. Then when he got to the race, he jumped the barrier to start near the front of the race. In his defense, it isn’t unusual for celebrities to get a preferential starting position. However, this was still bad form. In big races, runners (often by the thousands) are arranged at the start based on their expected finishing time. In most cases, “elite” runners start in their own group before even the fastest of the rest of us riffraff. The sorting of the rest of the runners is done mostly as a courtesy, to keep slower traffic out of faster runners’ way, but it is also for safety. You don’t want someone walking and taking selfies at the start to get trampled by hundreds of runners all trying to PR.

Ultimately, it seemed to be a humbling experience for Mr. Smith. And to be fair, he didn’t “fail”. He completed the race without being picked up by the “sad wagon”. As anyone who has ever entered a race will tell you, the only measurable failure is a DNF (“did not finish”). Even this is not always a failure, because a DNF almost always means you did everything your body and mind could handle on that particular day, and THAT is an ultimate form of success. So Will’s race was a resounding success. He learned exactly what his mind and body were capable of on that day, for that race, in Havana. More importantly (in my opinion) he learned that his mind and body are capable of more. Maybe, given more training and better weather (i.e. better race selection), he finishes without breaking stride and within his pacing goal. Maybe with even more training he breaks 2 hours. Maybe he completes a full. Maybe he is competitive with other famous runners like Kevin Hart, Eddie Izzard, and Flea (to name just a few). Maybe he even qualifies for Boston.

Towards the end of the video, Will said, “Ten years ago I would have been embarrassed. I would have been pissed…I’m really in a different place in my life…I don’t feel the pressure of living up to the billboard image of myself.” I’m glad that he was able to take this experience in stride. After a little research, it seems that Will has done some running in the past – just not this kind of distance. My hope is that, like so many of us, Will Smith has caught the bug. He now knows how therapeutic and downright enjoyable running longer distances can be. Perhaps more importantly, I hope Will learned how great it is to be part of the running community. As the saying goes, if you can’t say anything nice, don’t say anything at all. As a general rule, runners fall into one of those 2 modes – nice or nothing. The ones who talk are nice, friendly, positive people and the ones who don’t probably are too. This is clearly a group that Will Smith deserves to be part of so we should welcome him with open arms – even if those arms are attached to a faster-running body. 😉

Welcome to the club, Will. We are glad to have you.