The title of this post is a mouthful, and it still probably doesn't cover everything. This is going to be a long post about a pretty long technical journey.

A few years ago I got a free Amazon Echo and very quickly was hooked. I added Echo Dots in multiple rooms, got some smart plugs to handle some annoying lamps, and eventually added some smart switches and a smart thermostat. A colleague introduced me to Home Assistant (HA), and after lurking around in r/smarthome and r/homeautomation I decided to get a pi4 to start playing with HA. After spending a few months playing with the basics (including getting access to integrations with my garage door, vacuum, and phones, just to name a few), I decided to start working on adding my SmartThings hub and devices as well as Alexa. That means external access. Since I didn't want to add another monthly fee to my already crowded bank statement, and I have 20+ years of IT experience, I decided to use my brain and existing investments to build a solution. Since HA is all about local control, this approach also gives me total control over how and what I expose to the internet. This was my plan:
- Home Assistant running in Docker on the pi4
- Reverse proxy running in Docker
- Port forwarding from router
- Dynamic DNS running in the cloud (most likely AWS)
Honestly, the local reverse proxy wasn't always part of my plan. I was somewhat hoping to come up with a cloud-based proxy solution, but there are two obvious issues with that: security and cost. While it would be possible to route traffic through a TLS-encrypted endpoint sitting in the cloud, I would still need to secure the communication between the cloud and my local pi somehow, so it's best to terminate TLS on the pi. Beyond the security issue, it would also consume unnecessary cloud resources, since all traffic would be routed through the cloud as opposed to just the DDNS lookups. So eventually I landed on the local reverse proxy.
Step 1: Home Assistant on Docker
Getting my pi4 set up was not without challenges. I do own a USB mouse, but my only working keyboard is Bluetooth only, so while I didn't do a completely headless install, it took some creative copying and pasting to get the pi up and running. Since I'm pretty experienced with Docker, the setup of HA was a breeze. My docker-compose.yml file for just HA is shown below.
version: '2'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mylocalpath/config:/config
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
    environment:
      - TZ=America/New_York
    restart: always
    privileged: true
    group_add:
      - dialout
    ports:
      - "8123:8123"
There are a few things I want to point out in this configuration. I wanted to be able to play with my AVR connected via RS-232 over a USB adapter. There are 3 items in this config required to get access to the USB port:
- devices: map the physical device to the container
- group_add: give the container access to the dialout group for port access
- version: version 3 of the compose file format does not support group_add, so I had to revert to version 2
Otherwise, this is a vanilla docker-compose config straight from the HA documentation. If you aren't familiar with Docker, the only things here you really need to understand are the "volumes" config that tells HA where to store your configuration files and the "environment" config that sets an environment variable for your time zone. There are many more things you can do with Docker and HA via this config, but the generic config provided by the documentation is enough to get you going, changing only the config path and time zone as needed.
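With the file in place, bringing HA up is just a couple of commands (a minimal sketch, assuming docker and docker-compose are installed and you run this from the directory containing the file):

# Start the stack in the background
docker-compose up -d
# Tail the logs to confirm HA came up cleanly
docker logs -f home-assistant

Once the container is running, the HA UI should be reachable at http://<your-pi-ip>:8123.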
Step 2: Dynamic DNS on AWS
I knew it would be possible to set up dynamic DNS using Route53 and Lambda, so a quick googling led to this blog post. Eureka! Better yet, that post led to this git repo and, better yet, this CloudFormation template. Running the template is pretty simple. Just copy/paste the content into the designer in CloudFormation or upload it to a bucket, then provide the required parameters when you create the stack. The only parameter really required is route53ZoneName, which should be just your domain or subdomain. For example, if your HA URL will be ha.home.mydomain.com, then this value should just be home.mydomain.com. The rest of the parameters can be left at their defaults.
NOTE: If you already host your domain in Route53 and you want to add a subdomain for your DDNS, it is easiest to reuse your existing Hosted Zone. You can enter the zone ID in the route53ZoneID parameter to reuse the existing Hosted Zone.
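If you prefer the CLI to the console, creating the stack looks roughly like this (a sketch; the stack name and template file name are placeholders, and the --capabilities flag is only required if the template creates IAM resources, which this one does via its Lambda role):

aws cloudformation create-stack \
  --stack-name mystack \
  --template-body file://dynamic-dns-template.yaml \
  --parameters ParameterKey=route53ZoneName,ParameterValue=home.mydomain.com \
  --capabilities CAPABILITY_IAM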
After running the CloudFormation template, you will find a new DynamoDB table named mystack-config, where mystack is the name of your CloudFormation stack. You will need to create a new A record in this table to store data for your host. Duplicate the sample A record, change the shared_secret (preferably to a similarly long, random string), and update the hostname to your full host (ex: ha.home.mydomain.com.), making sure to include the trailing . at the end. Make note of the secret since you will need to pass it from your DDNS client.
Next all you need is the client. The good news here is that the git repo has a bash client. The configuration is a bit tricky, but the script to run the client looks like this:
#!/bin/bash
/path-to-ddns-client/route53-ddns-client.sh --url my.api.url.amazonaws.com/prod --api-key MySuperSecretApiKey --hostname my.hostname.at.my.domain.com. --secret WhatIsThisSecret
There are some important items in here that must be configured correctly:
- The --url parameter value should match what is on the Outputs tab after running your CloudFormation script. Note that this does NOT include the protocol prefix (http://), so make sure when you copy this you copy the text and not the URL, since your browser will show it as a link.
- The --api-key parameter value should be populated with the generated value.
- Note the trailing . at the end of the --hostname parameter value. This is the FULL host name and must match the record in DynamoDB.
- The --secret parameter value should match the value recorded in the DynamoDB record.
Finally, in order for your IP to be recorded with the DDNS every time you boot, you will want to place the above script in /etc/init.d and make sure it is executable.
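For example (a minimal sketch; update-ddns.sh is a placeholder for whatever you named the script above):

# Copy the update script into /etc/init.d and make it executable
sudo cp update-ddns.sh /etc/init.d/update-ddns.sh
sudo chmod +x /etc/init.d/update-ddns.sh

If you'd rather not touch /etc/init.d, a cron @reboot entry (via crontab -e) pointing at the script accomplishes the same thing.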
Step 3: Port Forwarding
In order for traffic sent to our public IP address to reach HA, something inside the network has to be listening for it. For most of us, that means opening up the built-in firewall on your home router. My router sits behind the modem provided by my ISP (rather than the router itself being ISP-provided), so I have complete control over it. Your setup may introduce different challenges, but the solution will be similar. First you will need to set a static IP for your server so that you can forward all traffic to that IP. Then you will need to configure port forwarding.
My router is a NETGEAR Nighthawk R6900v2, so the port forwarding setup can be found in the admin console by first selecting the "Advanced" tab, then expanding "Advanced Setup", and then selecting "Port Forwarding / Port Triggering". You will need to forward two ports: 80 and 443. The NETGEAR console requires you to select a service name to set up port forwarding. For port 80, you can select "HTTP", set the port range to 80-80, and set the IP address to your static IP. For port 443 (TLS), you will need to use the "Add Custom Service" option. I set the service name to "HTTPS", the port range to 443-443, and set the IP.
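Once forwarding is in place (and the reverse proxy from the next step is listening), a quick sanity check is to hit your DDNS hostname from outside your network, e.g. from a phone on cellular data. The hostname below is the example from earlier:

# Should return an HTTP response (even an error page) rather than timing out
curl -I http://ha.home.mydomain.com
# -k skips certificate validation, useful before your TLS cert is in place
curl -kI https://ha.home.mydomain.com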
Step 4: Reverse Proxy in Docker on pi4
I've worked with the jwilder/nginx-proxy Docker image before, and not surprisingly it is still the go-to solution for a Docker reverse proxy. It's very simple to use. You just map a socket so the container can listen for new containers, and then on each container hosting something behind your proxy, you set the VIRTUAL_HOST and optionally the VIRTUAL_PORT environment variables. The resulting docker-compose.yml file looks like this:
version: '2'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mylocalpath/config:/config
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
    environment:
      - TZ=America/New_York
      # Environment variables for the proxy
      - VIRTUAL_HOST=ha.home.streetlight.tech
      - VIRTUAL_PORT=8123
    restart: always
    privileged: true
    group_add:
      - dialout
    ports:
      - "8123:8123"
  # Setup reverse proxy
  nginx-proxy:
    container_name: nginx-proxy
    image: jwilder/nginx-proxy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /mylocalpath/certs:/etc/nginx/certs
Normally, this is all you need to do. However, when I started my Docker stack, it didn't work. Looking at the logs with docker logs nginx-proxy revealed the following:
standard_init_linux.go:211: exec user process caused "exec format error"
Apparently the proxy image is not compatible with the ARM architecture of the pi. The Dockerfile used to build the image uses precompiled binaries built for AMD64. The commands below are the culprits of this failure.
# Install Forego
ADD https://github.com/jwilder/forego/releases/download/v0.16.1/forego /usr/local/bin/forego
RUN chmod u+x /usr/local/bin/forego
ENV DOCKER_GEN_VERSION 0.7.4
RUN wget https://github.com/jwilder/docker-gen/releases/download/$DOCKER_GEN_VERSION/docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
&& tar -C /usr/local/bin -xvzf docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
&& rm /docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz
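You can confirm the mismatch on the pi itself, since uname reports the machine architecture:

# A pi4 reports an ARM architecture (e.g. armv7l or aarch64), not x86_64
uname -m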
The solution to this is relatively simple. First get a copy of the repository for the nginx-proxy image:
git clone https://github.com/nginx-proxy/nginx-proxy.git
Next, modify the Dockerfile to pull the ARM versions of forego and docker-gen by replacing the code above as shown below:
# Install Forego
RUN wget https://bin.equinox.io/c/ekMN3bCZFUn/forego-stable-linux-arm.tgz \
&& tar -C /usr/local/bin -xvzf forego-stable-linux-arm.tgz \
&& rm /forego-stable-linux-arm.tgz
RUN chmod u+x /usr/local/bin/forego
ENV DOCKER_GEN_VERSION 0.7.4
RUN wget https://github.com/jwilder/docker-gen/releases/download/$DOCKER_GEN_VERSION/docker-gen-linux-armel-$DOCKER_GEN_VERSION.tar.gz \
&& tar -C /usr/local/bin -xvzf docker-gen-linux-armel-$DOCKER_GEN_VERSION.tar.gz \
&& rm /docker-gen-linux-armel-$DOCKER_GEN_VERSION.tar.gz
In the first section, we have to replace the ADD with a RUN and then use wget to pull the archive and tar to extract it. In the second section, we just need to replace amd64 with armel. I have this change added to my fork of nginx-proxy in the Dockerfile.arm file.
Now you need to build a local image based on this new Dockerfile:
docker build -t jwilder/nginx-proxy:local .
The -t flag names the image, and the local tag keeps it from conflicting with the official image. The . at the end tells Docker to look for the Dockerfile in the current directory, so this command must be run from the nginx-proxy folder created when you cloned the git repo.
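You can verify the build landed before wiring it into compose:

# Lists the freshly built image with its :local tag
docker images jwilder/nginx-proxy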
Finally, update your docker-compose.yml file to use the new image:
version: '2'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mylocalpath/config:/config
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0"
    environment:
      - TZ=America/New_York
      # Environment variables for the proxy
      - VIRTUAL_HOST=ha.home.streetlight.tech
      - VIRTUAL_PORT=8123
    restart: always
    privileged: true
    group_add:
      - dialout
    ports:
      - "8123:8123"
  # Setup reverse proxy
  nginx-proxy:
    container_name: nginx-proxy
    # UPDATE TO USE :local TAG:
    image: jwilder/nginx-proxy:local
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /mylocalpath/certs:/etc/nginx/certs
Note that I have also removed the network_mode: host setting from this configuration. This is because nginx-proxy only works over the bridge network.
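If you're curious what the proxy actually generated from those VIRTUAL_HOST variables, you can dump the nginx config that docker-gen writes inside the container:

# Show the generated nginx config for all proxied containers
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf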
Step 5: SmartThings
I have my z-wave devices connected to a SmartThings hub. Eventually I plan to replace that with a z-wave dongle on my HA pi, but for now I wanted to setup webhooks to let me control my SmartThings devices through HA. This was a big driver for all of this work setting up DDNS, TLS, and port forwarding. The generic instructions worked fine all the way up to the very last step when I got an ugly JSON error. Thankfully, googling that error pointed me to this post describing the fix. Simply removing the “theme=…” parameter from the URL allowed the SmartThings integration to complete.
Addendum: Creating Certs with letsencrypt
While it is possible to use any valid cert/key pair for your TLS encryption, you can create the required certificate and key using letsencrypt. I did this using certbot. Installing certbot is simple:
sudo apt-get install certbot python-certbot-nginx
Then creating the cert was also simple:
sudo certbot certonly --nginx
Following the prompts for my full domain (ex: ha.home.mydomain.com) was pretty easy. Note that you must have nginx running on your host first so it can do the required validation. So you can either do this before disabling nginx on your pi (if it was enabled by default like mine) or after you set up your nginx-proxy. Just make sure you expose port 80 in your nginx-proxy container so the validation works.
Finally, just copy the certs for mapping to your nginx-proxy container:
sudo cp /etc/letsencrypt/live/ha.home.mydomain.com/fullchain.pem /mylocalpath/certs/ha.home.mydomain.com.crt
sudo cp /etc/letsencrypt/live/ha.home.mydomain.com/privkey.pem /mylocalpath/certs/ha.home.mydomain.com.key
Alternatively, you can symlink the keys rather than making a physical copy, but the names must match your VIRTUAL_HOST setting.
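The symlink version would look something like this (a sketch; note that the letsencrypt paths must also be visible to the container, or the links will dangle inside it):

# Symlinks named to match the VIRTUAL_HOST setting
sudo ln -s /etc/letsencrypt/live/ha.home.mydomain.com/fullchain.pem /mylocalpath/certs/ha.home.mydomain.com.crt
sudo ln -s /etc/letsencrypt/live/ha.home.mydomain.com/privkey.pem /mylocalpath/certs/ha.home.mydomain.com.key

Either way, keep in mind that letsencrypt certs expire after 90 days, so you'll want to repeat the copy (or rely on the symlinks) after each sudo certbot renew.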
Conclusion
Overall I’m very happy with how this all turned out. It was a great learning exercise. Almost every step of the way had at least a minor annoyance which led me to write this post in order to help others out. I would say getting nginx-proxy to work on the pi ARM architecture was the biggest challenge and even that wasn’t too difficult. In the end, I’m glad that I have control over my DDNS, integration with SmartThings and Alexa, and access to my HA server from outside my house.