Last month, I switched my static HTML blog from a $5/month DigitalOcean instance to Google Cloud Storage (GCS), which costs me only a few cents every month. This blog post is about how I automatically publish articles from my GitHub repository to the storage bucket using the newly released Google Cloud Container Builder.

Disclaimer: I work at Google; however, this is my personal experience with the Google Cloud products described below, and these are my personal opinions.

Container Builder + Dockerfile + GitHub = Magic

My blog uses Pelican, a Python-based static blog generator. It turns articles written in Markdown into a directory of static HTML files and images, which makes up this blog. This means I don’t have a database storing my articles, and I don’t have any code running to render this blog.

Basically, I created a Docker image for Pelican so that I can build my blog on Mac, Linux, or Windows in exactly the same way, using a single docker run command.
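Roughly, that command looks like this (a sketch: the image name and the mount path are my assumptions, chosen to match the build config shown later):

# build the blog the same way on any OS; the repo checkout is
# mounted at /workspace inside the container (paths are assumptions)
docker run --rm -v "$PWD:/workspace" \
  gcr.io/my-project/blog-builder \
  pelican /workspace/content -o /workspace/output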

To set up continuous builds for this image, I used Google Container Builder. It was the easiest continuous integration (CI) experience I’ve ever had, for good reasons:

  • you can connect repos from your GitHub account 👏👏
  • a Dockerfile is enough, no extra config needed 🙌
  • no need to configure credentials for Google Container Registry (GCR)
  • doing all this takes way more time/attempts on other CI services

It took me less than 60 seconds to set up a continuous build for this image using Build Triggers in the Google Cloud Console. I am hoping to forget about it for a few years, which is what makes it great!
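For comparison, you can also kick off a one-off build of the same image from a local checkout with the gcloud CLI (the project ID below is a placeholder):

# build the image on Container Builder without a trigger
gcloud container builds submit --tag gcr.io/my-project/blog-builder .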

Continuous Integration with Cloud Container Builder

Note: If you’re interested in publishing static websites to Google Cloud Storage with Container Builder yourself, check out my tutorial here.

After realizing that Google Cloud Container Builder can execute arbitrary build steps, I figured I could use it to compile and publish my blog as well.

So, I created a Build Trigger for the GitHub repository that contains my blog articles. This took an extra few minutes, because this time my build is not as simple as a single Dockerfile.

This time I have custom instructions to compile and publish the website. Meet the cloudbuild.yaml:

steps:
  - name: gcr.io/${PROJECT_ID}/blog-builder:latest
    entrypoint: pelican
    args: ["/workspace/content", "-o", "/workspace/output"]

  - name: gcr.io/cloud-builders/gcloud
    entrypoint: gsutil
    args: ["-m", "rsync", "-r", "-c", "-d", "./output", "gs://ahmetalpbalkan.com"]

The first step pulls the Pelican Docker image I built in the previous section. Cloud Container Builder automatically clones the repository to /workspace, and I point pelican at the content/ directory to compile the result into the output/ directory.

What comes next is one of the coolest features of Cloud Container Builder: for each build step, you can use a different Docker image to execute the step and preserve the workspace for the next step. You can also parallelize the build steps to make it faster.
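For example, here is a sketch of two independent steps running in parallel (the images and paths are made up; id and waitFor are the fields that control step ordering, and waitFor: ["-"] means “wait for nothing, start immediately”):

steps:
  - name: gcr.io/cloud-builders/docker
    id: build-a
    waitFor: ["-"]  # starts immediately
    args: ["build", "-t", "gcr.io/$PROJECT_ID/a", "dir-a"]

  - name: gcr.io/cloud-builders/docker
    id: build-b
    waitFor: ["-"]  # runs in parallel with build-a
    args: ["build", "-t", "gcr.io/$PROJECT_ID/b", "dir-b"]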

The second step in my cloudbuild.yaml above pulls another Docker image that contains the gcloud CLI tools. It takes the output/ directory (preserved from the previous step) and uploads my blog to the Google Cloud Storage bucket using gsutil rsync.

gsutil -m rsync -c is really fast because it compares file checksums between the local files and the cloud bucket in parallel and uploads only the changed/new files. For my ~200 MB blog directory with thousands of files, it takes about 5 seconds to synchronize the entire local directory with the remote storage bucket.
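For reference, here is the same command the second build step runs, flag by flag:

# -m  parallelize operations
# -r  recurse into directories
# -c  compare checksums instead of modification times
# -d  delete remote files that no longer exist locally
gsutil -m rsync -r -c -d ./output gs://ahmetalpbalkan.com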

Note that I did not have to create any credentials or set permissions anywhere to pull/push images or to upload my blog to GCS. Container Builder’s service account already has permission to use Google Cloud Storage by default.

tl;dr Google Container Builder is cool. Check out my other article if you are interested in learning more.

Hosting Static Websites on GCS

As long as you own a domain name, you can create a Google Cloud Storage bucket named after that domain and point a CNAME record at GCS to host a static website, as explained here.
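On the DNS side, this boils down to a single record pointing the hostname at GCS (a sketch; standard DNS does not allow a CNAME on the apex domain, which is why I let CloudFlare flatten it for me):

; zone-file sketch: serve the bucket’s contents at this hostname
ahmetalpbalkan.com.  CNAME  c.storage.googleapis.com.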

Setting up a storage bucket to be a website is very easy:

gsutil mb -c regional -l US-CENTRAL1 gs://ahmetalpbalkan.com
gsutil defacl ch -u AllUsers:R gs://ahmetalpbalkan.com
gsutil web set -m index.html gs://ahmetalpbalkan.com

This creates a regional storage bucket, makes objects uploaded to it publicly readable by default, and sets index.html as the default index file for all directories.
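If you want to double-check the result, gsutil can read the same settings back:

# read the settings back to verify them
gsutil web get gs://ahmetalpbalkan.com     # website configuration
gsutil defacl get gs://ahmetalpbalkan.com  # default object ACL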

That said, I am still using CloudFlare to serve my website with TLS and have it optimized/cached globally by CloudFlare’s edge CDN nodes for free.

Static Hosting Alternatives

  • Stay on DigitalOcean: Although it is only $5/month to run an instance on DigitalOcean, I am not using any real compute to host a static website, and I most certainly don’t want to manage a Linux machine (keep it secure, apply updates, etc.) just to host a static website. Lately I’ve been using CoreOS, so that has not been a problem, but still.

  • GitHub Pages: I don’t use Jekyll, so this would mean pushing a lot of compiled HTML to GitHub repositories, which did not seem appropriate given that git is not designed to store compiled files or binary formats (images). Not to mention that after a while my git repo would get slow as hell. Since I did not need the commit history for my compiled files, it wasn’t a good fit.

  • Firebase Hosting: Firebase offers static website hosting with single-command deployment and free TLS termination. However, the firebase deploy command uploads the entire website every time (by design), which takes about 3 minutes for my blog, so I decided not to use Firebase.

Pricing

I used to pay $5/month for a DigitalOcean instance; now I am paying only about 5–10 cents per month. The only cost is network egress. My blog serves about 5 GB of traffic per month.

Here is the downside: if I were to serve 1 TB of monthly traffic from Cloud Storage, my bill would be around $120/month (GCS network egress is roughly $0.12/GB, so 1,000 GB comes to about $120), which is not ideal. DigitalOcean’s $5 instance comes with 1 TB of free transfer bandwidth. Thankfully, my blog is not popular enough to worry about that.

