
A New Era

After many years of internet content management dysfunction, I have finally begun to consolidate everything and solidify my approach to writing articles and publishing content. This site uses Jekyll as a content management system for hosting a static site, a modified version of Lanyon as a theme, and a fleet of other technologies to create a pretty comprehensive system. Since it’s all the rage to talk about how we each choose to do things, I’ll spend a moment describing how all of this is set up.

Development and Writing

All content for the site is kept under version control with Git and hosted privately on GitHub. I primarily use Atom as my editor, editing Markdown files by hand and previewing them locally.

While I greatly prefer Less for stylesheet management, Lanyon uses Sass, so I just try to pretend I’m writing Less when working in style-land.

My typical local development workflow uses Vagrant to create a VM for each software project I work on or maintain, so as to have a reproducible environment in which work happens, and this project is no exception. Ansible provisions the CentOS 7.2 VM to install Ruby and do other needful things. While I personally use elementary OS, which is based on Ubuntu, I would never run Ubuntu in production: it’s setenforce 1 or GTFO, and AppArmor is a terribly ineffective mandatory access control system… oh right, we were talking about my blog :blush:
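
Day to day, that just means the usual Vagrant commands; here’s a rough sketch, with the Vagrantfile itself taking care of sharing the project into the VM and forwarding the preview port:

vagrant up          # boot the CentOS 7.2 box and run the Ansible playbook against it
vagrant provision   # re-run the playbook after changing it
vagrant ssh         # get a shell inside the VM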

I write my posts and do my theming on my local machine; the files are shared between my host and the Vagrant VM, and I use this little systemd unit to automatically regenerate my site as I change files:

[Unit]
Description=Jekyll Static Site Serving

[Service]
Type=simple
ExecStartPre=/home/vagrant/bin/jekyll clean
ExecStart=/home/vagrant/bin/jekyll serve --force_polling --host 0.0.0.0 --config _config.yml,_config_dev.yml
User=vagrant
Group=vagrant
WorkingDirectory=/vagrant

[Install]
WantedBy=multi-user.target
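
Getting that running inside the VM is the usual systemd dance; assuming the unit above is saved as jekyll.service (a name of my choosing), it looks something like:

sudo cp jekyll.service /etc/systemd/system/jekyll.service
sudo systemctl daemon-reload    # pick up the new unit file
sudo systemctl enable jekyll    # start it at boot
sudo systemctl start jekyll     # start it right now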

Using a port-forward to my local machine, I can browse my site as I work on it at localhost:8080, making things absolutely fabulous :ok_hand:

Typically, posts are written in a feature branch and submitted as a pull request; some baseline minimal tests run and the output can be previewed. When things are ready, I git merge --no-ff -S by hand and push to master. When this happens, deployment starts.
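
Concretely, the by-hand part is roughly the following, with an example branch name:

git checkout master
git merge --no-ff -S my-feature-branch   # signed, non-fast-forward merge
git push origin master                   # pushing master is what kicks off deployment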

Hosting

Content for my site is stored in Amazon S3, cached and fronted by Amazon CloudFront as a CDN. Additionally, in order to serve content at the apex of the domain name(s), I use Amazon’s Route 53 for DNS: its alias records can point a bare apex at a CloudFront distribution, which an ordinary CNAME cannot do. I’m not doing anything super fancy for geolocated superfast DNS, as I’m preferring reduced cost over ultimate performance victory™, at least for now :wink:
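
For the curious, such an apex alias record looks roughly like this through the AWS CLI; the hosted zone ID and CloudFront domain below are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID Amazon uses for all CloudFront alias targets:

aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE123456 --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "naftuli.wtf.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234example.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'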

I’m also not making a big deal about getting DNSSEC set up for my domains, though I may in the future. I’m not convinced that it solves the problem it aims to solve, and I’m not even entirely clear on which problems it does solve well, or at all. If you are so enlightened, please drop me a line.

TLS certificates are provided by Amazon’s Certificate Manager and are cheap as free™ for CloudFront and for a few other Amazon resources. I get an A on Qualys’ SSL Labs and I have no management/maintenance overhead.
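
Requesting one is a single CLI call; note that certificates used with CloudFront must be requested in us-east-1, and the www subject alternative name here is just my illustration:

aws acm request-certificate \
    --domain-name naftuli.wtf \
    --subject-alternative-names www.naftuli.wtf \
    --region us-east-1    # CloudFront only uses ACM certificates from us-east-1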

The minimum hosting cost of the aforementioned setup is $1 USD per month per hosted zone in Route 53. Yes, you heard that right: one US dollar per month. I am hosting two domains (naftuli.com and naftuli.wtf), so my minimum is $2. Everything else is variable but very cheap: S3 storage is pennies, CloudFront charges by transfer, and Route 53 charges in bulk counts of queries, and I don’t estimate getting anywhere near the point where these costs become significant, so :muscle:

If you want a private GitHub repository, that’s another $7 USD per month, for a total of $8 USD per month for unlimited private GitHub repositories and one hosted zone serving content out in the described fashion.

Presently, my infrastructure is all automated in Amazon’s CloudFormation, to which I have lost a lot of blood, sweat, and tears over the years. No less evil is Terraform, which is probably what I’ll migrate my resources to in time.

Deployment

Part of what was alluded to previously is continuous integration and continuous delivery/deployment. While most organizations I’ve worked for use Travis CI, I find it prohibitively expensive for individuals: at the time of writing, the personal plan for private repositories is $70 USD per month. Since my actual usage consists of less than 30 minutes of build/deployment time per month, I found this kind of unacceptable.

I shopped around and found CircleCI, which actually is free for my purposes: one concurrent build across all repositories and 1,500 build minutes per month. For me this was perfect, as my private repositories are few and far between, and I can use Travis for any public repositories.

EDIT: Whereas before I had some Bash monstrosity, I have migrated to something a little bit better.

After trying to work around an unpredictable Bash deployment script that worked… uh, sometimes ¯\_(ツ)_/¯, I wrote a Python script which does the same thing and is far more reusable: s3cf-deploy. It essentially uses the AWS CLI to sync assets to S3, then interprets the output in order to generate a CloudFront invalidation for only those assets which have changed, which is pretty cool.
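
Under the hood, that amounts to roughly these two AWS CLI calls; the bucket name, distribution ID, and paths below are placeholders, and _site/ is Jekyll’s default output directory:

aws s3 sync _site/ s3://naftuli.wtf --delete   # upload changed assets, prune deleted ones
aws cloudfront create-invalidation \
    --distribution-id E123EXAMPLE456 \
    --paths /index.html /atom.xml              # invalidate only the paths that changed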

I can now write and deploy things without thinking too hard about it.

Conclusion

In any case, it’s nice to finally have a consolidated place on the internet to host and write things, and I anticipate that I’ll be migrating many of my old posts from previous blogs. It’s also nice that this entire setup costs a fraction of what I’ve been paying for years to $TRADITIONAL_HOSTING_PROVIDER for something very similar with many more limitations.

I don’t work for Amazon, so there’s no specific reason I have chosen them other than the selling points: it’s cheap, it works, it’s relatively fast, and it doesn’t require much maintenance at all. At the last three places I have worked, I have been involved in infrastructure automation on Amazon Web Services, so needless to say I have a bit more experience with it than with other services. If someone finds a cheaper way to do this on Azure or GCE, :clap: that’s awesome and I’d love to hear about it.

For now, the only limitation is that everything here is by definition static: there is no server executing code to render content or pages. I do have a plan to experiment with Amazon API Gateway and AWS Lambda, secured similarly with certificates from Amazon’s Certificate Manager, to have on-demand compute resources for arbitrary things I’d like to trigger, but that remains for another post :raised_hands: :sun_with_face: