
Integrating systemctl --user and your Graphical Session

For those of us nerds who have been running Linux on the desktop for years now, the addition of systemd in recent releases has opened up some interesting user service management solutions. While not often discussed, systemd runs a per-user service manager, controlled via systemctl --user, which sports most if not all of the features of the parent PID 1 instance.

One issue I ran into on elementary 0.4 Loki (Ubuntu 16.04) was that the environment of systemctl --user was pretty barren:

naftuli@laststand:~$ systemctl --user show-environment

This constitutes the bare minimum of what a user service might need. However, for starting user services on the desktop, this environment doesn’t have a reference back to the DISPLAY or anything else about the desktop session.

I found a workaround for this limitation by defining a script called by the desktop session on login, and a user target which other services can depend on.

My script simply imports the current environment into the user manager and starts the target, allowing other units to start:

#!/usr/bin/env bash

systemctl --user import-environment
systemctl --user start

Drop this wherever you’d like, and then add it as a startup script in your distribution’s startup applications:


Next, let’s define our target at ~/.config/systemd/user/

[Unit]
Description=User Graphical Login

Log out, log back in, and then check that the target has been started:

naftuli@laststand:~$ systemctl --user status
● - User Graphical Login
   Loaded: loaded (/home/naftuli/.config/systemd/user/; static; vendor preset: enabled)
   Active: active since Wed 2017-12-27 11:54:16 PST; 1 day 3h ago

Dec 27 11:54:16 laststand systemd[1938]: Reached target User Graphical Login.
Dec 27 11:54:27 laststand systemd[1938]: Reached target User Graphical Login.

If we now dump the environment from systemd --user, we should see a ton of environment variables:

LESSCLOSE=/usr/bin/lesspipe %s %s
LESSOPEN=| /usr/bin/lesspipe %s
PROMPT_COMMAND=__bp_precmd_invoke_cmd; dbus-send --type=method_call --session --dest=org.pantheon.terminal /org/pantheon/terminal org.pantheon.terminal.ProcessFinished string:$PANTHEON_TERMINAL_ID string:"$(history 1 | cut -c 8-)" >/dev/null 2>&1;  __bp_interactive_mode;

Now, your user services can start graphical applications and interact with the session.
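As an example of what this unlocks, a user unit can now hang off the login target. The sketch below is illustrative only: the target name (graphical-login.target) and the program (redshift, which needs DISPLAY) are placeholder names of my own, not names from the setup above.

```ini
# ~/.config/systemd/user/redshift.service -- a sketch with placeholder names
[Unit]
Description=Example graphical user service
After=graphical-login.target

[Service]
ExecStart=/usr/bin/redshift
Restart=on-failure

[Install]
WantedBy=graphical-login.target
```

Enable it with systemctl --user enable redshift.service and it will start on each graphical login, with the full session environment available.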

2017Q4HW: Desktop Monitor Roundup

It’s Q4 of 2017 and I’m in the market for new hardware. I’ve been wanting new monitors for my desktop for some time now, so I thought I’d provide my process and my findings for what’s important for modern desktop monitors.

Displays are something that will likely outlast multiple PC builds, so it’s important to choose displays that will last as long as possible, taking into account features you’d like to retain for the next 5-10 years. We’ll now discuss the various facets of choosing the right monitor.

Physical Size vs. Resolution

This is arguably the easiest part of selecting a monitor for me. 27” monitors have been around for a long time and for me they’re the de facto standard. As for resolution, 2560x1440 is a dream at this display size. Text is crisp, and on-screen real-estate is plentiful.

On a single monitor, it’s realistic to put two windows side-by-side, which is helpful when simultaneously reading documentation and writing code. With my setup at work, three 27” 2560x1440 Dell U2717D monitors laid out horizontally, I can have my music player, Slack, Signal Desktop, two Atom text editor windows, and a browser: each vertically maximized and splitting a display in half.

Work Setup

The U2717D is currently on Amazon for around $440. Since I need three of these, every dollar counts.


Another win for the U2717D: bezel size. If you’re laying out multiple monitors next to each other, a wide bezel breaks up the display, leaving unavoidable gaps between the monitors.

Compare the U2717D:


…with my previous monitors, ancient ViewSonic VX2739WMs:

Remember that whatever your bezel is, it is doubled when you put two monitors next to each other. This makes for a two inch gap at home with the ViewSonics versus maybe a half an inch at work with the Dells.

If you’re not planning on a multiple monitor setup, disregard the bezel, unless you like the almost edgeless aesthetic.

Density, AKA DPI

Density is where the Dells fall short. A 4K display at the same physical screen size of 27” has 1,280 more pixels horizontally and 720 more pixels vertically. With double the pixels of 1080p in each dimension, a 4K display is effectively 1080p at double the density. Text is extremely sharp and HiDPI images look amazing.

The reason I’m not interested in 4K for my setup is just that: 4K is 1080p at double the density. On the other hand, a 5K display is 1440p at double the density. At work, some engineers have 5K displays at 27” and they are absolutely gorgeous.
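As a sanity check on these density claims, pixel density (PPI) is just the diagonal pixel count divided by the diagonal size in inches. This quick script is my own arithmetic, not from any vendor spec sheet:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution over diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# 1440p at 27": 109 PPI; 4K at 27": 163 PPI; 5K at 27": 218 PPI
for name, w, h in [("1440p", 2560, 1440), ("4K", 3840, 2160), ("5K", 5120, 2880)]:
    print(f'{name} at 27": {ppi(w, h, 27):.0f} PPI')
```

Note that 5K at 27” is exactly double the density of 1440p at 27”, just as 4K is double 1080p.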

LG UltraFine 5K

The downside? There are a few:

  • $1,300 price tag. You can have three Dell U2717D displays for the price of one of these 5K displays.
  • Power draw: drawing 14.7 million pixels ain’t cheap. With EPA power saving on, your draw is going to be 140W for each of these.
  • GPU power. With the latest NVIDIA GTX 1080 Ti, you can drive at most two of these displays. It might be possible to use SLI, but prepare to hear your fans all the time forever.

It may be possible that in the future, using technologies like NVIDIA’s GSync, power draw and GPU demand will be driven down to the bare minimum required to display something at any one given time. More specifically, it could be possible to lower the refresh rate to 1Hz when the displays are idle, tremendously reducing the GPU and power requirements for an average desktop setup.

My conclusion: 5K is what you want, but for multiple displays, it’s still way out of reach unless money and power are no object.

Display Technology

One thing that’s often neglected in these discussions is the underlying display technology. The main two contenders in this field are IPS and OLED, with heavy bias in desktop and laptop displays toward IPS: almost every display you’ll find is IPS unless you go out of your way.

If the market is so biased, why should one care?

Open a web page that is entirely black and maximize it so that your whole display is black. On an IPS display (most likely the one you’re reading this on right now), you’ll clearly see the edges of your display against the bezel. This is because IPS displays have a backlight: pixels that are entirely black are still backlit, leaving you with a noticeable glow on your display.

On the other hand, OLED displays do not have a backlight. Your phone may have an OLED display. If you open the same web page on an OLED display, you will not see any glow whatsoever. A fully black pixel is one that is completely and totally off.

Some new laptops like the Dell XPS 13 and the ThinkPad X1 Yoga have OLED displays, but they’re still relatively rare. I’m able to find OLED TVs, but not one single desktop monitor. Fingers crossed for 2018, folks.

Refresh Rate

This primarily concerns the gamers out there and not so much those of us who primarily use our computers for work, but refresh rate is a feature you’re likely to think about regardless. Almost every monitor these days has a refresh rate of 60Hz, i.e. it redraws 60 times per second.

At 60FPS, it’s still likely that you’ll notice motion blur and screen tearing at times. Recently, new monitors have entered the market which support 120Hz, 144Hz, or even 165Hz refresh rates.


Above is the ASUS PG279Q 27” monitor, with a 1440p resolution and up to 165Hz refresh rate using NVIDIA’s GSync technology.

Now, the downside of refreshing 165 times per second is that GPUs will have to work that much harder to redraw the display. Your GPU can redraw two 1440p monitors at 60Hz with less GPU load than one of these high refresh rate monitors.

This isn’t fully true, as NVIDIA’s GSync and AMD’s FreeSync allow the monitor and the GPU to speak directly to one another, slowing down the refresh rate when little or no activity is occurring on-screen. This is a huge boon for GPU load in multi-monitor setups: if you’re gaming on your center display and not on your left and right displays, those monitors will reduce their refresh rate and only the center display will drive a heavy load on the GPU.

The ASUS PG279Q is around $750 right now, so it’s around $150 more per display as opposed to the Dell U2717D.

Bottom line: if you’re going to get a high refresh rate monitor, make sure to find one with GSync or FreeSync (as appropriate for your GPU) to optimize GPU load and to save power on both the GPU and the monitor.


Color Profile

One of the more subtle parts of selecting a monitor is the color profile. If at all possible, don’t mix different monitors in a single setup. Even for me, two different factory runs of the ViewSonics led to dramatically different color response on each display. Dell has a good history of getting color right. Otherwise, your mileage may vary.

If you’re super anal and OCD, buying a USB color calibrator can help you tune your monitors to nearly identical output. With the Dells I have at work, I can’t discern any difference between the monitors, so consider this a side quest.

Power Draw

At standby, the Dell U2717D consumes 0.5W of power; on average it consumes 36W, and at maximum load it consumes 88W. If you’re running multiple monitors, the wattage adds up and you might see it on your energy bill.

The ASUS PG279Q 165Hz monitor consumes 0.5W at idle and up to 90W under load.

Notably, the LG UltraFine 5K display consumes up to 140W of power, which isn’t surprising given how many pixels it needs to draw.

Physical Characteristics

Finally, you’ll want to optimize for weight and depth. Monitors lighter in weight are generally preferable, but as is often the case, beggars can’t be choosers.


Triple monitor stands used to be hard to come by, but are now becoming much more generally available.

Here is the KONTOUR Triple Monitor Mount, retailing at around $250:

It supports 27” monitors, which isn’t too common for triple monitor mounts.


To reiterate: monitors last a long time, so choose wisely. What you buy today may very well still be on your desk in ten years. Hopefully this article has clarified the various criteria by which you should choose your next monitor.

Blue/Green Deployments with Route 53

Blue-green deployments with Route 53 involve transferring traffic from one endpoint to another using weighted DNS records. We can automate this relatively easily via the AWS CLI and jq, though some things are left to be desired: the syntax for updating DNS records is quite verbose.

Finding Your Hosted Zone

We can find a hosted zone id by its name using the following AWS CLI script, transformed with jq:

$ aws route53 list-hosted-zones | jq -r --arg name "$name" \
    '.HostedZones[] | select(.Name == $name and .Config.PrivateZone == false) | .Id | ltrimstr("/hostedzone/")'

Do note that the trailing . is important due to our jq matching. Now that we have our hosted zone, we can use it to discover our records.

Discovering Record Identifiers

Weighted record sets have identifiers so that they can be uniquely referenced and updated. Let’s discover those records and their identifiers.

$ aws route53 list-resource-record-sets --hosted-zone-id 147MAIJCWVL9PC \
    --query "ResourceRecordSets[?Name=='']" | jq .
[
  {
    "Name": "",
    "Type": "CNAME",
    "Weight": 100,
    "TTL": 1,
    "SetIdentifier": "blue",
    "ResourceRecords": [
      { "Value": "" }
    ]
  },
  {
    "Name": "",
    "Type": "CNAME",
    "Weight": 0,
    "TTL": 1,
    "SetIdentifier": "green",
    "ResourceRecords": [
      { "Value": "" }
    ]
  }
]

We have now obtained our record sets with their identifiers. Unfortunately, due to the nature of what the AWS API and CLI expect, we’ll need these full literal values to make updates.

Shifting Traffic

Now that we have obtained our two record set identifiers and their weights, we can begin to transfer traffic. It is unfortunately necessary to send all DNS record related data in the request, so it is not possible to omit things:

$ aws route53 change-resource-record-sets --hosted-zone-id 147MAIJCWVL9PC --change-batch '{
  "Comment": "{ blue: 90, green: 10 }",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "",
        "Type": "CNAME",
        "TTL": 60,
        "Weight": 90,
        "SetIdentifier": "blue",
        "ResourceRecords": [{ "Value": "" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "",
        "Type": "CNAME",
        "TTL": 60,
        "Weight": 10,
        "SetIdentifier": "green",
        "ResourceRecords": [{ "Value": "" }]
      }
    }
  ]
}'
In this change set, we have shifted 90% of traffic to blue and 10% to green. The output from this command is:

{
  "ChangeInfo": {
    "Id": "/change/C1F72YWO0VLCD8",
    "Status": "PENDING",
    "Comment": "{ blue: 90, green: 10 }",
    "SubmittedAt": "2017-02-01T22:35:52.221Z"
  }
}

We can use the change Id to wait for the change to fully propagate:

$ aws route53 wait resource-record-sets-changed --id "/change/C1F72YWO0VLCD8"

Once this has completed, the update will have been propagated to all Amazon nameservers. This process can be looped in order to complete the deployment, shifting traffic in batches until completed.

In addition to waiting for nameserver synchronization, it is important to observe DNS TTLs: if your TTL is 60 seconds, you must wait at least a minute after synchronization has been achieved across nameservers so that clients respect the changes to DNS. That is, if the clients behave :wink:, which with DNS isn’t always guaranteed.
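The batching loop can be sketched as follows. This is a hypothetical helper of my own: the record name and CNAME values are placeholders, and the actual AWS calls are left as comments rather than real invocations.

```python
def render_change_batch(name, blue_weight, green_weight, ttl=60):
    """Build the full change batch; Route 53 requires every field on each UPSERT."""
    def upsert(identifier, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "CNAME",
                "TTL": ttl,
                "Weight": weight,
                "SetIdentifier": identifier,
                # placeholder CNAME target, e.g. a load balancer hostname
                "ResourceRecords": [{"Value": f"{identifier}.example.internal."}],
            },
        }

    return {
        "Comment": f"{{ blue: {blue_weight}, green: {green_weight} }}",
        "Changes": [upsert("blue", blue_weight), upsert("green", green_weight)],
    }

# Shift traffic from blue to green in 25% steps:
for blue in (75, 50, 25, 0):
    batch = render_change_batch("app.example.com.", blue, 100 - blue)
    print(batch["Comment"])
    # At each step you would then run:
    #   aws route53 change-resource-record-sets --hosted-zone-id <zone> \
    #       --change-batch '<batch as JSON>'
    #   aws route53 wait resource-record-sets-changed --id <change id>
    # ...and sleep for at least the TTL before the next step.
```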

systemd Sucks, Long Live systemd

systemd seems to be a dividing force in the Linux community. There doesn’t seem to be a middle ground: polarized opinions suggest that you must either love it or want to kill it with fire. I aim to provide that middle ground. First, let’s discuss the awful things about systemd.

The Bad and the Ugly


systemd-escape

The fact that systemd-escape exists screams that there’s something horrifyingly wrong. If you haven’t seen or used this command in the wild, consider yourself blessed.

The use case is running a command like this:

/bin/bash -c 'while true; do \
    /usr/bin/etcdctl set my-container \
        "{\"host\": \"1\", \"port\": $(/usr/bin/docker port my-container 5000 | cut -d":" -f2)}" \
        --ttl 60; \
    sleep 45; \
done'

Now, to be fair, this seems like a bad idea in general, but sometimes you’re writing cloud-init for CoreOS and this is your best option. The newline escapes are mine to make the command more intelligible.

If we were to use this as the contents of an ExecStart command, it fails: systemd is not running a shell, so the quoting, command substitution, and pipes that work in your terminal won’t work in a unit. The straightforward solution would be for systemd to implement something like Python’s shlex or Ruby’s Shellwords, but instead, a bandaid was forged in the bowels of the underworld, systemd-escape:
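To illustrate the kind of splitting systemd lacks, here is Python’s shlex on a trimmed-down version of the command above (the command string here is my simplified example):

```python
import shlex

# shlex splits the string the way a POSIX shell would,
# keeping the escaped-quoted JSON together as one argument.
cmd = 'etcdctl set my-container "{\\"host\\": \\"1\\"}" --ttl 60'
print(shlex.split(cmd))
# ['etcdctl', 'set', 'my-container', '{"host": "1"}', '--ttl', '60']
```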

$ man systemd-escape | head
SYSTEMD-ESCAPE(1)                                            systemd-escape                                            SYSTEMD-ESCAPE(1)

       systemd-escape - Escape strings for usage in system unit names

       systemd-escape [OPTIONS...] [STRING...]

       systemd-escape may be used to escape strings for inclusion in systemd unit names.

Let’s convert the script above to be acceptable to systemd:

$ systemd-escape 'while true;do /usr/bin/etcdctl set my-container "{\"host\": \"1\", \"port\": $(/usr/bin/docker port my-container 5000 | cut -d":" -f2)}" --ttl 60;sleep 45;done'


Now agreed, if your workflow demands that you embed a Bash while loop in a unit, you’re already in a bad place, but there are times where this is required for templating purposes.

Binary Logs

If you weren’t aware, journald stores its logs in a binary format. This breaks the typical tools we are accustomed to using for monitoring a system: tail, cat, less, and grep aren’t useful any more. A binary format also makes log corruption a real risk. If a plaintext log accidentally gets binary content in it, most tools like vim and less will handle it gracefully; if a binary log gets corrupt data in the wrong place, your logs are toast.

The justification for storing logs in a binary format was speed and performance: binary logs are more easily indexed and faster to search. It was nevertheless a difficult choice, with obvious consequences for end users on either side of the debate: fast logging can be had, but users need to learn the new journalctl command and can’t use the tools they’re familiar with.

I don’t see binary logs as a bad thing, but it was yet another hurdle to systemd adoption. I’ll review logging later on in the post and defend my position on why I think that journald was a good idea.

The Good

Now, let us turn our attention to the benefits that systemd brings us. I believe these are the reasons that nearly every major Linux distribution has adopted systemd.


Let’s just start by comparing a SysV init script for ZooKeeper, which is 169 lines of fragile shell script, as indicated by comments throughout their source code:

# for some reason these two options are necessary on jdk6 on Ubuntu
#   accord to the docs they are not necessary, but otw jconsole cannot
#   do a local attach

Let’s realize the above as a systemd unit:


[Service]
ExecStart=/usr/bin/java -cp ${ZK_CLASSPATH} ${JVM_FLAGS} org.apache.zookeeper.server.quorum.QuorumPeerMain ${ZOO_CFG_FILE}


I wrote that in less than ten minutes. Admittedly, it requires an environment file which defines the following variables:

ZK_CLASSPATH
JVM_FLAGS
ZOO_CFG_FILE
But… that’s it. It’s done.
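Putting it together, the whole unit might look like the sketch below. The environment file path and the install target here are my assumptions, not from any upstream package:

```ini
# /etc/systemd/system/zookeeper.service -- a sketch; paths are assumptions
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
EnvironmentFile=/etc/sysconfig/zookeeper
ExecStart=/usr/bin/java -cp ${ZK_CLASSPATH} ${JVM_FLAGS} org.apache.zookeeper.server.quorum.QuorumPeerMain ${ZOO_CFG_FILE}
Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target
```

Compare that to 169 lines of fragile shell.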

If this process just logs to standard output and standard error, its logs will be recorded by the journal, and can be followed, indexed, searched, and exported using syslog-ng or rsyslog. I’ll review logging below.

Process Supervision

Back in the day, we used something like supervisord to make sure our processes stayed running, because before systemd, if you didn’t write it, it didn’t happen. Don’t assume that the init scripts running your system services actually monitored the processes they started; they didn’t. Services could segfault and stay down until someone intervened manually.

Enter systemd:

[Service]
Restart=always
RestartSec=1
This tells systemd that if this process crashes, it should wait one second and then always restart it. If you stop the service, it will stay off until you start it again, just as you’d expect. Additionally, systemd will log when and why the process crashed, so finding issues later on is straightforward.

Process Scheduling

Back in the dark days of Sys V init scripts, what were our options for starting a service after another service? Further, what were our options for starting service A after service B but before service C? The best option was this:

# in serviceA's init script: wait until serviceB is running
while true ; do
  if pgrep serviceB >/dev/null ; then
    break
  fi
  sleep 1
done
For starting service A before service C, we’d need to amend service C’s init script and add a similar while loop to detect and wait for service A. Needless to say, this is a disaster.

Enter systemd:

[Unit]
Description=Service A
After=serviceB.service
Before=serviceC.service

And that’s all. There is nothing left to do. systemd will create a service dependency graph and will start the services in the correct order, and you’ll have a guarantee that serviceA will start after serviceB but before serviceC.

What’s even better is unit drop-ins, which I’ll cover shortly. In a nutshell, it means that it’s easy to drop in additional unit files to a unit without rewriting the source unit file.

Bonus Points: Conditional Units

systemd also makes it easy to conditionally start units:

[Unit]
Description=Service A
ConditionPathExists=/etc/sysconfig/serviceA

This will make Service A only start if the /etc/sysconfig/serviceA file is present. There are many different conditionals available, and all of them can be inverted.

Bonus Points: Parallelism

Since systemd knows the dependency ordering of all of its units, booting a Linux machine with systemd is much faster than on older init systems: independent services are started concurrently.

Unit Overloading

As discussed above, systemd makes it trivial to drop in additional configuration for a given unit to extend it. Let’s say that we need to only start rsyslog after cloud-final has run. cloud-final is the final stage of cloud-init running.

The source file for the rsyslog.service unit lives at /usr/lib/systemd/system/rsyslog.service, but we won’t be editing that file. We will create a systemd drop-in unit at /etc/systemd/system/rsyslog.service.d/after-cloudinit.conf:

[Unit]
After=cloud-final.service
The final name of the file isn’t entirely relevant, so long as it ends in .conf. Whatever is defined in this file will be appended into the default unit file. This small drop-in will make sure that rsyslog does not start until cloud-final.service has started/finished.

EDIT: It was pointed out to me on Twitter that systemd loads these files in alphabetical order. In order to maintain sanity amid chaos, it’s probably a good idea to name these with numerical prefixes so that load order is intelligible, i.e. %02d-%s.conf.

Overwriting Units

What if certain bits of the underlying unit need to be removed entirely? Removing them is simple:

[Service]
ExecStartPre=
What we have done here in our overloading unit is to remove all ExecStartPre blocks from the upstream unit. If we add another ExecStartPre line underneath the empty one, we can provide our own pre-start scripts completely different than those provided upstream.


Logging

Logging with systemd is incredibly straightforward and sports all the bells and whistles one would want. If a process simply logs to standard output or standard error, by default its logs will go into the journal. Looking up those logs is then trivial:

$ sudo journalctl -u rsyslog.service

This will open the log history of rsyslog.service in a less-style pager. Following a unit is also easy:

$ sudo journalctl -u rsyslog.service -f

This is basically the equivalent of tailing a log file.

Logs are rotated automatically by the journal, and this can be configured; no more logrotate nonsense, the journal just handles it.
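For instance, retention can be capped in journald’s configuration. The values below are illustrative, not defaults:

```ini
# /etc/systemd/journald.conf -- illustrative values, not defaults
[Journal]
# cap total disk used by persistent journals
SystemMaxUse=500M
# drop entries older than a month
MaxRetentionSec=1month
```

After editing, restart systemd-journald for the new limits to take effect.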

Plugging rsyslog or syslog-ng into the journal is simple, and this means that none of your applications need to speak syslog: their standard output goes into the journal and is imported and forwarded according to your syslog configuration.

Go Forth and Learn

We’ve covered a lot of ground here. I have personally bookmarked the following pieces of documentation for systemd to help me write units:

I haven’t even covered glorious systemd mount points, timers, or many of the security related options that systemd affords. I have also not covered the userspace tools systemctl and journalctl, which are documented here:

I was definitely in the “systemd sucks” camp for a long time, until I started investigating what systemd actually made possible. I now see systemd as a necessary part of my system-level infrastructure and it has become increasingly difficult to do without it on older distributions.

PSA: Don't Break Public APIs

The date is December 1st, 2016. Amazon announces an interesting new feature for CloudFront allowing running Lambda functions at CloudFront edge locations. This is a powerful addition to the AWS arsenal for running code in locations geographically closest to users. This feature is a “preview” feature, and it’s opt-in only.

What was not mentioned, however, was a change to the CloudFront API. Namely, Amazon added a field to DefaultCacheBehavior objects in CloudFront Distributions which is documented as not required, but is nevertheless required, resulting in the following error message when UpdateDistribution is called:

InvalidArgument: The parameter Lambda function associations is required.

Their documentation states:


A complex type that contains zero or more Lambda function associations for a cache behavior.

Type: LambdaFunctionAssociations

Required: No

Emphasis on “no” is mine.

Of course, the reality is that this parameter is required, and not passing that XML element breaks all API calls, as seen and documented in this Terraform bug report. A simple hack works around the issue by always creating an empty <LambdaFunctionAssociations> block for every request:

diff --git a/builtin/providers/aws/cloudfront_distribution_configuration_structure.go b/builtin/providers/aws/cloudfront_distribution_configuration_structure.go
index b891bd26b..1eff7689f 100644
--- a/builtin/providers/aws/cloudfront_distribution_configuration_structure.go
+++ b/builtin/providers/aws/cloudfront_distribution_configuration_structure.go
@@ -261,6 +261,9 @@ func expandCacheBehavior(m map[string]interface{}) *cloudfront.CacheBehavior {
                MinTTL:               aws.Int64(int64(m["min_ttl"].(int))),
                MaxTTL:               aws.Int64(int64(m["max_ttl"].(int))),
                DefaultTTL:           aws.Int64(int64(m["default_ttl"].(int))),
+               LambdaFunctionAssociations: &cloudfront.LambdaFunctionAssociations{
+                       Quantity: aws.Int64(0),
+               },
        }
        if v, ok := m["trusted_signers"]; ok {
                cb.TrustedSigners = expandTrustedSigners(v.([]interface{}))
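On the wire, the patch amounts to always sending an empty associations element in the distribution config, shaped roughly like this (a sketch of the relevant fragment, using the CloudFront API’s element names; the surrounding fields are omitted):

```xml
<DefaultCacheBehavior>
  <!-- ...other required fields... -->
  <LambdaFunctionAssociations>
    <Quantity>0</Quantity>
  </LambdaFunctionAssociations>
</DefaultCacheBehavior>
```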

Thankfully, a full fix is forthcoming from the Terraform community, but this isn’t Terraform’s problem. Amazon broke the interface to this API without warning and contrary to their own documentation, which says the field isn’t required. The change would have broken their CLI if not for a fix in botocore, and it even appears to have broken parts of their web interface: configuring an S3 origin is broken and doesn’t appear to work for adding origin access identities.

All of this added up to finding myself in a predicament: I had tampered with my origin configuration on CloudFront and I had no way of returning to a sane state. I couldn’t use Terraform to revert, the CLI was very hard to work with, and I couldn’t use the web interface to revert.


I was able to ultimately get around the issue by manually compiling Terraform myself after patching the source code. After recompiling, I was able to apply changes again and get my distribution working. Again, Terraform is not at fault here, it’s entirely Amazon’s fault for breaking a public API.

Lessons Learned

API breakage, whether we like it or not, happens.

However, the fact that Amazon could ship software that breaks things like this, with documentation contrary to the actual behavior, all without any testing alarms going off, represents a serious violation of trust for me as a user of Amazon Web Services.

It should go without saying to developers of REST APIs that if you introduce backwards incompatible changes, you :clap: must :clap: bump :clap: the API version in the URL.

Amazon, please update your documentation or please make LambdaFunctionAssociations a truly optional field in DefaultCacheBehavior. In the meantime, everyone should scramble and try to work around this API breakage.