An urban mystic, pining for conifers in a jungle of concrete and steel.

dnsmasq Everywhere

dnsmasq is one of those pieces of software that you might not be familiar with. Personally, on all new deployments for employers or customers, I advocate for dnsmasqing all the things.


dnsmasq, at a high level, is a DNS resolver. However, it’s decidedly not BIND. Think of it more as a smart DNS router and caching layer.

Consider the following network layout:

Network Diagram

This is based on a real environment that I worked with; the names have been changed to protect the guilty.

In a nutshell, there was a VPC in Amazon in us-east-1, a VPC in Amazon in us-west-2, and a datacenter somewhere in between. The VPCs in Amazon could not talk to each other, but could each communicate with the datacenter. The web applications in the diagram all ran the same codebase with the same configuration, using a MySQL cluster located in the datacenter. Yeah, it’s quite inefficient to query a MySQL cluster across the continental US, but this isn’t a contrived example: this was something we had to support.

In order for the VPCs to resolve the IP addresses of the MySQL cluster, we injected the IP addresses for ns1 and ns2 into Amazon’s DHCP option set, yielding three resolvers: our two resolvers and the Amazon provided DNS resolver.

After vigorous testing, we brought us-east-1 online to begin serving traffic, using a CDN to geographically route to the nearest set of servers. Almost immediately, our average request time skyrocketed. We immediately suspected that this was related to MySQL, but after a lot of digging, it turned out to be DNS.

It's not DNS
There's no way it's DNS
It was DNS

If you’ve been in the industry long enough, this haiku will strike home and strike hard.

What happened? Well, it turned out that all DNS queries were crossing the US. Where can I find the database? Here, lemme ask the west coast about that. Where can I find anything at all? Here, lemme ask the west coast about that. Not only would a query cross the US once when resolving our own DNS entries, it would cross the US twice before failing over to the local Amazon DNS resolver for everything else.

This was bad. However, a quick web search for “dns caching resolver” brought up dnsmasq. We subsequently installed dnsmasq on all of our servers, using a dhclient script to prepend 127.0.0.1 as the first and primary resolver. This meant that all DNS queries would go to the local dnsmasq service first.

The first order of business was to figure out a way to prepend 127.0.0.1 as the first resolver. On CentOS 7 at least, this was accomplished by creating /etc/dhclient.conf with the following contents:

prepend domain-name-servers 127.0.0.1;

After this was created, systemctl restart NetworkManager.service correctly updated /etc/resolv.conf with our local resolver as the first resolver:

# Generated by NetworkManager
search nowhere
nameserver 127.0.0.1
nameserver 172.31.0.2
options single-request-reopen

Note that 172.31.0.2 is the Amazon provided DNS resolver; the exact address depends on your VPC's network range.

All DNS entries have a time-to-live, or TTL, value specifying how long, in seconds, a record should be considered valid. This is the basic premise of DNS: things change, but to make lookups more efficient, records are cached for as long as they are considered valid. Just like your CDN should cache your assets for as long as it is reasonable to do so, DNS entries should be cached for as long as they are allowed to live.

Unfortunately, as in the haiku above, almost everything seems to get DNS wrong. It either doesn’t cache at all, or permanently caches records, leading to stale data.

As an aside, there is a bug in the ZooKeeper Java client libraries: they only resolve DNS names once. If you change the A record of a ZooKeeper node to point to a different host, a running JVM process will never find it until you restart the process.

Therefore it’s either surprising or entirely unsurprising that gethostbyname(3) makes no effort to cache values, making things very inefficient for languages like PHP (which was the web application in question): every HTTP request that PHP makes to other services pays whatever resolution latency gethostbyname(3) involves.

So what do we do, rewrite PHP? Find every bad DNS implementation in every library in every language? Well, we could, but I don’t know about you: I’ve got better things to do with my time.

Enter dnsmasq: a caching DNS resolver with smart routing rules. Resolving a name for the first time will take tens to hundreds of milliseconds, but the next time, it’ll usually take less than a millisecond:

$ dig example.com | grep -P 'Query\stime:'
;; Query time: 63 msec
$ dig example.com | grep -P 'Query\stime:'
;; Query time: 0 msec

Sending and receiving UDP packets on the local interface is, unsurprisingly, pretty fast. If your application is making hundreds or thousands of DNS requests per second, dnsmasq will collapse these into a single outbound request to a resolver, cache the value, and re-request the record when the TTL is up, serving the old value until a new one is received.
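dnsmasq's cache can also be tuned beyond its defaults. These directives are real dnsmasq options, but the values below are illustrative, not what we actually ran:

```
# in /etc/dnsmasq.conf
# raise the cache size; the default is only 150 names
cache-size=10000
# optionally, don't cache negative (NXDOMAIN) answers
no-negcache
```

A bigger cache mostly costs a little memory, and for a busy application server it keeps far more names answerable in under a millisecond.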

Before we get to the final summary, here’s how we configured dnsmasq; this is /etc/dnsmasq.conf:

# listen only at localhost for security
listen-address=127.0.0.1
# drop privileges to user 'nobody' after binding port 53
user=nobody
# route requests to all backends (zone and addresses shown are examples)
server=/nowhere/10.0.0.11
server=/nowhere/10.0.0.12

And that’s it. dnsmasq will automatically discover the other resolver from /etc/resolv.conf, which is the Amazon provided DNS resolver. All requests for DNS names within the datacenter will be sent to one of the two nameserver instances in the datacenter; all other requests will be sent to the local Amazon resolver. In Amazon, the IP of your nameserver is somewhat deterministic, being the third address in your VPC's range (the base plus two); in Google Cloud, this value is always 169.254.169.254. Either way, it’s not necessary to hardcode it into the dnsmasq configuration: if it’s in /etc/resolv.conf, dnsmasq will detect it automatically.

Not only did this fix our issue, but it made everything across all of our servers that involved DNS faster.

Finally, let’s consider the following example:

Bidirectional DNS

In our previous example, we only needed unidirectional DNS; our instances in AWS needed to be able to resolve names in our datacenter, but the datacenter didn’t need to resolve names in AWS. In this new example (also something that I helped build), we consider a setup for bidirectional DNS resolution between the datacenter and GCP.

This is a two-tier caching setup for bidirectional name resolution. A DNS query from a process goes to the dnsmasq instance on the local machine; it is then intelligently routed, based on the zone name, to the “bridge” dnsmasq instance within the network, which routes the request over the interconnect to a dnsmasq resolver on the other side.
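To make that concrete, here is a hedged sketch of what the “bridge” dnsmasq configuration on one side might look like; every zone name and address below is hypothetical:

```
# /etc/dnsmasq.conf on the GCP-side bridge (all names/addresses hypothetical)
listen-address=10.128.0.5
user=nobody
# queries for the datacenter's zone cross the interconnect
server=/dc.nowhere/192.168.0.53
# everything else goes to the Google Cloud resolver
server=169.254.169.254
# cache aggressively to step down load on the far side
cache-size=10000
```

The datacenter-side bridge is the mirror image: its server=/zone/ line points back across the interconnect at the cloud side's resolver.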

This does exceed the bare minimum of what would be required for bidirectional resolution, but not by much. The bare minimum would consist of two DNS servers, one on each side of the connection between networks. We simply enhance this bare minimum by adding caching as a feature in multiple stages.

I’ve seen extremely large deployments (tens of thousands of servers), and at that scale, you’d be surprised at how the oft-forgotten low-level network primitives like DNS can flare up to cause colossal problems. The architecture diagrammed above uses multiple caching layers to dramatically step down the amount of load placed on DNS resolvers in addition to making most queries much faster.

5,000 servers request a DNS record at localhost, then at the dnsmasq instance within the local network, then at the dnsmasq instance within the remote network, if that’s where the request is heading. At each stage, we step down the number of requests, and resolved names are only re-requested when they expire. 5,000 DNS requests to a single dnsmasq instance result in a single DNS query from that dnsmasq to its configured resolver for that zone. These instances then all cache DNS locally as well, dramatically reducing the pressure on resolvers.

So that’s it. There is a lot more that dnsmasq can do, but we’ve seen how simple it is to deploy and how it can reduce request latency for everything that uses DNS, which is almost everything on your system.

The Logitech G700s Mouse

In today’s episode of “discontinued stuff that I rely on,” enter the Logitech G700s mouse. Feast your eyes on this:

Logitech G700s

Before there was the G700s, there was the G700, which was, for all intents and purposes, identical to this mouse, minus some of the decorative graphics.

Here are a series of images to get you familiar with the layout of the mouse. These are all pictures of the G700 and not the G700s, but the button layout is identical.

An aerial view:

G700 Above

From the side:

G700 Side

Apart from the mouse wheel, its button, and the left and right click buttons, there are eight additional programmable buttons: three near the first finger, one behind the mouse wheel, and four paddles on the side near the thumb.

This is a gaming mouse, but I don’t use it as a gaming mouse; I have programmed it for productivity and hacking.

All the Features

Before we drop into my actual configuration, let’s just run through the features:

  • Uses rechargeable batteries; can be used while simultaneously charging over micro USB. (This wouldn’t be a feature if Apple hadn’t made the Magic Mouse so bad.)
  • Varying DPI settings; anywhere from 200 to 8200(!) DPI is possible.
  • Varying battery use settings; lower the report rate to save battery, or raise it for gaming.
  • Scroll wheel with tilt, a middle button, and free/infinite scroll via a toggle: either ratcheting clicks or resistance-free spinning.
  • Multiple profiles; you can program up to five profiles into the mouse’s firmware and change between them via buttons.

Pretty cool stuff. Let’s discuss how I use it.

Profile 1: Default

My default profile is a pretty well-rounded general productivity setup. I have media control, browser forward/back control, three DPI modes, and an “expose” button to show all windows on my desktop.

Here is the profile configuration:

Default Profile

Additionally, here is a .dat export of the profile that can be imported via the Logitech Gaming Software.


From my thumb, I can skip forward and backward in my media player (upper paddles), and can navigate forward and backward in my browser (lower paddles).

From my first finger on top of the mouse, I can cycle through three DPI settings (furthest button), play/pause my media player (middle button), and change the profile (closest button). The button behind the mouse wheel simply executes the “expose” command on my windows for elementary OS.

I have all profiles set to save battery, reporting as seldom as possible.

The DPI settings are:

  • 600: perfect for editing things in Photoshop or GIMP with a high degree of accuracy.
  • 1200: perfect for mousing around on a laptop.
  • 1900: perfect for navigating three 27” 1440p monitors :trollface:

In summary: I can control my media player, the browser, DPI, profile, and manage windows without lifting my hand from my mouse.

Profile 2: BIOS

The other profile I keep around is a BIOS profile for entering and navigating a BIOS menu. This has proved to be quite useful.

Here is the profile configuration:

BIOS Profile

Additionally, here is a .dat export of the profile that can be imported using the Logitech Gaming Software.


Left and right click work normally. The middle mouse button will send the Enter key, tilting left on the wheel will send Left, and tilting right on the wheel will send Right. The furthest button on top sends Up, the middle button on top sends Down, and the closest button changes the profile, just like the default profile above. The button behind the mouse wheel sends Delete to trigger the BIOS menu during boot. If your BIOS uses a different key, program that key here.

In summary: I can enter and navigate a BIOS without lifting my hand from my mouse.


Unfortunately, like everything else I love, this mouse and its predecessor have been discontinued by Logitech. I was able to obtain a new one from eBay, but as time goes on, they will likely get harder and harder to obtain.

Logitech hasn’t launched a mouse with a similar form factor since discontinuing it, so until they do, HODL.

The Apple USB Keyboard

I can’t quit you babe…

The Apple USB Keyboard

I honestly can’t stop using hardware that isn’t made anymore. Today, I’ll be discussing my favorite keyboard, the Apple Wired USB Keyboard without the num-keys. Let’s break down why I can’t stop using this keyboard.

Media Keys

Simply put, the media keys are right there. I don’t have to look to play/pause or skip tracks, it’s all just a short reach away. Volume up and volume down are right there. Things just work, which is the overarching theme here.

PageUp, PageDown, Home, and End

Probably the most compelling feature of this keyboard on Linux is the key bindings for function keys and arrows:

  • Fn+Up: equivalent to PageUp
  • Fn+Down: equivalent to PageDown
  • Fn+Left: equivalent to Home; moves the cursor to the absolute beginning of the line.
  • Fn+Right: equivalent to End; moves the cursor to the absolute end of the line.

Normally, keyboards banish PageUp, PageDown, Home, and End to a hard-to-reach location in the upper right near the numpad. When this is the case, these keys are very awkward to use, so I simply don’t use them.

Placing these as simple Fn+Direction shortcuts allows me to navigate code editors, text fields, browser URL bars… it works everywhere, because it’s built into the kernel’s keymap for these keyboards.

Here are some use cases:

  • Need to indent a line more or less? Fn+Left to navigate to the beginning of the line, then Tab, then Fn+Right to head back to the end of the line.
  • Need to delete an entire line? Shift+Fn+Left will select the entire line, and then a simple Backspace or Delete and the line is gone.

The amazing thing about these bindings is that they mesh very well with normal navigation: it’s simply a different modifier key to move around the line, rather than around words as with Ctrl+Direction. It just works, and I have yet to find another keyboard that replicates this extremely important functionality.

I own one of these keyboards for all of my machines, laptops and desktops.


It’s hard to gauge comfort, but I don’t get any pain whatsoever using this keyboard. I like the action, and I don’t get fatigue.

It’s hard to say whether this will be the same for everybody; I type in Dvorak, so I admittedly use the keyboard differently than most people.


It’s USB! Gaming just works. The Apple Wireless Keyboard, being Bluetooth, works around 95% of the time: sometimes it’ll disconnect, and the batteries eventually die. With USB, all of these problems disappear.

On top of this, underneath the keyboard are two USB ports. I typically have my YubiKey hanging off of one and my mouse’s USB receiver off of the other. The convenience of this cannot be overstated.

Epilogue: Find Me a Better Keyboard

I actually ordered a test set from WASD Keyboards which demonstrates all of the different types of Cherry MX switches. MX Browns are probably what I would use in the end, but that’s contingent on finding a keyboard which mirrors this layout, has media keys in the same place, and supports the same key shortcuts detailed above.

Find me a mechanical keyboard which does all this, and maybe, just maybe, I’ll be able to move on from these discontinued keyboards I have to obtain via eBay.

Integrating systemctl --user and your Graphical Session

For those of us nerds who have been running Linux on the desktop for years now, the addition of systemd in recent releases has opened up some interesting user service management options. While not often discussed, systemctl --user manages a per-user systemd instance which sports most, if not all, of the features of the parent PID 1 systemd.

One issue I ran into on elementary 0.4 Loki (Ubuntu 16.04) was that the environment of systemctl --user was pretty barren, looking something like this:

naftuli@laststand:~$ systemctl --user show-environment
HOME=/home/naftuli
LANG=en_US.UTF-8
LOGNAME=naftuli
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
XDG_RUNTIME_DIR=/run/user/1000

This constitutes the bare minimum of what a user service might need. However, for starting user services on the desktop, this environment doesn’t have a reference back to DISPLAY or anything else about the desktop session.

I found a workaround for this limitation by defining a script called by the desktop session on login, and a user target, user-graphical-login.target, which other services can depend on.

My script simply imports the current environment and starts user-graphical-login.target, allowing other units to start:

#!/usr/bin/env bash

systemctl --user import-environment
systemctl --user start user-graphical-login.target

Drop this wherever you’d like, and then add it as a startup script in your distribution’s startup applications:


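If your desktop doesn’t offer a startup applications GUI, an XDG autostart entry accomplishes the same thing; this is a sketch, and the script path here is an assumption standing in for wherever you dropped the script:

```
# ~/.config/autostart/user-graphical-login.desktop (path and name hypothetical)
[Desktop Entry]
Type=Application
Name=User Graphical Login
Exec=/home/naftuli/.local/bin/graphical-login.sh
```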
Next, let’s define our target at ~/.config/systemd/user/user-graphical-login.target:

[Unit]
Description=User Graphical Login

Log out, log back in, and then check that the target has been started:

naftuli@laststand:~$ systemctl --user status user-graphical-login.target
● user-graphical-login.target - User Graphical Login
   Loaded: loaded (/home/naftuli/.config/systemd/user/user-graphical-login.target; static; vendor preset: enabled)
   Active: active since Wed 2017-12-27 11:54:16 PST; 1 day 3h ago

Dec 27 11:54:16 laststand systemd[1938]: Reached target User Graphical Login.
Dec 27 11:54:27 laststand systemd[1938]: Reached target User Graphical Login.

If we now dump the environment from systemctl --user show-environment, we should see a ton of environment variables:

LESSCLOSE=/usr/bin/lesspipe %s %s
LESSOPEN=| /usr/bin/lesspipe %s
PROMPT_COMMAND=__bp_precmd_invoke_cmd; dbus-send --type=method_call --session --dest=org.pantheon.terminal /org/pantheon/terminal org.pantheon.terminal.ProcessFinished string:$PANTHEON_TERMINAL_ID string:"$(history 1 | cut -c 8-)" >/dev/null 2>&1;  __bp_interactive_mode;

Now, your user services can start graphical applications and interact with the session.
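As an example of what this enables, a user unit can now order itself after the graphical target. This is a sketch assuming the target is named user-graphical-login.target as above, with redshift standing in for any program that needs DISPLAY:

```
# ~/.config/systemd/user/redshift.service (example unit)
[Unit]
Description=Redshift color temperature daemon
After=user-graphical-login.target

[Service]
ExecStart=/usr/bin/redshift

[Install]
WantedBy=user-graphical-login.target
```

Enable it with systemctl --user enable redshift.service, and it will come up once the target is reached at login, inheriting the imported session environment.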

2017Q4HW: Desktop Monitor Roundup

It’s Q4 of 2017 and I’m in the market for new hardware. I’ve been wanting new monitors for my desktop for some time now, so I thought I’d provide my process and my findings for what’s important for modern desktop monitors.

Displays are something that will likely outlast multiple PC builds, so it’s important to choose displays that will last as long as possible, taking into account features you’d like to retain for the next 5-10 years. We’ll now discuss the various facets of choosing the right monitor.

Physical Size vs. Resolution

This is arguably the easiest part of selecting a monitor for me. 27” monitors have been around for a long time and for me they’re the de facto standard. As for resolution, 2560x1440 is a dream at this display size. Text is crisp, and on-screen real-estate is plentiful.

On a single monitor, it’s realistic to put two windows side-by-side, which is helpful when simultaneously reading documentation and writing code. With my setup at work, three 27” 2560x1440 Dell U2717D monitors laid out horizontally, I can have my music player, Slack, Signal Desktop, two Atom text editor windows, and a browser open at once, each vertically maximized, two to a display.

Work Setup

The U2717D is currently on Amazon for around $440. Since I need three of these, every dollar counts.


Another win for the U2717D: bezel size. If you’re laying out multiple monitors next to each other, a wide bezel breaks up the display, leaving unavoidable gaps between the monitors.

Compare the U2717D:


…with my previous monitors, ancient ViewSonic VX2739WMs:

Remember that whatever your bezel width is, it is doubled when you put two monitors next to each other. This makes for a two-inch gap at home with the ViewSonics versus maybe half an inch at work with the Dells.

If you’re not planning on a multiple monitor setup, disregard the bezel, unless you like the almost edgeless aesthetic.

Density, AKA DPI

Density is where the Dells fall short. At the same physical screen size of 27”, a 4K display has 1,280 more pixels horizontally and 720 more pixels vertically than 1440p. With double the pixels of 1080p in each dimension, a 4K display is effectively 1080p at double the density. Text is extremely sharp and HiDPI images look amazing.

The reason I’m not interested in 4K for my setup is just that: 4K is 1080p at double the density. On the other hand, a 5K display is 1440p at double the density. At work, some engineers have 5K displays at 27” and they are absolutely gorgeous.
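The density arithmetic is easy to verify yourself: pixels per inch is the diagonal pixel count divided by the diagonal size in inches. A quick sketch using awk:

```shell
# PPI = sqrt(width^2 + height^2) / diagonal_inches
ppi() {
  awk -v w="$1" -v h="$2" -v d="$3" 'BEGIN { printf "%.0f\n", sqrt(w*w + h*h) / d }'
}

ppi 2560 1440 27   # 1440p at 27"
ppi 3840 2160 27   # 4K at 27"
ppi 5120 2880 27   # 5K at 27"
```

At 27”, 1440p works out to roughly 109 PPI, 4K to roughly 163 PPI, and 5K to roughly 218 PPI, which is exactly the “double the density” relationship described above.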

LG UltraFine 5K

The downside? There are a few:

  • Price: at $1,300, you can have three Dell U2717D displays for the price of one of these 5K displays.
  • Power draw: driving 14.7 million pixels ain’t cheap. Even with EPA power saving on, each of these will draw around 140W.
  • GPU power: with the latest NVIDIA GTX 1080 Ti, you can drive at most two of these displays. It might be possible to use SLI, but prepare to hear your fans, all the time, forever.

In the future, technologies like NVIDIA’s GSync may drive power draw and GPU demand down to the bare minimum required to display something at any given time. More specifically, it could become possible to lower the refresh rate to 1Hz when a display is idle, tremendously reducing the GPU and power requirements of an average desktop setup.

My conclusion: 5K is what you want, but for multiple displays, it’s still way out of reach unless money and power are no object.

Display Technology

One thing that’s often neglected in these discussions is the underlying display technology. The main two contenders in this field are IPS and OLED, with heavy bias in desktop and laptop displays toward IPS: almost every display you’ll find is IPS unless you go out of your way.

If the market is so biased, why should one care?

Open this web-page and maximize it so that your entire display is black. On an IPS display (most likely like the one you’re reading this on right now), you’ll clearly see the edges of your display against your bezel. This is because IPS displays have a backlight. Pixels that are entirely black are still backlit, leaving you with a noticeable glow on your display.

On the other hand, OLED displays do not have a backlight. Your phone may have an OLED display. If you open the same web page on an OLED display, you will not see any glow whatsoever. A fully black pixel is one that is completely and totally off.

Some new laptops like the Dell XPS 13 and the ThinkPad X1 Yoga have OLED displays, but they’re still relatively rare. I’m able to find OLED TVs, but not one single desktop monitor. Fingers crossed for 2018, folks.

Refresh Rate

This primarily concerns the gamers out there, not so much those of us who primarily use our computers for work, but refresh rate is a feature you’re likely to think about regardless. Almost every monitor these days has a refresh rate of 60Hz, or 60 frames per second.

At 60FPS, it’s still likely that you’ll notice motion blur and screen tearing at times. Recently, new monitors have entered the market which support 120Hz, 144Hz, or even 165Hz refresh rates.


Above is the ASUS PG279Q 27” monitor, with a 1440p resolution and up to 165Hz refresh rate using NVIDIA’s GSync technology.

Now, the downside of refreshing 165 times per second is that GPUs will have to work that much harder to redraw the display. Your GPU can redraw two 1440p monitors at 60Hz with less GPU load than one of these high refresh rate monitors.

This isn’t fully true: NVIDIA’s GSync and AMD’s FreeSync allow the monitor and the GPU to speak directly to one another, lowering the refresh rate when little or no activity is occurring on-screen. This is a huge boon for GPU load in multi-monitor setups: if you’re gaming on your center display and not on your left and right displays, those monitors will reduce their refresh rate, and only the center display will be driving a heavy load on the GPU.

The ASUS PG279Q is around $750 right now, roughly $300 more per display than the Dell U2717D.

Bottom line: if you’re going to get a high refresh rate monitor, make sure to find one with GSync or FreeSync (as appropriate for your GPU) to optimize the load on your GPU and to save power on both the GPU and the monitor.


One of the more subtle parts of selecting a monitor is the color profile. If at all possible, don’t mix different monitors in a single setup. Even for me, two different factory runs of the ViewSonics led to dramatically different color response on each display. Dell has a good history of getting color right. Otherwise, your mileage may vary.

If you’re super anal and OCD, buying a USB color calibrator can help you tune your monitors to nearly identical output. With the Dells I have at work, I can’t discern any difference between the monitors, so consider this a side quest.

Power Draw

At standby, the Dell U2717D consumes 0.5W of power; on average it consumes 36W, and at maximum load it consumes 88W. If you’re running multiple monitors, the wattage adds up, and you might see it on your energy bill.

The ASUS PG279Q 165Hz monitor consumes 0.5W at idle and up to 90W under load.

Notably, the LG UltraFine 5K display consumes up to 140W of power, which isn’t surprising given how many pixels it needs to draw.
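For a rough sense of what those wattages mean on an energy bill, multiply them out over a year; the daily hours and price per kWh below are assumptions, so plug in your own:

```shell
# yearly_cost <watts> <hours_per_day> <dollars_per_kwh>
yearly_cost() {
  awk -v w="$1" -v h="$2" -v r="$3" 'BEGIN { printf "$%.0f\n", w / 1000 * h * 365 * r }'
}

yearly_cost 108 8 0.20   # three U2717Ds averaging 36W each
yearly_cost 420 8 0.20   # three UltraFine 5Ks at 140W each
```

Under those assumptions, a triple U2717D setup runs around $63 a year, while a triple UltraFine 5K setup runs around $245 a year; not ruinous either way, but the gap is real.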

Physical Characteristics

Finally, you’ll want to consider weight and depth. Monitors lighter in weight are generally preferable, particularly if you plan to put them on a mount, but as is often the case, beggars can’t be choosers.


Triple monitor stands used to be hard to come by, but are now becoming much more generally available.

Here is the KONTOUR Triple Monitor Mount, retailing at around $250:

It supports 27” monitors, which isn’t too common for triple monitor mounts.


To reiterate: monitors last a long time, so choose wisely. What you buy today may very well still be on your desk in ten years. Hopefully this article has clarified the various criteria by which you should choose your next monitor.