Twenty seven

October 1st passed just yesterday. Another year in the pandemic, though I tried to make the best of it! Here’s the 27th edition of my yearly reflections on what I’ve been up to, what I’ve achieved, and where I’ve grown. You can find previous years’ reflections going back to when I turned twenty: Twenty, Twenty-one!, Twenty-Two, Twenty-Three!, Twenty-four!, Twenty Five, and Twenty Six.

I made the biggest purchase of my life, buying a duplex here in Ottawa’s Centretown neighbourhood. I’ve also become a lead of leads in my engineering organization. I’ve even had a few big accomplishments in my road cycling hobby thanks to a few friends.

One of my friends asked me about the highs and lows of the year. After a bit of thinking, buying the house was definitely the high of the year. The low was likely not spending as much time as I usually would with family and friends, either up at the cottage or travelling around. Let’s get into a few of the highlights!

House

I bought a house, and moved in on the 5th of July! The landlord of the place I was renting previously is a seasoned real-estate agent, and he shared some of his marketing materials with me sometime last year. With lots of people buying places in the country instead of the city to gain more space, that tradeoff wasn’t particularly worth it to me. Over a year into the pandemic, I approached my landlord to ask if he’d be my agent. He agreed, and after a month or two of few listings going up due to the lockdown, one place in Centretown meeting my criteria eventually showed up. I viewed it, ended up falling in love with it, and put in an offer. As luck would have it, my offer was accepted!

The dining room after some fresh paint and furniture

Over the past few months, most of my free time has been going towards making cosmetic improvements to the place, running ethernet cables through the walls, deep cleaning everything, buying new furniture, and starting the never-ending decorating journey. I’m quite glad that all of the DIY skills and confidence I gained helping my family with their projects while growing up are helping me greatly now. Having extra time on my hands certainly helps as well!

I’m quite thankful for having an interior designer friend who’s been very handy in suggesting furniture pieces and paint colours. I definitely wouldn’t have as cool-looking a place without their help! Another great friend has also lent me a number of tools for the handful of jobs I’ve been doing around the place.

I’m most excited about entertaining and enjoying the house once I’m not in DIY mode all the time. This Christmas should be a blast hosting my family, and there are a few house parties I’d like to throw as things open up more. As long as I’m enjoying the space with others, I’ll be fulfilled.

The top floor deck with obligatory string lights

Career

Earlier this year I received a promotion to become a manager of managers! Yes, I’m in full Office Space-esque “What do you do here?” territory. Jokes aside, this has been incredibly exciting, as I’m now accountable for the people and product across a handful of teams. Very recently, many of my previous responsibilities were handed over to two fantastic people leads on my team. The conversations I’m having with them, some senior developers, and high-growth individuals are focused on growing their careers and the impact they bring to the team, which has been quite fulfilling.

On the product side of things, a recent trend has evolved from “build extendable features for the long term” to “build self-service, low-code features for the long term”. This is a neat paradigm shift, reflecting both the development teams’ growing maturity and the need to build the right knobs and switches into the system so the business can more easily change how it works.

For a number of months, I was short a people lead for one of the teams and took on the extra load of all the people management work until we hired a permanent replacement. I had something like 17 half-hour weekly one-on-ones between everyone who normally reports to me and every developer from the team without a people lead. This took a crazy amount of time out of my schedule, but I loved the chaos and leaned into it. It was a great test of my time management, prioritization, and delegation skills. Since I wasn’t able to be involved in each team’s day-to-day, I leaned heavily on the seniors of each team to take ownership over technical and product decisions. This worked out miraculously well, and was an amazing growth opportunity for those individuals to take on more ownership and make more decisions. Each team being mature enough not to require my day-to-day involvement was key to letting me focus on the more important people-management side of things.

Growing these teams also took precedence, as it periodically does every year. We grew the teams by several developers, hiring folks from Ireland, across Canada, and even the US. I still have to remind myself that we truly hire great people to work with, both professionally and socially.

Cycling

Happily cycling from the cottage to Bala in my Shopify-branded kit

Where to begin. One of the biggest forces pushing me out of my comfort zone to see just how much cycling I could endure was a great amount of healthy competition with some friends. When the weather got cold, indoor cycling started, and a number of cyclists from work came together to do some virtual group rides. Three of us wanted to go further and ended up cycling multiple times a week. After a number of months of watching our cycling strength and endurance increase, we signed up for some very tough challenges in our virtual cycling app of choice, Zwift. Those challenges were:

  • The PRL Full route, consisting of 175 km, 2281 m of climbing. It took 7 hours!!!

I have to pause here: going into this, we knew it would be pushing our limits, and then some. Our longest continuous rides to that point were about 4 hours. My cycling buddies and I were expecting this to be a 6-7 hour ride. Cycling for this long becomes quite the mind game, on top of the expected fatigue. As I shared above, we were able to finish it! I was seriously questioning why I enjoyed this whole cycling thing for a few days afterwards. To get over the pain and suffering of riding the same hilly route 11 times, I forced myself a few days later to ride it once more and get past my newfound loathing of it. It worked. I got over it. The best part was that every challenge after this one paled in comparison!

Otherwise, here are a few other notable achievements from all of the indoor cycling on Zwift:

From all of the cycling, the amount of power I could exert increased significantly from 2020 to 2021! Some quick number crunching shows a 40-60% improvement, which is mind-blowing!

2020 (lighter line) to 2021 (darker line) watts/kg power curve.

When the weather warmed up, there were a number of great adventures and achievements:

One of the funnest hills to climb on the Forks of the Credit ride, this switchback was beautiful to take in

I can’t wait to see what I’ll get up to next year cycling-wise!

What’s next?

Well, there’s a decent amount of travel I’m looking forward to over the next year. Some of it is already figured out, such as a handful of business trips to hang out with the teams, plus an unknown number of personal trips with friends and family that I’m most excited about.

Once the renovations and DIY around the house have settled down, figuring out what I’ll do with the other unit is on the list. Having a second source of income can only help set me up for the future.

Hopefully I’ll do some even bigger cycling trips, and get around to the bike camping I’d hoped to do this past summer. Buying a bike computer and power meter would help on these adventures, and with regular training too.

2022 is looking bright!

Binge-worthy podcasts discovered during the pandemic

Throughout 2020 I listened to hundreds, if not thousands, of hours’ worth of podcasts. Have I learned anything useful? Not really. Did it help keep me entertained through the pandemic? Hell yes.

This blog post complements another I wrote a number of years ago collecting my favourite podcasts across technology, entertainment, and software development. This one focuses more on the types of entertainment that are great for binging through a long road trip, some chores, or an escape from the day to day. Here are my reviews of the most noteworthy podcasts that have kept me busy over the last year.

Not Another D&D Podcast

Not Another D&D Podcast paints an incredible adventure through its hundreds of hours of episodes. Dungeon Master Brian Murphy is an expert at storytelling while balancing the randomness of the game of Dungeons and Dragons. His wide array of character voices complements the improv of players Jake Hurwitz, Emily Axford, and Caldwell Tanner. The allure of the podcast is building up an adventure the listener is highly invested in, while joking around enough to keep things light without derailing the story.

I hadn’t ever played a game of Dungeons and Dragons before, but got introduced to this podcast, and the idea of D&D, through a hilarious holiday special episode featuring Amir Blumenfeld. I’ve been hooked ever since, and have gone so far as to subscribe to the podcast’s Patreon, primarily for the again-hilarious post-episode commentary.

The Adventure Zone

The McElroy family puts this excellent role-playing show together. Brothers Justin, Travis, Griffin, and father Clint partake in a handful of sagas across Dungeons and Dragons and other role-playing games. I particularly find their D&D seasons more entertaining than the rest. Their gameplay takes a bit more of an absurdist comedy approach compared to Not Another D&D Podcast, but the storytelling and character building are still maintained.

Some of the other off-season games they’ve played haven’t been as interesting to me. D&D brings out more excitement and variety in the storytelling, keeping me on the edge of my seat, compared to the other games, which involve far fewer game mechanics and lean more on the story being told.

Black Box Down

Gus and Chris walk through the unbelievable chains of events behind many aviation incidents, leaving you with a new appreciation for the safety of the airline industry. Each episode follows the timeline of events up to disaster or rescue, then dives deep into the results of the multi-year investigations most of these incidents go through.

Gus is primarily the one driving the show, telling the story and introducing new information, while Chris adds the questions and commentary many of us layman listeners would wonder about. The show keeps you tuned in purely on what surprising or interesting new information will unfold in the current real-life story.

Some of my favourite episodes are the unbelievable fight between a hijacker and the flight crew aboard a FedEx flight, an interview with a plane crash survivor who believes they benefitted from the incident, and the recent episode on Malaysia Airlines Flight 370, which disappeared over the Indian Ocean.

Triforce!

This isn’t for everyone. Some poop jokes, a hint of political incorrectness, and banter about normal life from these three guys is surprisingly entertaining and definitely NSFW. Their day jobs are as streamers – people who play videogames for others to watch online. They convene weekly to catch up and make each other laugh over their mundane experiences, what’s latest in the news, or the games they play.

To add to the uniqueness, many of the early episodes feature Pyrion’s original short stories of Bodega, a gunslinger in a futuristic galaxy. Scoffee, short for space coffee, is this universe’s version of our own coffee. This, and a handful of other original words add to the fictional world. After enough interest, Pyrion wrote a Bodega novel to connect together many of the storylines originally read during the podcast. I haven’t read it yet, but should eventually pick up a copy for myself.

Whenever I’m in the mood for a good laugh I know I can revisit a couple favourited episodes, or choose a random one if I’m feeling lucky. Some of those favourites are the absurdity of being at kids parties (#25), Pyrion and his sketchy neighbour (#40), and imagining a new and very NSFW gameshow (#89).

Notable mentions from this year

Deep Cover

An FBI agent retelling their experiences of going undercover and taking down drug lords? ’nuff said.

The Orange Tree

Not your typical murder mystery, The Orange Tree chronicles the brutal murder of Jennifer Cave, a student at the University of Texas at Austin. The series is hosted by Haley Butler and Tinu Thomas, who both attended the same university where the murder happened a decade earlier. They kept hearing about the infamous Orange Tree apartment complex from friends and the murder being in the news, which ultimately led the two to produce this show.

The format of the series consists of interviews, retellings of news clips, court transcripts, and questioning. Each episode does a great job of leaving you wanting the next one, with a big reveal in its last few minutes.

Brainwashed

Revisionist History and Hardcore History

I’m no history enthusiast, but listening to the Genghis Khan series from the Hardcore History podcast helped change my view: history can be intriguing if told the right way. The same goes for a few episodes of Revisionist History covering Curtis LeMay, a prominent American Air Force general during World War II. I still have a vast number of episodes to get through from these two podcasts, but they will likely keep my interest for many hours.

That’s all for now

In the end, I wish there were more hours in the day to listen to more podcasts. Thankfully when I need a break from one, there’s another great podcast to start or pick back up.

ZFS snapshots in action

I recently had my laptop running Xubuntu 19.10 reach its end of life for security updates, so I needed to upgrade to a newer version of Xubuntu to continue receiving the important updates. Luckily, when I originally put Xubuntu 19.10 on this laptop, I installed the OS using ZFS as the filesystem – a feature new to the Ubuntu installer at the time. ZFS proved itself a great safety net when the upgrade to 20.04 failed midway through (the laptop abruptly turned off). But first, some background on ZFS snapshots.

ZFS Snapshots

One of the features not discussed in my previous article on ZFS is its powerful snapshotting. For any ZFS dataset (synonymous with a filesystem), a snapshot can be created to mark a moment in time for all data stored within the dataset. Snapshots can be created or removed at any time, and take up more storage space as files are added and removed after the snapshot was taken. With a snapshot created, it’s possible at any point in the future to roll back to it, or to read the data within it. Rolling back to a snapshot effectively erases anything that happened after that snapshot was taken. There are more advanced uses for snapshots covered in this great resource.
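As a quick sketch of that lifecycle, using a hypothetical pool and dataset named tank/home:

```
# Mark a moment in time for everything in the dataset
$ sudo zfs snapshot tank/home@before-cleanup

# Read data from the snapshot via the hidden .zfs directory
$ ls /tank/home/.zfs/snapshot/before-cleanup/

# Discard every change made since the snapshot
$ sudo zfs rollback tank/home@before-cleanup

# Or remove the snapshot once it's no longer needed
$ sudo zfs destroy tank/home@before-cleanup
```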

Right after I installed 19.10 a year ago, I created a snapshot marking the clean install of Xubuntu, in case I messed something up and needed to revert to a fresh install. I haven’t needed it yet. Next, I’ll walk through my experience upgrading to 20.04 and using ZFS snapshots.

Taking ZFS Snapshots

Xubuntu 19.10 recently stopped receiving security updates, and therefore I needed to upgrade. The 20.04 version is Ubuntu’s long-term support (LTS) release, which provides a number of years of support and security updates – far more than non-LTS releases such as 19.10. Going into the upgrade, I made sure to snapshot all of the different datasets first. From the Ars Technica article referenced earlier, the following command takes a recursive snapshot of all datasets that are part of the rpool:

$ zfs snapshot -r rpool@2020-upgrade

No output means the command was successful. The following command then shows all of the datasets that were snapshotted in the pool named rpool. If you’re following along, this may look a bit different for you: Ubuntu’s installer creates many different datasets for different directories, and two pools – one named rpool and the other named bpool (not important for this article).

$ zfs list -rt snap rpool | grep 2020-upgrade
rpool@2020-upgrade                                                0B      -       96K  -
rpool/ROOT@2020-upgrade                                           0B      -       96K  -
rpool/ROOT/ubuntu_191r26@2020-upgrade                          1.98G      -     6.49G  -
rpool/ROOT/ubuntu_191r26/srv@2020-upgrade                         0B      -       96K  -
rpool/ROOT/ubuntu_191r26/usr@2020-upgrade                         0B      -       96K  -
rpool/ROOT/ubuntu_191r26/usr/local@2020-upgrade                  72K      -      112K  -
rpool/ROOT/ubuntu_191r26/var@2020-upgrade                         0B      -       96K  -
rpool/ROOT/ubuntu_191r26/var/games@2020-upgrade                   0B      -       96K  -
rpool/ROOT/ubuntu_191r26/var/lib@2020-upgrade                  35.8M      -     1.32G  -
rpool/ROOT/ubuntu_191r26/var/lib/AccountsService@2020-upgrade     0B      -       96K  -
rpool/ROOT/ubuntu_191r26/var/lib/NetworkManager@2020-upgrade    156K      -      284K  -
rpool/ROOT/ubuntu_191r26/var/lib/apt@2020-upgrade              6.41M      -     88.6M  -
rpool/ROOT/ubuntu_191r26/var/lib/dpkg@2020-upgrade             18.8M      -     40.8M  -
rpool/ROOT/ubuntu_191r26/var/log@2020-upgrade                  23.0M      -     1011M  -
rpool/ROOT/ubuntu_191r26/var/mail@2020-upgrade                    0B      -       96K  -
rpool/ROOT/ubuntu_191r26/var/snap@2020-upgrade                    8K      -      160K  -
rpool/ROOT/ubuntu_191r26/var/spool@2020-upgrade                  72K      -      112K  -
rpool/ROOT/ubuntu_191r26/var/www@2020-upgrade                     0B      -       96K  -
rpool/USERDATA@2020-upgrade                                       0B      -       96K  -
rpool/USERDATA/jon_191r26@2020-upgrade                          396M      -     17.8G  -
rpool/USERDATA/root_191r26@2020-upgrade                         404K      -     1.87M  -

Now that a snapshot was created, I could make any change to the system and still be able to roll back, undoing everything that happened after the snapshot was taken.

Upgrading to 20.04

To perform the OS upgrade to 20.04, I entered sudo do-release-upgrade, initiating the upgrade. Things were progressing well until the laptop’s battery unexpectedly ran out. After plugging in the power and starting the laptop back up, the login screen wasn’t showing. Great. Thankfully, the little-known virtual tty consoles are only a keyboard combo away (Ctrl+Alt+F3, for example) for cases where you need a terminal but can’t use the graphical session.

With a terminal on the laptop, poking around showed that the upgrade was definitely interrupted midway through: only a handful of packages had been installed, and many more still needed to be installed and configured.

Instead of trying to manually fix the failed upgrade, why not roll back to the ZFS snapshot taken just before the upgrade and restart from that fresh state? With the open terminal, the following commands rolled the system back to the state captured in the 2020-upgrade snapshot.

$ sudo zfs list -rt snap rpool | grep 2020-upgrade | awk '{print $1}' | xargs -I% sudo zfs rollback -r %
$ reboot now

Performing a reboot right after executing this series of commands makes sure that the system is properly initialized from the 2020-upgrade snapshot’s state.

To get a better idea of what the above series of commands does, refer to this Ars Technica article.
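Broken down: zfs list prints one snapshot per line, grep keeps only this batch of snapshots, awk '{print $1}' extracts the first column (the dataset@snapshot name), and xargs runs zfs rollback -r once per name. The awk and xargs stages can be tried safely with mocked input – here echo is prepended so the rollback commands are printed rather than executed:

```shell
# Feed two fake `zfs list` lines through the same awk/xargs stages
printf 'rpool@2020-upgrade  0B  -  96K  -\nrpool/ROOT@2020-upgrade  0B  -  96K  -\n' \
  | awk '{print $1}' \
  | xargs -I% echo sudo zfs rollback -r %
# prints:
# sudo zfs rollback -r rpool@2020-upgrade
# sudo zfs rollback -r rpool/ROOT@2020-upgrade
```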

And it Worked

After the reboot, the system came back up looking like it was exactly where the snapshot had been taken. I was able to proceed again with the upgrade to 20.04, this time leaving the laptop plugged in.

The safety net of ZFS snapshots proved itself during this experience. It can feel scary knowing that your data is on the line if things go wrong. Having a strong understanding of how ZFS and related systems work helped me get through this without making any unrecoverable mistakes. If you haven’t read it already, my previous article on ZFS takeaways includes many of the references I used to build a strong understanding of ZFS.

ZFS takeaways

ZFS is quite a feature-rich filesystem. If you’re managing a number of hard drives or SSDs, its benefits are numerous. For example, ZFS offers more powerful software-based RAID than typical hardware-based RAID. Snapshotting is a powerful feature for versioning and replicating data. Data-consistency checks are built in, automatically making sure all data stays readable over time. The pool of drives can grow or shrink transparently to the filesystem above it. These are only a few of the powerful features gained from using ZFS. With this power comes a fair amount of initial overhead in learning how ZFS works, though it’s worth it if you value flexibility and your data. Here are a number of resources and tips I found useful as I learned and used ZFS over the past few months.

For a general overview of ZFS and more of its benefits, see this great ZFS 101 article on Ars Technica.

Testing out ZFS by using raw files

Throughout the Ars Technica article above, the author uses files on the filesystem instead of physical devices to test out different pool configurations. This is very handy for building experience with the various zpool and zfs commands. For example, I used it to get a feel for the different types of vdevs and the zpool remove command. A quick example:

$ for n in {1..4}; do truncate -s 1G /tmp/$n.raw; done
$ ls -lh /tmp/*.raw
-rw-rw-r-- 1 jon jon 1.0G Dec 21 17:09 /tmp/1.raw
-rw-rw-r-- 1 jon jon 1.0G Dec 21 17:09 /tmp/2.raw
-rw-rw-r-- 1 jon jon 1.0G Dec 21 17:09 /tmp/3.raw
-rw-rw-r-- 1 jon jon 1.0G Dec 21 17:09 /tmp/4.raw
$ sudo zpool create test mirror /tmp/1.raw /tmp/2.raw mirror /tmp/3.raw /tmp/4.raw
$ zpool status test
  pool: test
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	test            ONLINE       0     0     0
	  mirror-0      ONLINE       0     0     0
	    /tmp/1.raw  ONLINE       0     0     0
	    /tmp/2.raw  ONLINE       0     0     0
	  mirror-1      ONLINE       0     0     0
	    /tmp/3.raw  ONLINE       0     0     0
	    /tmp/4.raw  ONLINE       0     0     0

errors: No known data errors
$ sudo zpool remove test mirror-0
$ zpool status test
  pool: test
 state: ONLINE
  scan: none requested
remove: Removal of vdev 0 copied 38.5K in 0h0m, completed on Mon Dec 21 17:39:09 2020
    96K memory used for removed device mappings
config:

	NAME            STATE     READ WRITE CKSUM
	test            ONLINE       0     0     0
	  mirror-1      ONLINE       0     0     0
	    /tmp/3.raw  ONLINE       0     0     0
	    /tmp/4.raw  ONLINE       0     0     0

errors: No known data errors
$ sudo zpool destroy test

Here I use truncate to create four 1 GB empty files, then create a zpool named test with two mirror vdevs using those four raw files. Then mirror-0 is removed, transparently moving its blocks over to mirror-1. The pool is finally destroyed.

Really understanding vdevs

Vdevs are a foundational part of using ZFS; knowing what each vdev type accomplishes, along with its strengths and weaknesses, helps build confidence in keeping your data safe. This Reddit post on the ZFS subreddit goes into detail about many of these considerations. Again, in advance of making changes to a production ZFS pool, dry-running the changes on a test pool can provide more confidence in the changes to be made.

Adding and removing disks

One of the newer features allows removing only certain types of vdevs from a pool via the zpool remove command. This Reddit post and its answers go into some of the different potential scenarios. I did some thorough testing with a test pool of raw files before making any changes to my production pool. The zpool manpages mention the following about what can and can’t be removed:

Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs. When the primary pool storage includes a top-level raidz vdev only hot spare, cache, and log devices can be removed.

I was amazed at being able to remove a vdev from a pool and have all data transparently moved over to the rest of the pool while the pool stayed online. One command moved terabytes of data and verified its integrity before removing the vdev.

Use different /dev/ references when creating pools

A small but important tip when adding devices to a production pool: stick to the device symlinks provided by /dev/disk/by-id/ or /dev/disk/by-path/, as they are less likely to change. Referencing drives directly, like /dev/sdc, is riskier since those identifiers can change whenever hardware is added or removed from the system. The OpenZFS docs provide a great rundown of why this is recommended.
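For example, a mirrored pool can be created against the stable symlinks directly (the device names below are made-up examples):

```
$ ls -l /dev/disk/by-id/ | grep ata
... ata-WDC_WD80EFAX-68KNBN0_VAG12345 -> ../../sda
... ata-WDC_WD80EFAX-68KNBN0_VAG67890 -> ../../sdb

$ sudo zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAG12345 \
    /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAG67890
```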

Other helpful resources

These were just a handful of my biggest takeaways from using ZFS over the past couple of months. A number of useful resources I found along the way can be found here:

Building a homelab – local DNS

This is a second post in a series of my experiences while Building a Homelab. The first post focusing on the history, hardware, and OS can be found here.

Having a number of networked devices at home presents some management overhead. You may find yourself asking, what was the IP address of that one laptop? or just getting plain old tired of looking at IP addresses. One method people often use to manage their network is assigning Domain Name System (DNS) names to their devices. Instead of constantly typing in 192.168.1.1, you could assign it the domain name router.home. Entering router.home into your browser then transparently brings you to the same webpage as 192.168.1.1. This works beyond the browser, too: SSH, FTP, and most other places where an IP address would normally be used can likely use the friendlier domain name instead.

So how can this be done? It’s actually quite simple, given you have an always-on computer on the same network as the rest of your devices, a router with DNS-serving capabilities, or even a DNS provider such as Cloudflare. This article focuses on the DIY solution of running a DNS server on an always-on computer.

Before we get to how to set this up, let’s first explain what DNS is and how it works. Feel free to skip over this section if you’re already knowledgeable.

What is DNS?

DNS is a technology used to translate human-friendly domain names into IP addresses. For example, we can ask a DNS server: what is the IP address for the domain google.com? The DNS server would then respond with google.com’s IP address: 172.217.1.174. DNS is used for almost every request your computer, phone, smart lightbulbs, and more make when communicating with the internet.

Anyone who runs a website is using DNS whether they know it or not. The basic premise is that each domain name (e.g. mysite.com) has a DNS record which points to an IP address. The IP address identifies the actual computer on the internet that traffic for mysite.com will be sent to.

An example of DNS being used can be for jonsimpson.ca. This site is hosted on a server that I pay for at DigitalOcean. That server has an IP address of 1.2.3.4 (a fictitious example). I use Cloudflare as the DNS provider for jonsimpson.ca. Anytime a user’s browser wants to go to jonsimpson.ca, it uses DNS to figure out that jonsimpson.ca is located at 1.2.3.4, then the user’s browser opens up a connection with the server at 1.2.3.4 to load this site.

This is quite a simplified definition of DNS as the system is distributed across the world, hierarchical, and involves hundreds of thousands, if not millions, of different entities. Cloudflare provides a more detailed explanation as to how DNS works, and Wikipedia has comprehensive coverage of multiple concerns relating to DNS. But what was explained earlier will provide enough context for this article.

Running a local DNS server

If there’s an always-on computer – whether a spare machine or a Raspberry Pi – a DNS server can run on it and provide DNS for the local network. Dnsmasq is a lightweight but powerful DNS server that has been around for a long time. Many hobbyists use Dnsmasq for their home environments since it’s quite simple to configure and get going; one minimal text file is all that’s needed for a functional DNS service.

I chose to run Dnsmasq on my always-on server in a Docker container. When configuring Dnsmasq, I added a line for each device I wanted to name, mapping its IP address to the name I wanted to give it. For example, my router at 192.168.1.1 was assigned router.home.mysite.com, and my server at 192.168.1.2 was assigned server.home.mysite.com.

I then configured my router’s DHCP to tell all clients to use the DNS provided by the server (contact 192.168.1.2 for DNS), and configured some manually-networked devices to explicitly use it as well. Now on all of my devices I can type server.home.mysite.com anywhere I would type 192.168.1.2 – so much nicer than typing in an entire IP address.

nslookup and dig are both common command-line tools for querying the Domain Name System. They’re often already available on Linux and Unix operating systems, or are a straightforward install away, and can help with inspecting and debugging DNS setups. Here’s an example query using nslookup to find google.com:

$ nslookup google.com
Server:          192.168.1.2
Address:        192.168.1.2#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.1.174

The first Server and Address denote the DNS server used to answer the query – in this case, the Dnsmasq server running on my home server. The Name and Address at the bottom are the answer we’re interested in: 172.217.1.174 is the IP address I get whenever I go to google.com.
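The same question can be asked with dig, pointing it explicitly at the local Dnsmasq server; the +short flag trims the output down to just the answer:

```
$ dig @192.168.1.2 google.com +short
172.217.1.174
```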

The configuration

I use Docker as a way to simplify the configuration and running of different services. Specifically, I use docker-compose to define the Dnsmasq Docker image to use, which ports should be opened, and where to find its configuration. Here’s the docker-compose.yml file I use:
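It looks something along these lines – a sketch reconstructed from the description below, so exact values may differ:

```yaml
version: "3"

services:
  dns:
    image: strm/dnsmasq
    volumes:
      # Map the config kept next to this file into the container, so the
      # container can be recreated at any time with the same configuration
      - ./config/dnsmasq.conf:/etc/dnsmasq.conf
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    network_mode: host
    cap_add:
      - NET_ADMIN
    restart: always
```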

The docker-compose file defines one dns service using the base image strm/dnsmasq, as it’s one of the more popular Dnsmasq images available on hub.docker.com. The volumes option maps a config file located alongside the docker-compose.yml file at config/dnsmasq.conf into the container’s filesystem at /etc/dnsmasq.conf, allowing the container to be recreated at any time while keeping the same configuration. Networking-wise, TCP and UDP port 53 are exposed (yes, DNS sometimes operates over TCP). network_mode is set to the host’s network (Dnsmasq just doesn’t work without this), and the NET_ADMIN capability is added so that the privileged port below 1024 can be used. The last option, restart (one of my favourite features of docker-compose), keeps the container running even when the host reboots or the container dies.

All of these docker-compose.yml options can be understood in more detail in Docker’s reference docs.

More importantly, here’s the dnsmasq.conf file I use to actually configure Dnsmasq’s DNS capabilities:
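In sketch form, assuming the example interface, upstream servers, and address mappings discussed below (the real file carries more options):

```
# Dnsmasq should only serve DNS; the router handles DHCP
no-dhcp-interface=eno1

# Upstream DNS servers for records Dnsmasq doesn't know about
server=8.8.8.8
server=1.1.1.1

# Query all upstream servers simultaneously; fastest answer wins
all-servers

# Local DNS mappings; subdomains of these names resolve to the same IP
address=/router.home.mysite.com/192.168.1.1
address=/server.home.mysite.com/192.168.1.2
```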

A lot of these settings were based off of the following blog post. Many of the options can be looked up in the official documentation, so I’ll focus on the ones relevant to this article.

I have my Ubiquiti router handle DHCP for my network, so no-dhcp-interface=eno1 is set here to stop Dnsmasq from providing any DHCP services to the local network; eno1 is the interface my server uses to connect to the network.

When Dnsmasq needs to find a DNS record it doesn’t know about, it sends the request to an upstream DNS server. The server option is used for this, and can be specified multiple times for redundancy in case one of the upstream servers is down; I’ve specified both the Google and Cloudflare DNS servers. In addition, the all-servers option makes Dnsmasq query all defined server entries simultaneously and use whichever responds first, resulting in a net-faster response to the DNS query.

The most important part of this dnsmasq.conf are the last lines in the file, which start with address=. These are Dnsmasq’s way of declaring DNS mappings. For example, any device on my network performing a request for server.home.mysite.com will have 192.168.1.2 returned.

The really cool thing with these address= entries is that subdomains of any of these records return the same IP, unless declared explicitly otherwise. For example, blog.apps.site.jonsimpson.ca doesn’t exist in the configuration file, but performing a DNS request for it will return 192.168.1.2. The effect is that “multiple services” can each have their own domain name while all being served by the same IP address.

Conclusion

Hopefully this article gives some background on what DNS is, how it can be useful in a home environment, and how to set up and operate a Dnsmasq DNS server. A future post will build on the DNS functionality set up here to provide multiple HTTP services, each on its own domain name, all served by the same server for the home network to use.